
AI FAQs

We try to consider potential biases in AI-generated content. We need to understand the samples used to train these models. If a model has been trained in America, there is a chance its outputs may reflect biases arising from social norms that differ from those in the UK. How do we identify and address these?

We use two methods to monitor for bias. Firstly, we consider correlational data that looks at the consistency of pupil scores over time in comparison to scores derived entirely from human judging. Secondly, we ask teachers to make a small number of decisions so that we can monitor how closely the decisions made by teachers align with those made by the AI. Analysing these data allows us to monitor for bias.
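As an illustrative sketch only, the two checks described above could be quantified along the following lines. The column names (ai_score, human_score, teacher_decision, ai_decision) are hypothetical placeholders, not the actual data fields used.

```python
import pandas as pd

def correlation_with_human_judging(scores: pd.DataFrame) -> float:
    """Consistency of AI-derived pupil scores against scores from human judging."""
    # Pearson correlation between the two score columns
    return scores["ai_score"].corr(scores["human_score"])

def teacher_ai_agreement_rate(decisions: pd.DataFrame) -> float:
    """Share of sampled decisions where the teacher and the AI agree."""
    return (decisions["teacher_decision"] == decisions["ai_decision"]).mean()
```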

How will you be monitoring the AI to ensure the outputs are accurate? How will this be communicated to staff so that they take it into account when reviewing the results?

We have created an AI disagreements report so that staff can review the accuracy of the AI's decisions.

Updated on: 09/04/2025
