AI-enhanced custom tasks: overview
Our website allows any user to set up a free account and carry out a custom task using Comparative Judgement. A custom task is a bespoke assessment that uses your own task and follows your own calendar. It's the opposite of a national task, where we set the task and the calendar. We have now enhanced custom tasks with all of our new AI features. AI-enhanced custom tasks are not available to free account users. They are …

AI-enhanced custom tasks: set the prompt and criteria
In the settings box, you can enter your task title, any supporting information, and the criteria you would like the AI to use to assess the writing. Just paste the details into the relevant boxes. The AI will use the information in the criteria box to set the categories in the feedback reports. For more guidance on what to include in the prompt boxes, see here (https://help.nomoremarking.com/en…).

AI-enhanced custom tasks: 90% judging
Once you have uploaded your scans and set the judging criteria, go to the "Run judging session" section of your task and press the "Start Chloe Judging" button. The AI will then automatically complete 90% of your judgement quota. The 90% ratio is currently fixed and cannot be adjusted. The AI-human slider that you see on the judging page doesn't apply to …

AI-enhanced custom tasks: pricing
AI-enhanced custom tasks are charged on a per-submission basis: each student in each assessment requires one submission token, so, for example, 30 students completing two assessments would need 60 tokens. Each token provides 90% AI judging, full feedback reports, and AI transcription. You …

AI-enhanced custom tasks: feedback reports
When you set up an AI-enhanced custom task, you will get our full suite of new AI feedback reports. You can read more about what is available on our main AI feedback page here. The feedback categories in the student and teacher reports are automatically created by the AI based on the information you submitted in the prompt and criteria settings box (https://help.nomoremarking.com/en/article/ai-enhanced-custom-tasks-set-th…).

AI-enhanced custom tasks: how to set criteria
When considering using AI for judging student work, a common and valid concern arises: will the AI prioritize polished writing over factual accuracy? Could it reward "well-written nonsense" more than a less articulate but factually correct piece? The short answer is that a well-instructed AI can effectively evaluate both. The key is to provide the AI with clear, pedagogically sound assessment criteria. Here is some general advice based on this principle.

1. The AI Judges What You Tell It to …