AI in Education

We have some general recommendations that lecturers could consider when updating assessments.

Recommendations:
  1. All courses should include a section explaining the relationship between the learning outcomes and students' use of tools, including AI. For example:
    1. “Learning outcome 2 requires students to learn the fundamentals of programming, so using AI to avoid learning these foundations will lead to problems in following courses.”
  2. Course coordinators should consider assessments that do not become trivial given current AI capabilities. For example:
    1. Oral examination / presentation, either to verify understanding or as an assessment in its own right:
      1. Windowed binary search for a grade: prepare a list of 40 questions, each with an assessed difficulty level. Start by asking medium-difficulty questions, then move up or down based on the student's ability to answer. Only about 10 questions may need to be asked of an individual student, yet a grade can still be established.
      2. Validation of submission: students present a summary of their submission and answer questions to validate that they did the work. (This could yield a multiplier from 0 to 1 estimating their level of understanding of the submitted work.)
    2. Focus on assessing skills not easily replicated by AI tools:
      1. Set an in-course activity and ask students to apply theory to it, testing their ability to apply theory to concrete situations.
      2. Analysing or creating diagrams is currently challenging for text-based AI.
      3. Group work and collaboration.
      4. Testing details of specific cultural knowledge from a small community.
    3. Assessing process rather than product:
      1. Assess the process used to produce an output, potentially including the student's use of AI: which prompts were used, and how the response was edited or verified. “Prompt craft” is a name for this sort of activity.
      2. Give students an example of incorrect ChatGPT output, ask them to explain why it is incorrect, and ask them to verify the parts that are correct.
    4. Assess understanding of code/text rather than generation of code/text, focusing on evaluation rather than creation:
      1. Provide students with multiple potential answers to a question and ask them to rank them.
      2. Ask students to indicate how confident they are in the correctness of generated statements, focusing on their use of verified and independent sources of information.
      3. Set history-style source-analysis questions in which human-created and AI-generated content are both treated as potentially flawed and biased, and ask students to evaluate that potential bias.
    5. Ensuring assessment aligns with the learning outcomes
      1. Re-evaluate the learning outcomes and associated assessment tasks, and create tasks that test each outcome in light of how access to AI changes it.
    6. Asking for a personal position, with references
      1. AIs are generally developed to present a neutral position, so ask students to reflect on their personal experiences and individual belief systems.
      2. Ask students to reflect on their learning in the course and how it relates to other courses they have taken.
      3. Ask them for references to NZ-specific content and sources of information.
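The windowed binary search for a grade (recommendation 2.1.1) could be sketched as follows. This is a minimal illustration, not part of the original recommendations: the function name, the five-level difficulty scale, and the grading heuristic (average difficulty of correctly answered questions) are all assumptions an examiner would adapt to their own course.

```python
import random

def adaptive_oral_exam(questions_by_level, ask, levels=5, max_questions=10):
    """Windowed binary search over question difficulty.

    questions_by_level: dict mapping difficulty level (1..levels)
                        to a list of questions at that level.
    ask: callable taking a question and returning True (correct)
         or False (incorrect) -- in practice, the examiner's judgement.
    Returns (grade_estimate, history).
    """
    level = (levels + 1) // 2  # start at medium difficulty
    history = []
    for _ in range(max_questions):
        question = random.choice(questions_by_level[level])
        correct = ask(question)
        history.append((level, question, correct))
        # Move up after a correct answer, down after an incorrect one,
        # staying within the 1..levels window.
        if correct:
            level = min(levels, level + 1)
        else:
            level = max(1, level - 1)
    # Hypothetical grading heuristic: average difficulty of the
    # questions the student answered correctly.
    answered = [lvl for lvl, _, ok in history if ok]
    grade = sum(answered) / len(answered) if answered else 0
    return grade, history
```

For example, a student who answers every question correctly climbs quickly to the hardest level and stays there, while a weaker student settles around the level they can reliably answer, so only `max_questions` questions are needed rather than the full list of 40.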