If AI can now produce competent answers in seconds, what exactly are we assessing in our degrees? AI is already embedded in how students learn, think, and produce work. So the question is no longer about its use; the real question is whether assessment is designed to treat AI as a liability to be controlled or as a resource to be used well.

AI-integrated assessment does not mean looking the other way when students use AI. It means designing tasks where AI use is expected, visible, and evaluated. The shift is subtle but fundamental: from policing outputs to assessing judgment. Several practical design principles follow.

First, assess decisions rather than artefacts. In an AI-rich environment, polished outputs are cheap. What remains scarce is the ability to frame problems well, choose appropriate tools, test assumptions, and decide when not to trust an AI response. Assessment can require students to justify how AI was used, why particular prompts were chosen, and how outputs were validated against disciplinary knowledge.

Second, make process evidence assessable. Short AI logs, annotated iterations, or structured commentaries can document how thinking evolved through interaction with AI. This is forensic reasoning about choices made, alternatives rejected, and risks managed. Used well, it turns AI from a shortcut into a cognitive amplifier.

Third, build in authentic constraints. In professional settings, AI is used within limits, including ethical rules, organisational policies, incomplete data, and reputational risk. Assessment can simulate these conditions through ambiguous briefs, imperfect datasets, or explicit governance boundaries. Students are evaluated on how they navigate trade-offs, not on how elegant the final output appears.

Fourth, reintroduce dialogue selectively. Recorded walkthroughs or live critiques allow students to explain how AI shaped their reasoning. The purpose is not detection but sense-checking judgment. Weak understanding surfaces quickly when students must articulate why they trusted or rejected an AI-generated insight.

Finally, reward responsible AI use explicitly. Rubrics should recognise transparency, validation, ethical awareness, and the integration of AI output with human judgment. When expectations are clear, students learn how to use AI well rather than how to hide it.

This approach develops genuinely transferable skills such as judgment under uncertainty, learning agility, ethical reasoning, and accountability. It prepares students for workplace realities where AI is normal, governed, and consequential. And it fosters better feedback and stronger academic relationships by shifting conversations from suspicion to reasoned discussion.

The irony is that AI-integrated assessment is not easier. It is harder. It raises the bar. We need to shift our thinking from compliance to using assessment to develop graduates who know not only how to use AI, but also when, why, and to what effect.
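To make the process-evidence principle in the post above tangible, here is a minimal sketch, in Python, of what a structured AI-use log entry could capture. The format and every field name are assumptions (not taken from the post), chosen to mirror the justifications it calls for: how AI was used, why a prompt was chosen, and how the output was validated.

```python
# Hypothetical structure for a student's AI-use log entry.
# Field names are illustrative assumptions, mirroring the post's asks:
# how AI was used, why the prompt was chosen, how output was validated.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AIUseLogEntry:
    timestamp: datetime
    tool: str               # e.g. "ChatGPT", "Copilot"
    purpose: str            # what the student wanted the AI to do
    prompt_rationale: str   # why this prompt, and what alternatives were rejected
    output_summary: str     # what the AI produced, in the student's own words
    validation: str         # how the output was checked against disciplinary knowledge
    decision: str           # "accepted", "revised", or "rejected", and why

entry = AIUseLogEntry(
    timestamp=datetime(2025, 3, 4, 14, 30),
    tool="ChatGPT",
    purpose="Generate candidate counterarguments for my literature review",
    prompt_rationale="Asked for counterarguments rather than a summary, to test my thesis",
    output_summary="Three objections; two plausible, one based on a misread source",
    validation="Checked each objection against the original papers it cited",
    decision="revised: kept two objections, rejected the third with a note explaining why",
)
```

A handful of such entries per assignment would give markers exactly the forensic record of choices made, alternatives rejected, and risks managed that the post describes.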
Authentic Assessment Practices
Summary
Authentic assessment practices focus on evaluating students' abilities through real-world tasks, creative projects, and reasoning, rather than simply testing memorization or recall. These approaches help students demonstrate practical knowledge, critical thinking, and judgment—skills essential for success in the workplace and everyday life.
- Design real scenarios: Use case studies, industry challenges, or context-rich assignments that encourage students to apply what they've learned to solve practical problems.
- Prioritize process evidence: Ask students to document their decisions, explain their reasoning, and reflect on constraints or trade-offs, making their learning journey visible.
- Embrace diverse formats: Incorporate oral presentations, group work, and creative outputs like videos or portfolios to capture a broader range of skills and engage students with authentic tasks.
Strategies for AI-Resilient Assessments (AI & Assessments)

Over time, through professional development, collaboration, and reflection, I have been exploring what it truly means to design AI-resilient assessments: those that prioritize authentic learning, creativity, and human judgment. Through this exploration, I have identified a set of practical strategies that help ensure assessments remain meaningful and resistant to overreliance on AI tools. Here's a list of these strategies:

💎 Case-Based Analysis: Provide students with unique, context-rich scenarios that require them to apply course concepts, analyze data, and propose tailored solutions.
💎 Personalized Reflections: Invite students to connect theoretical concepts to their own lived experiences, learning journeys, or local contexts, aspects that AI cannot authentically replicate.
💎 Project-Based Assignments: Design multi-step projects that involve planning, iteration, and self-assessment across multiple drafts and revisions.
💎 Oral Presentations & Defenses: Require students to explain their reasoning verbally or respond to questions in real time, fostering live, authentic dialogue.
💎 Creative Products: Encourage students to produce multimedia, design, or creative outputs, such as prototypes, simulations, or artistic works, to demonstrate their understanding in diverse ways.
💎 Collaborative Work: Structure group activities that depend on negotiation, clear role assignment, and peer accountability to achieve shared goals.
💎 Portfolios of Work: Ask students to compile portfolios that document their growth over time through reflections, challenges, and learning milestones.
💎 Scenario-Based Problem Solving: Present open-ended or ethical dilemmas that require students to synthesize knowledge and engage in creative reasoning.
💎 Stepwise Problem Tasks: Require students to show the reasoning or calculations behind each step of their work, rather than only providing the final answer.
💎 Peer Teaching Assignments: Have students teach a concept, design instructional materials, or lead short lessons to deepen their understanding and mastery of the subject.

And here's the revision added to the list by Heliya Ahmadi a few days later:

💎 Futures-Oriented & Speculative Design Assignments: Engage students in future-oriented or speculative thinking exercises that challenge them to imagine emerging scenarios, critically evaluate the evolving role of AI, and explore new forms of agency, authorship, and ethical decision-making.

You can find the revised diagram under Heliya's comment in the comment section. 🤓🙏

Reflect & share: How are you rethinking your assessment designs in light of AI's growing presence in education?

#AIinEducation #AssessmentDesign #HigherEdInnovation #InstructionalDesign #TeachingWithAI #AuthenticAssessment #LearningDesign #FacultyDevelopment #EdTech #Pedagogy #AIResilience #FutureOfLearning #EducationInnovation #StudentEngagement #AIandTeaching #DigitalPedagogy
-
If we test for recall, we'll get memorization. If we test for understanding, we'll get reflection. If we test for problem-solving, we'll get creativity and initiative.

And yet, most of higher education still rewards what's easy to measure, not what's worth measuring. Our systems celebrate students who can reproduce information under pressure, rather than those who can apply knowledge in uncertain, real-world contexts.

It's time for assessment to evolve from memory validation to capability demonstration. That means:

+ Replacing high-stakes exams with project-based and portfolio-driven evaluations.
+ Integrating industry-validated assessments where professionals co-assess student work against workplace standards.
+ Treating assessment as a learning experience, not just an audit.

When students know they'll be evaluated on their ability to build, apply, and present, rather than just recall, their entire approach to learning changes. They start thinking more deeply, collaborating more intentionally, and taking ownership of outcomes.

This shift isn't about making education easier. It's about making it more authentic. The best graduates are not those who remember the most, but those who can do the most with what they've learned.

If our goal is to produce professionals who are ready on Day One, then assessment must finally match the outcome we claim to value.

#linkedin #education
-
This guide from the University of Melbourne discusses adapting assessment strategies in academic settings due to the challenges posed by AI-generated text, focusing on practical strategies for assessment design to ensure integrity and enhance learning. The authors suggest:

1. Shifting Emphasis from Assessing Product to Assessing Process: Encourages assessing the learning journey rather than just the end product. For example, using platforms like Cadmus to track and evaluate students' progress on assignments provides insights into their learning processes.
2. Incorporating Tasks that Require Evaluative Judgement: Involves tasks where students review or evaluate work against a set of criteria, fostering critical thinking. An example is peer review, where students assess each other's work and reflect on feedback received to improve their own submissions.
3. Designing Nested or Staged Assessments: Breaks down a large task into smaller, interconnected tasks, allowing for ongoing feedback and development (e.g. a semester-long project broken into stages, such as initial research, draft submission, and final presentation, with each stage building upon the previous one).
4. Diversifying Assessment Formats: Expands the types of assessment beyond traditional essays and reports to include videos, podcasts, and other multimedia formats. This approach can enhance creativity and cater to diverse learning styles. For instance, students might create a podcast discussing a topic or a video presentation summarising their research findings.
5. Incorporating More Authentic, Context-Specific, or Personal Assignments: Makes assessments more relevant to real-world scenarios or personal experiences, which can increase student engagement and reduce the temptation to misuse AI. An example could be analysing a local case study or applying theories to personal experiences relevant to the subject matter.
6. Including More In-Class and Group Assignments: Facilitates collaboration and learning from peers, while also making it harder for students to rely on AI tools. This might involve group discussions, projects, or in-class presentations on assigned topics.
7. Incorporating Oral Interviews to Test Understanding or Application of Knowledge: Requires students to verbally articulate their understanding or reasoning in response to prompts, making it difficult for AI to assist. Examples include scenario-based interviews or explaining procedures and safety protocols in practical subjects.

https://lnkd.in/g2t-dDCM
-
Follow-up post to answer "How?"

STEM / CTE Assessment Isn't About the Product — Here's What It Looks Like in Practice

In STEM and CTE, we often grade what students build. But the most meaningful assessment happens around the build. Here are real ways we assess thinking instead of the artifact:

🔹 Design Rationale Check (before building)
Students submit or explain: "This material was chosen because…" "We predicted this would fail if…"
→ Assessed: reasoning, use of content knowledge, planning — not success.

🔹 Testing Data Explanation (after testing)
Instead of "Did it work?" students answer: "Our data shows ___, which suggests ___ because ___."
→ Assessed: data interpretation, cause-and-effect thinking.

🔹 Constraint Reflection
Students identify: "The biggest constraint we faced was ___, so we decided to ___."
→ Assessed: problem framing, decision-making under limits.

🔹 Revision Without Rebuilding
Students respond: "If we had one more iteration, we would change ___ because ___."
→ Assessed: learning from failure, transfer of understanding.

🔹 Trade-Off Analysis
Students explain: "This solution improved ___ but reduced ___."
→ Assessed: systems thinking, no single right answer.

🔹 Peer Defense
Students defend a design choice to another team using evidence.
→ Assessed: communication, justification, professional practice.

A project can fail and still demonstrate high-level learning. A polished product with weak reasoning should not score high.

This is how learning becomes visible. This is how rigor becomes honest. This is how STEM and CTE reflect real work.

Assessment isn't about what students make. It's about what they understand and can explain.

#STEMeducation #CTE #AssessmentForLearning #ProjectBasedLearning #EngineeringDesign #AuthenticAssessment #STEMLeadership
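One way to read the closing claim above ("a polished product with weak reasoning should not score high") is as a weighting decision. Here is a minimal sketch, assuming hypothetical category names, weights, and a 0-4 scoring scale that are not from the original post, of a rubric where process evidence carries most of the weight, so the arithmetic enforces that claim.

```python
# Hypothetical weighted rubric: process evidence outweighs product polish.
# All categories, weights, and scores are illustrative assumptions,
# not taken from the original post.

RUBRIC_WEIGHTS = {
    "design_rationale": 0.20,      # reasoning behind material/approach choices
    "data_explanation": 0.20,      # interpretation of testing results
    "constraint_reflection": 0.15, # decision-making under limits
    "revision_plan": 0.15,         # "what we would change and why"
    "trade_off_analysis": 0.15,    # what improved vs. what was sacrificed
    "product_polish": 0.15,        # the artifact itself carries a minority share
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine 0-4 category scores into a single 0-4 grade."""
    assert abs(sum(RUBRIC_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(RUBRIC_WEIGHTS[c] * scores[c] for c in RUBRIC_WEIGHTS)

# A failed build with strong reasoning still scores well...
strong_thinking = weighted_score({
    "design_rationale": 4, "data_explanation": 4, "constraint_reflection": 4,
    "revision_plan": 4, "trade_off_analysis": 4, "product_polish": 1,
})
# ...while a polished product with weak reasoning does not.
weak_thinking = weighted_score({
    "design_rationale": 1, "data_explanation": 1, "constraint_reflection": 1,
    "revision_plan": 1, "trade_off_analysis": 1, "product_polish": 4,
})
print(f"strong thinking, failed build: {strong_thinking:.2f}")  # 3.55
print(f"weak thinking, polished build: {weak_thinking:.2f}")    # 1.45
```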
-
Around two decades ago, a South Indian state conducted a written test on ‘cleanliness consciousness’ and reported that most students had done very well in the test. However, schools continued to remain untidy and dirty.

This is just one stark example of a common problem with #assessment as we implement it: it is possible to ‘demonstrate competence’ even when you don’t have it inside of you. It is not about cheating or using unfair means. You could score well in an ethics exam and be unethical in life. You could do very well in science board exams and continue to be superstitious, non-logical and even anti-science in daily life. (We all know someone like this!) You could be a topper in literature or social sciences without being a sensitive person, able to see others’ point of view, or accepting of groups very different from your own.

That is because we have figured out what we have to present in order to ‘demonstrate competence’. We can acquire all this superficially from exam guides or textbooks or lectures, distil it into ‘important points’ and develop the skill to present these - without internalising or imbibing it in the real sense.

What is needed is to shift from asking students to demonstrate competence to ensuring they actually have it. For them to put into practice the principles of cleanliness and environmental conservation in the classroom, school, home and neighbourhood, on an ongoing basis (not just on the day when it is assessed!). To be tasked with working together to solve problems, and to practise daily behaviour that requires and reflects scientific thinking, creativity, openness to diversity, collaboration, and data-oriented reasoning - behaviour of a real practitioner or advocate of the theories learnt and content consumed.

What we are looking for is whether there is a shift or evolution in students’ world view and beliefs, in what has been internalised, and whether this reflects in their ordinary day-to-day behaviour and responses.

Ultimately, our assessment should reveal not merely that students can SHOW learning but that they actually HAVE it. Only then can our education claim to have succeeded.
-
Last week, a colleague asked: "How can I assess student writing when I don't know if they wrote it themselves?"

My response: "What if they defined the assessment criteria themselves?"

This semester, I've experimented with student-defined outcomes for major projects. Rather than providing a standard rubric, I've asked students to develop their own success criteria within broad learning goals. The results have transformed not just assessment, but the entire student relationship with AI tools.

Maya*, the student developing a denim brand market study, created assessment categories that included "market insight originality," "data visualization effectiveness," and "authentic brand voice development." These self-defined criteria became guiding principles – and completely changed her approach to using AI.

"I catch myself asking better questions now," she told me. "Instead of 'help me write this section,' I'm asking 'does this analysis seem original compared to standard market reports?'"

This highlights the "assessment ownership effect" – when students help create the criteria for quality, they develop internal standards that guide both their work and their AI interactions.

I've documented four key benefits of this co-created assessment approach:

- Metacognitive Development: Students must reflect on what constitutes quality.
- Intrinsic Motivation: Self-defined standards create stronger investment.
- Selective AI Usage: Students use AI more thoughtfully to meet specific quality dimensions.
- Authentic Evaluation: Discussions shift from "did you do this yourself?" to "does this meet our standards?"

When students merely follow teacher-defined rubrics, AI can become a tool for compliance. When they define quality themselves, AI becomes a thought partner in achieving standards they genuinely value.

Implementing this approach means starting with broader learning outcomes and then guiding students to define specific success indicators, as sketched below. It requires trust that students, when given responsibility, will often exceed our expectations.

What assignment might you reimagine by inviting students to co-create the assessment criteria?

*Name changed

#AssessmentInnovation #StudentAgency #AILiteracy #AuthenticLearning

Pragmatic AI Solutions Alfonso Mendoza Jr., M.Ed. Polina Sapunova Sabrina Ramonov 🍄 Thomas Hummel France Q. Hoang Pat Yongpradit Aman Kumar Mike Kentz Phillip Alcock
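The "start broad, let students define the specifics" workflow above can be sketched in code. Below is a minimal illustration (mine, not the author's tool; all goal names, criteria, and descriptors are hypothetical, loosely based on Maya's example) of capturing student-defined criteria and checking that each one maps back to an instructor-set learning goal.

```python
# Hypothetical sketch of co-created assessment criteria.
# Learning goals, criteria, and descriptors are illustrative only,
# loosely based on the "denim brand market study" example in the post.
from dataclasses import dataclass, field

# Instructor sets broad learning goals; students define the specifics.
LEARNING_GOALS = ["market analysis", "communication", "responsible AI use"]

@dataclass
class Criterion:
    name: str        # student-defined, e.g. "market insight originality"
    goal: str        # which broad instructor-set goal it serves
    descriptor: str  # what "meets standard" looks like, in the student's words

@dataclass
class StudentRubric:
    student: str
    criteria: list[Criterion] = field(default_factory=list)

    def validate(self) -> list[str]:
        """Flag criteria that do not map to any instructor-set goal."""
        return [c.name for c in self.criteria if c.goal not in LEARNING_GOALS]

rubric = StudentRubric("Maya", [
    Criterion("market insight originality", "market analysis",
              "analysis goes beyond standard market reports"),
    Criterion("authentic brand voice development", "communication",
              "copy sounds like the brand, not like a template"),
    Criterion("transparent AI assistance", "responsible AI use",
              "every AI-assisted section is labelled and validated"),
])
assert rubric.validate() == []  # every criterion maps to a broad goal
```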
-
I would like to share a second AI in Ed SoTL article entitled "Redesigning Assessment for the Generative AI Era: A Framework for Educators" by Khlaif et al. (2025) (https://lnkd.in/eAeV6BxJ).

Khlaif and colleagues offer a timely and practical rethinking of assessment practices grounded in educational integrity, learner agency, and AI fluency. Their work proposes a multidimensional framework designed to ensure that assessment continues to reflect meaningful learning even when AI is involved at every stage.

The authors argue that generative AI has fundamentally disrupted assessment by:
- Making traditional recall tasks obsolete
- Complicating academic integrity enforcement
- Blurring lines between student work and AI contribution
- Expanding students' access to instant feedback and explanations

Rather than focusing on catching misuse, Khlaif et al. advocate for:
- Authentic, process-driven assessments
- Metacognitive reflection on tool use
- Evaluation of student + AI co-production
- Assessment of higher-order thinking, not output alone

Four Key Dimensions
1) Pedagogical Dimension. Assessment must align with active learning, inquiry, critical thinking, and student-centered design.
2) Ethical Dimension. Includes transparency, academic honesty, consent, bias awareness, and AI literacy.
3) Technological Dimension. Focuses on tool selection, AI capability analysis, and appropriate use boundaries.
4) Assessment Dimension. Calls for redesigned methods including:
- performance-based tasks
- iterative submissions
- reflective writing
- multimodal evidence
- collaborative problem-solving
- AI-augmented portfolios

Educators are urged to:
- Require students to document how they used AI
- Compare drafts with and without AI assistance
- Integrate oral defense, peer review, and process documentation
- Blend human judgment with AI-supported analytics
- Incentivize learning, not just product creation

Rather than equating AI use with cheating, the authors propose a new definition: integrity means honestly representing the relationship between human and AI contributions. This shift reframes assessment in terms of transparency, reflection, and ethical agency.

Khlaif et al. make a compelling case that assessment, not content, is where AI will make the biggest impact on learning systems.

If assessment fails to evolve:
- learning outcomes become artificial
- grades become meaningless
- student agency weakens
- equity gaps worsen

If redesigned with AI in mind:
- creativity expands
- students build meta-AI literacy
- authentic learning becomes visible
- assessment becomes more human, not less

Reference
Khlaif, Z. N., Alkouk, W. A., Salama, N., & Abu Eideh, B. (2025). Redesigning assessments for AI-enhanced learning: A framework for educators in the generative AI era. Education Sciences, 15(2), 174.
-
🔥 Are Singapore’s assessments still training obedience over thinking? [https://lnkd.in/gxypJ3AZ]

We keep saying Singapore’s education system now nurtures critical thinkers, problem solvers, lifelong learners. But if you look closely at what we actually measure, an uncomfortable truth remains:

👉 Our assessments still reward compliance far more than original thinking.

🎓 In Schools & Universities (PET)
Students are encouraged to “think critically”, but graded on how well they reproduce model answers, align with marking rubrics, and avoid deviance. Even in Humanities or General Paper, “critical thinking” often becomes a format performance:
✅ Use the right structure
✅ Cite recognized examples
✅ Present a “balanced” argument
❌ But don’t challenge the premise too deeply

The result? Structured mimicry, not genuine intellectual risk-taking.

💼 In Adult & Workplace Learning (CET)
The same logic reappears - this time through paperwork. Our CET system, especially under WSQ, still reflects industrial-era priorities:
* predefined competencies
* rigid evidence requirements
* standardized checklists
* audit-first, learner-second processes

Trainers and assessors privately admit what the data shows: auditability often trumps authenticity. Adults are graded on procedure adherence, not judgment, creativity, or strategic thinking - because compliance is easier to justify during audits. This is how we end up with obedience upskilling: certificates without capability confidence.

🧠 A Compliance-Based Meritocracy
Across PET and CET, we’ve built an assessment culture where initiative is tolerated only within the lines. But in a world where AI can already produce rubric-fitting essays and solve predictable problems, human value now depends on non-obedient thinking - people who can frame novel problems, not just follow old templates.

Singapore’s next education leap won’t come from new subjects or new slogans. It must come from a radical redesign of assessment, shifting:
> from measuring correctness → to measuring judgment
> from standardized responses → to original reasoning
> from calibrating conformity → to rewarding courage

✅ So what would that redesign actually look like? Here are three systemic shifts we can realistically make:

1️⃣ Introduce 15–20% open-context, judgment-based tasks in all PET and CET programs within five years (These tasks assess reasoning quality, not format compliance.)
2️⃣ Move WSQ evidence collection toward portfolio-based authenticity, not checklist accumulation (Used internationally in vocational excellence systems.)
3️⃣ Create cross-institution assessment panels that evaluate real work, not paperwork (This reduces audit pressure on individual assessors and raises quality.)

These are not radical ideas - they are simply overdue.

💬 What would it take for Singapore to build an assessment system that actually rewards intellectual courage? If you’ve tried breaking the mold, I’d love to hear your real-world experience.
-
EMI assessment is where discussions about academic literacy and proficiency seem to always end up — and that’s not accidental.

In my British Council piece, I argue that an over-reliance on standardised English proficiency scores (e.g. IELTS/TOEFL) to judge EMI readiness tells us something limited at best and exclusionary at worst. This is because those scores capture general language ability, not the academic literacies students actually need to read, argue, write, and participate in disciplinary communities. Shifting assessment toward literacy, through diagnostic tasks, writing samples, authentic disciplinary practices, and formative feedback, helps us understand how students learn with language rather than simply whether they meet a cutoff score. And it opens up more equitable, inclusive possibilities for EMI.

I’d love to hear your thoughts! Should EMI assessment move beyond proficiency thresholds? How do you see academic literacy being assessed (or misassessed) in practice?

https://lnkd.in/e7XTMT2p