AI Study Support
In 2024, I joined D2L’s brand-new AI team with one mandate: “Improve learner achievement with GenAI.” With little AI experience and no direct access to students, we had to find clarity fast.
The result was Study Support: an AI-powered feature that turns quiz performance into personalized, instructor-aligned study guidance.
My Role
As the lead designer on this project, I led the end-to-end design of this feature in 3 months, established a new interaction pattern for AI features, and set guidelines for prompt design.
Team
1 Product manager
1 Dev manager
1 ML expert
5 Developers
1 Principal designer
Timeline
Nov 2024 - Jan 2025
The challenge that started everything
"Improve Learner Achievement using Gen AI Technology" and do it on double speed.
While everyone in ed-tech was racing to add AI features, most of our competitors like Canvas, Blackboard, and Moodle were building tools for instructors. Students? They were an afterthought.

We saw an opportunity, but there was a catch. Our team had minimal hands-on AI experience, and as a B2B company, we rarely had direct access to actual learners.
The ambiguity was overwhelming. Where do you even start when you're told to "use AI to help students" with no clear direction?
Navigating through the chaos
I brought clarity to the project's direction and scope through these key activities:
1. Uncovering the real issues that affect learner achievement through research
I knew we couldn't just build another AI feature and hope it stuck. We needed to understand what actually impacts student achievement. Without budget for user research, our research team turned to academic literature - hundreds of learning science papers that revealed fascinating insights.
One finding stopped us in our tracks:
📘 Dunning-Kruger Effect
Learners tend to be overconfident and have inaccurate judgements of their own knowledge.
2. Leveraging AI and the LMS’s unique strengths
While analyzing the competitive landscape, I realized something crucial.
Yes, there were tons of learner-facing AI tools out there, some so advanced that it would be hard for us to catch up.
But they were all missing something fundamental: context about the student's actual performance and the instructor's specific content.
I started mapping what AI does well against what our Learning Management System uniquely offers:
AI excels at analyzing vast amounts of information, understanding intent beyond keywords, and personalizing outputs. Meanwhile, our LMS houses all the instructor-created content, tracks every student interaction, and integrates with institutional learning outcomes.

The intersection was our sweet spot. We could build something no standalone AI tool could replicate: personalized feedback based on actual performance data, plus suggestions of instructor-validated content that helps learners close that knowledge gap.
3. Aligning scope with stakeholders to ensure realistic delivery
Before diving into solutions, I worked with stakeholders to establish clear boundaries. We couldn't rebuild the entire system in 3 months.
Leverage existing capabilities and tools (creation pages, display pages, list pages, etc.)  
Use the existing Daylight Design System's UI and interaction patterns, and align with existing AI patterns
Don't create a new Activity type, and don't tie the feature to the Activity Model (implementation still in progress)
The new feature needs to be compatible with all system settings and use cases of existing Activities.
4. Brainstorming workshop for quick and impactful ideas
Brightspace LMS is a huge platform. As a relatively new designer, I found it difficult to get familiar with all of our offerings in 3 months, and we risked missing valuable opportunities.

So we organized a brainstorming workshop with designers familiar with Brightspace's content-creation offerings and technical nuances to generate ideas focused on solving the learner-facing problems identified in our research.

By the end of the workshop, we had gathered 8 realistic, system-compatible ideas.
5. System analysis to identify the right launch point: the Quiz tool
As we were in a time crunch, instead of building an entirely new system, it was more reasonable to “stand on the shoulders of giants” and leverage existing tools and workflows.

To identify the most suitable place to support learners, I conducted a system analysis of all 10 activity types offered by the LMS and cross-referenced them against the core education workflow to find the best launch point for this feature.
We identified the Quizzing Tool as the best fit.
Why? It was the only auto-graded activity that generated immediate performance data. When a student completes a quiz, we instantly know what they understand and where they're struggling. That's the perfect moment to intervene with personalized support, right when they're most aware of their knowledge gaps.
I further analyzed the quiz tool by:
1 - Mapping instructor and student workflows and mental models.
2 - Examining quiz settings to identify technical constraints for AI study support.
3 - Identifying key touchpoints with other tools to enable AI nudges and transparency.
6. Crafting the solution: lots of exploration and iteration
It's important to know which parts of the design process to prioritize and which to deprioritize, especially when working under time pressure.

I focused on the ‘Understand’ and ‘Define’ phases over ‘Explore’ to build a solid foundation, allowing me to move quickly without unnecessary iteration.
Refined Goal
The fog had lifted, and our path was clear.
The Problem
Students tend to overestimate their quiz performance (a cognitive bias known as the Dunning–Kruger effect), leaving learning gaps unaddressed.
The Goal
Leverage AI and LMS data to help students accurately recognize their knowledge gaps and receive timely, personalized guidance to close them.
The solution
Study Support: AI-driven feedback and study material suggestions based on quiz performance
Instructor
Taking control of AI feedback
Instructors activate Study Support directly from the quiz edit page, where they can fine-tune exactly how the AI communicates with their students:

Define the feedback style and length that match their teaching approach
Everyone complains that AI feedback sounds "too AI". My solution was surprisingly simple: let instructors feed the AI their own writing samples. That's how the "feedback reference" capability was born (sketched below).
⭐️ Fun Fact
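To make the "feedback reference" idea concrete, here is a minimal sketch of how a feedback prompt could fold in the instructor's writing sample and style settings. The function and field names (build_feedback_prompt, writing_sample, tone, length) are illustrative assumptions, not D2L's production prompt or data model.

```python
# Hypothetical sketch only: names and wording are placeholders, not the shipped prompt.
def build_feedback_prompt(quiz_results: dict, style: dict) -> str:
    """Assemble a post-quiz feedback prompt that mirrors the instructor's voice."""
    performance = "\n".join(
        f"- Q{q['number']}: {'correct' if q['correct'] else 'incorrect'}"
        f" (outcome: {q['outcome']})"
        for q in quiz_results["questions"]
    )
    return (
        "You are generating post-quiz study feedback for a student.\n"
        "Match the tone and length of this instructor writing sample:\n"
        f"---\n{style['writing_sample']}\n---\n"
        f"Preferred tone: {style['tone']}. Target length: {style['length']}.\n"
        "Student performance by question:\n"
        f"{performance}\n"
        "Celebrate what went well, then explain gaps without discouraging the student."
    )
```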
Instructor
Suggesting remedial study materials
Instructors decide whether study materials should come exclusively from their course content or include external resources like YouTube videos and articles.

When pulling from course content, we use Learning Outcome alignment and semantic analysis to match struggling students with the most relevant materials for each question they missed.
In academic settings, accuracy is everything. That's why most K-12 or higher-ed instructors prefer course-only materials, as they've already vetted and validated every piece of content.
⭐️ Fun Fact
Learner
After completing a quiz, students receive personalized feedback that celebrates what they nailed and clearly explains where they need work.

They'll see carefully selected study materials (whether from the course or external sources) based on their specific performance gaps.
The original approach suggested one material per wrong answer, but that could easily overwhelm learners who made a lot of mistakes.
We later refined the logic to combine suggestions and prioritize them by relevance to Learning Outcomes (see the sketch below).
Now students get focused, actionable guidance instead of an avalanche of links.
⭐️ Fun Fact
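A rough sketch of that refinement, assuming each missed question produces (material, relevance) candidates already scored against the linked Learning Outcome; the cap of three items and the scoring scale are illustrative, not the shipped logic.

```python
from collections import defaultdict

def consolidate_suggestions(per_question_candidates, max_items=3):
    """Merge duplicate materials across missed questions, keep each material's
    best outcome-relevance score, and return only the top few."""
    best_score = defaultdict(float)
    for candidates in per_question_candidates:        # one list per missed question
        for material, relevance in candidates:        # (material, 0-1 relevance score)
            best_score[material] = max(best_score[material], relevance)
    ranked = sorted(best_score, key=best_score.get, reverse=True)
    return ranked[:max_items]

# Two missed questions pointing at overlapping materials collapse into one short list:
# consolidate_suggestions([[("Lesson 3", 0.9), ("Video A", 0.4)], [("Lesson 3", 0.7)]])
# -> ["Lesson 3", "Video A"]
```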
Instructor
Instructors have full transparency over the AI output. Every piece of AI-generated feedback appears on their quiz grading page, where they can review what their students received.

They can rate the quality of recommended materials, and these ratings train the system to make better suggestions over time.
We even built insight cards showing which content gets recommended most frequently and other actionable insights, all within the quiz context where instructors need it.
We debated whether instructors or learners should assess AI-generated output.
We chose instructors, since they know the content best and provide more credible feedback. Learner impact can instead be measured indirectly through subsequent quiz performance.
⭐️ Fun Fact
Challenge with AI Design
AI hallucinates. How do we ensure the reliability of AI-generated feedback?
Leverage Learning Outcomes as our north star
A learning outcome is a clear statement of what a learner is expected to know, do, or demonstrate after a learning experience. Within the LMS, instructors can connect Learning Outcome tags to assessments and course materials to track learning progress.
We leverage these tags to ensure the AI only suggests relevant, instructor-validated content for the assessments a learner underperformed on.

If no outcomes are linked (or too many are), we fall back to semantic analysis of the quiz and course content to surface the most relevant recommendations, as sketched below.
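A minimal sketch of that selection order, assuming questions and materials carry outcome tags and an embed() function returning vectors is supplied by the caller; the "too many outcomes" threshold and helper names are assumptions, not the actual pipeline.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return sum(x * y for x, y in zip(a, b)) / norm if norm else 0.0

def pick_materials(question, course_materials, embed, top_k=3, max_outcomes=5):
    """Prefer materials sharing the question's Learning Outcome tags;
    otherwise fall back to semantic similarity over the text."""
    outcomes = set(question["outcomes"])
    tagged = [m for m in course_materials if outcomes & set(m["outcomes"])]
    if 0 < len(outcomes) <= max_outcomes and tagged:
        return tagged[:top_k]                 # outcome alignment is the primary signal

    # No outcomes, or too many to be discriminating: rank by semantic similarity.
    q_vec = embed(question["text"])
    ranked = sorted(course_materials,
                    key=lambda m: cosine(q_vec, embed(m["text"])),
                    reverse=True)
    return ranked[:top_k]
```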
Only suggest external material from trusted sources
Instructors have full control over which sources to include or exclude.
Approved sources become the domain whitelist, and the AI will only pull material from those sources.
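As an illustration, the check can be as simple as matching each suggestion's host against the approved domains; the example domains below are placeholders, and the real list comes from the instructor's selection.

```python
from urllib.parse import urlparse

# Example instructor-approved sources; the actual list is whatever the instructor selects.
APPROVED_DOMAINS = {"youtube.com", "khanacademy.org"}

def is_allowed(url: str) -> bool:
    """Accept an external suggestion only if its host is on the approved-domain list."""
    host = urlparse(url).netloc.lower()
    host = host[4:] if host.startswith("www.") else host
    return host in APPROVED_DOMAINS or any(host.endswith("." + d) for d in APPROVED_DOMAINS)

def filter_external_materials(candidate_urls):
    """Drop any suggested link whose domain the instructor has not approved."""
    return [url for url in candidate_urls if is_allowed(url)]
```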
Responsible Evaluation
When we started the project, automated evaluation tools such as AWS Bedrock didn't exist yet, so we relied on manual testing and evaluation of the results.

I worked closely with the machine learning expert and engineers to set up evaluation guidelines and test outputs against different kinds of quizzes. We went through 7 rounds of iteration just to finalize the feedback prompt.
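The guideline itself was a plain rubric we scored by hand; the criteria and 1-5 scale below are a reconstruction for this write-up, not the exact internal document.

```python
# Reconstructed rubric for illustration; each criterion is rated 1-5 by a human reviewer.
EVALUATION_RUBRIC = {
    "accuracy":      "Feedback matches the student's actual quiz results, nothing hallucinated.",
    "tone":          "Encouraging, and consistent with the instructor's feedback reference.",
    "actionability": "Points to specific concepts or materials rather than generic advice.",
    "length":        "Stays within the instructor's chosen length setting.",
}

def rubric_score(ratings: dict) -> float:
    """Average a reviewer's 1-5 ratings across all rubric criteria for one quiz output."""
    missing = set(EVALUATION_RUBRIC) - set(ratings)
    if missing:
        raise ValueError(f"Rate every criterion; missing: {missing}")
    return sum(ratings.values()) / len(ratings)
```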
Humans always in the loop
Every AI touchpoint includes clear indicators and explanations. Instructors always know when AI is being used, how it works, and what students will see. Humans remain in control.
Prompt Design
I advocated for a user-centered, collaborative approach to prompt design that's now adopted across all teams.
One of my proudest contributions happened behind the scenes. When I joined, engineers handled all prompt design — writing, testing, orchestrating API calls.
Efficient? Yes. User-centered? Not quite.

I partnered with another designer to research best practices across teams, then proposed a new framework: prompt design as shared responsibility.

Product designers bring user context, engineers provide technical expertise, and product managers ensure business alignment. Together, we create more effective prompts.
This framework was so successful it was adopted company-wide and featured at D2L's annual INFUSION conference.
The Impact
The results spoke for themselves. Beyond the overwhelmingly positive feedback from clients and research participants, we saw substantial increases in our AI package adoption after launching Study Support.
"I see a lot of positive comments from the teachers. We think this is very easy-to-use and easy to activate for new quiz... an excellent feature that what we want... we think it will be very capable to do that."
Client A
An existing Brightspace Lumi user
"I really like this idea actually. I don't use quizzes a ton, but when I do use it its usually early on to make sure student understand the topic of what we will discuss in class... so what you do here aligns well with things that I already do, but much nicer."
Participant 6
UX research
"I love this feature. I often give quizzes as intermediate check before a mid-term. I'll test them on that content in the mid-term...
This will give them great feedback on where they need to study more."
Participant 3
UX research
"when I write the mock exam for the students, I write out response that says if they got this wrong then suggest which module or lesson that they can go back to review, I think this is a good way of closing the loop so it doesn't feel so negative about things."
Participant 8
UX research