AI Study Support
Support learners with GenAI-powered personalized feedback and remedial content suggestions based on their class performance.
Platform
Brightspace, a cloud-based learning management system (LMS) designed for both educational institutions and corporate training.

My Role
Led end-to-end design: defined the solution with stakeholders, conducted research, delivered the design, and defined the prompt strategy and AI–human oversight workflows.
Team
1 Product manager
1 Dev manager
1 ML expert
5 Developers
1 Principal designer (whom I reported to)
Timeline
Nov 2024 - Jan 2025
🗄 The Background
The Problem
Students tend to overestimate their quiz performance (a cognitive bias known as the Dunning–Kruger effect), leaving learning gaps unaddressed.
The Goal
Provide learners with reliable, clear AI feedback after quizzes — seamlessly integrated into their workflow — so they can identify mistakes and review key concepts immediately.

Sneak Peek of the Solution

1 - Instructor sets up the tool
Configures the feedback style, suggested content sources, and more
2 - Learner sees feedback after taking a quiz
The tool analyzes the learner's quiz data and provides personalized feedback, plus content suggestions to review.
3 - Human in the loop
Instructors preview AI responses and can edit or remove feedback before it reaches learners
✅ Outcome of the work

Overwhelmingly positive feedback

Beyond the feedback from clients and user-research participants, we also saw a substantial increase in Lumi package adoption after launching Study Support (exact figures can't be shared).
"I see a lot of positive comments from the teachers. We think this is very easy-to-use and easy to activate for new quiz... an excellent feature that what we want... we think it will be very capable to do that."
Client A
From existing Brightspace Lumi user
"I really like this idea actually. I don't use quizzes a ton, but when I do use it its usually early on to make sure student understand the topic of what we will discuss in class... so what you do here aligns well with things that I already do, but much nicer."
Participant 6
UX research
"I love this feature. I often give quizzes as intermediate check before a mid-term. I'll test them on that content in the mid-term...
This will give them great feedback on where they need to study more."
Participant 3
UX research
"when I write the mock exam for the students, I write out response that says if they got this wrong then suggest which module or lesson that they can go back to review, I think this is a good way of closing the loop so it doesn't feel so negative about things."
Participant 8
UX research
🗄 The process
As the project is still in development, I can't share the design details.

Contact me if you want to learn more
🔬 Research - Uncovering the real learning problem
We conducted secondary research (literature review focused on learning science) to pinpoint the root issue.

Key finding: learners frequently rate their performance higher than reality, creating unseen knowledge gaps. This confirmed overconfidence as our design focus.
🙍 Understanding user needs
As a B2B enterprise platform, Brightspace serves two distinct user groups:
- Business users – educators and administrators who configure and manage the tool
- End users – learners who receive personalized feedback
To ensure the solution works across this dual audience, we also segmented learners into two key types: academic and corporate learners.
📍 Scoping for impact
After analyzing the primary learning journey, we scoped the design to the post-quiz moment — the prime time for feedback.

Studies show learners who receive immediate feedback correct mistakes faster and achieve higher scores. By surfacing the AI panel right after quiz submission, we deliver help when it’s most useful, without disrupting the existing workflow.
⚙️ Mapping system constraints
I audited Brightspace’s quiz workflow, settings, and technical limits to guide a feasible design.
Working with engineers, we documented constraints to ensure our solution fits the LMS architecture. This planning made our design scalable and implementable.
✨ Defining a reliable AI solution
AI outputs are a black box. Without control, they risk inaccuracy or misalignment with course goals.

How we ensured reliability and control:
- Outcome-aligned prompts: Guided AI feedback using course learning objectives.
- Human oversight: Instructors set boundaries, preview AI responses, and edit or remove feedback.
- Labeled data: Structured content around learning outcomes to help AI link mistakes to relevant material.
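To make the "outcome-aligned prompts" idea concrete, here is a minimal sketch of how such a prompt could be assembled from course learning objectives and missed quiz questions. The function, field names, and wording are illustrative assumptions, not Brightspace's actual implementation.

```python
# Hypothetical sketch of an outcome-aligned prompt builder.
# All names and prompt wording here are illustrative, not the real system.

def build_feedback_prompt(objectives, missed_questions, tone="encouraging"):
    """Assemble a feedback prompt grounded in course learning objectives."""
    objective_lines = "\n".join(f"- {o}" for o in objectives)
    question_lines = "\n".join(
        f"- Q: {q['question']} (maps to objective: {q['objective']})"
        for q in missed_questions
    )
    return (
        f"You are a study assistant. Give feedback in an {tone} tone.\n\n"
        "Course learning objectives:\n"
        f"{objective_lines}\n\n"
        "Questions the learner answered incorrectly:\n"
        f"{question_lines}\n\n"
        "For each missed question, explain the likely misconception and "
        "point the learner to the objective they should review. Do not "
        "reveal answers to questions not listed here."
    )

# Example call with made-up course data
prompt = build_feedback_prompt(
    objectives=["Explain photosynthesis", "Describe cellular respiration"],
    missed_questions=[{"question": "What does chlorophyll absorb?",
                       "objective": "Explain photosynthesis"}],
)
```

Structuring the prompt this way is one possible mechanism for the human-oversight point above: the objectives and tone are instructor-controlled inputs rather than free-form model behavior.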
📑 Testing and tuning AI output
I collaborated with engineers to test AI feedback at scale using real quiz questions, refining the prompt over six iterations.
I ranked all conditions by how strongly they affected the output and handled each use case separately.

This tuning process ensured the AI suggestions stayed high-quality and useful as the system scaled.
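The tuning loop described above can be sketched as a tiny evaluation harness: run each prompt variant against a batch of quiz cases, score the outputs, and rank the variants. The scoring checks and function names here are hypothetical stand-ins, not the team's actual rubric.

```python
# Illustrative prompt-tuning harness; the scoring criteria are assumptions.

def score_output(output, required_terms, max_words=120):
    """Score one AI response: key-term coverage plus a length budget."""
    coverage = sum(term.lower() in output.lower() for term in required_terms)
    within_budget = len(output.split()) <= max_words
    return coverage + (1 if within_budget else 0)

def rank_variants(variants, test_cases, generate):
    """Rank prompt variants by total score across all test cases."""
    totals = []
    for name, template in variants.items():
        total = sum(
            score_output(generate(template, case), case["required_terms"])
            for case in test_cases
        )
        totals.append((name, total))
    return sorted(totals, key=lambda t: t[1], reverse=True)

# Demo with a stub generator (a real run would call the LLM instead)
def fake_generate(template, case):
    return ("Photosynthesis converts light energy." if template == "concise"
            else "word " * 200)

ranking = rank_variants(
    variants={"concise": "concise", "verbose": "verbose"},
    test_cases=[{"required_terms": ["photosynthesis"]}],
    generate=fake_generate,
)
```

A harness like this makes the "rank conditions by impact" step repeatable: each iteration of the prompt can be re-scored against the same question bank before shipping.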
🎨 Designing the interface
To integrate GenAI naturally into Brightspace, we tackled several design challenges:
- Where should the feedback appear within the quiz workflow?
- How should we visually signal AI involvement without disrupting the UI?
- How do we build on top of existing components to add an “AI flavor” without overwhelming the user?
🤖 AI Prompt Design

How did we approach AI prompt design?

Each member of the product triad brings a unique perspective to the table:
🎤 Designer (User and Pedagogy perspective)
What kind of output is BEST for the user?
What is most optimal for educational use?
🛠️ Prompt Engineer (Tech perspective)
How to write an effective prompt?
💰 Product manager (Business perspective)
Which solution is most cost-friendly?

Introducing a user-centred approach to prompt design

The designer (myself) considered user needs, intentions, pedagogy, and ethics, and provided a guideline to the machine learning experts describing the ideal result for each scenario.

As results were generated, I worked closely with the team to evaluate whether the output met our intent, and then iterated repeatedly until it did.
🎙️
The process was very well received
I also shared my experience with the entire design team and at INFUSION, our company-wide annual conference.
Beyond project work

🎤 Becoming the voice of AI design

As a designer on the AI team, I am grateful to have first-hand experience working with this new technology, and now I am sharing my knowledge with others.
Some things I am doing:
📋  Helping to establish a Design x AI approach
💡  Introducing new practices and AI tools to the team
👩🏻  Mentoring other designers on prompt design
🎤  Advocating for designers’ involvement (during the company-wide conference)