Mini Case Studies
Using AI to Improve Learning Design & Development
I use AI as a design partner, not a replacement, to accelerate development, improve clarity, and create more engaging, learner-centered experiences. My approach combines AI efficiency with instructional design expertise and human judgment, ensuring that learning solutions are both scalable and grounded in real-world application.
A critical part of this process is strong collaboration with subject matter experts. Building relationships and drawing out their real-world insights ensures that AI-supported outputs remain accurate, relevant, and meaningful for learners.
In addition to the examples highlighted here, I’ve used AI to write learning objectives from existing content, translate complex legal content into plain language, draft assessment questions, build course evaluation tools, support performance consulting, and synthesize content into clear, actionable takeaways.
Case Study 1: Turning SME Conversations into Scenarios
Problem: While updating a large, complex program, I regularly partnered with subject matter experts through working sessions, clarification meetings, and storytelling conversations. I’ve found that strong relationships with SMEs are critical--those conversations are where the most valuable, real-world insights surface.
SMEs shared rich examples, edge cases, and best practices verbally, but translating that into scalable, structured learning content, especially scenario-based training, was time-intensive and difficult to standardize.
AI Approach: I used AI as a synthesis tool to support this process. After SME sessions, I leveraged meeting transcripts to identify recurring themes, extract strong examples, and find patterns in decision-making and best practices.
From there, I used AI to generate draft scenarios and situational decision points based on those real conversations, which I could then refine and align to specific learning objectives. This allowed me to move more efficiently from raw discussion to structured, scenario-based learning.
My Role: I led the end-to-end process--facilitating conversations with SMEs, asking targeted questions to draw out meaningful examples, and building trust so they felt comfortable sharing real experiences.
I then interpreted and validated AI-generated outputs, ensuring accuracy, relevance, and alignment with both the content and the learner context. I shaped the final scenarios to reflect realistic decision points, appropriate complexity, and clear connections to job tasks.
Impact:
- Created richer, more realistic scenario-based learning grounded in real SME experience
- Reduced time spent manually synthesizing and structuring qualitative input by 50%
- Improved the authenticity and relevance of training by preserving the nuance of real-world situations
Case Study 2: AI-Generated Evaluation Tool
Problem: While designing a train-the-trainer session with limited seat time, I wanted learners to continue practicing beyond the live session. I built in a peer-based “homework” component where facilitators would practice delivering content and provide feedback to one another.
To support this, I needed a structured, easy-to-use rubric that would help participants consistently evaluate peer performance. The rubric also needed to serve a secondary purpose: building facilitator confidence in both giving and receiving feedback, an essential but often underdeveloped skill.
AI Approach: I used AI to accelerate the creation of the rubric framework based on the content and goals of the session. By feeding in key elements of the training and facilitation best practices, I generated a draft rubric with clear criteria and performance levels.
This provided a strong starting point that I could refine into a practical tool learners could use independently, even without facilitator oversight.
My Role: I defined the evaluation criteria based on the session's objectives and what effective facilitation looked like in practice. I refined the AI-generated rubric, ensuring scoring levels were clear, realistic, and aligned with delivery expectations set by the client.
I also structured the rubric to be intuitive and actionable, so participants could confidently use it to guide peer feedback and self-reflection outside of the formal training environment.
Impact:
- Created a standardized, easy-to-use evaluation tool for facilitator practice
- Enabled scalable, self-directed learning beyond the live session
- Improved the quality and consistency of peer feedback among facilitators
See a partial example of the evaluation tool below.
