Transforming an AI-powered interview preparation platform through comprehensive UX redesign—addressing critical usability gaps, reimagining human-AI interaction patterns, and achieving 4.43/5.0 user satisfaction across a diverse user base.
Interview Trainer is an AI-powered platform designed to help job seekers prepare for interviews through simulated practice sessions and personalized feedback. While the platform's AI capabilities were robust, users struggled with confusing navigation, unclear feedback mechanisms, and a lack of control over AI interactions. The experience felt opaque and overwhelming, leaving users uncertain about how to improve their interview skills effectively.
This project challenged me to bridge the gap between sophisticated AI technology and intuitive user experience—making the invisible visible and ensuring users feel empowered rather than automated.
🔒 This case study is shared with due care to protect confidential information.
3+ months (Nov 2024 - Present)
Comprehensive UX audit, information architecture redesign, AI interaction patterns, feedback mechanisms, accessibility improvements, and cross-functional collaboration with AI/ML teams.
Identify and resolve critical usability gaps through comprehensive UX audit
Redesign information architecture for intuitive navigation and task completion
Create transparent, controllable AI interaction patterns using human-in-the-loop workflows
Ensure accessibility compliance for inclusive user experience
Balance AI automation with user agency and explainability
Achieve measurable improvement in user satisfaction and platform effectiveness
UX Designer
Figma, Adobe Illustrator, Miro, MS Excel
Conducted comprehensive UX audit evaluating information architecture, navigation patterns, and accessibility
Provided strategic, design-centered recommendations addressing identified gaps
Redesigned AI feedback loops and interaction patterns with focus on transparency and user control
Designed human-in-the-loop workflows balancing automation with explainability
Created wireframes and high-fidelity prototypes for new features and improvements
Collaborated with AI engineers, data scientists, UI developers, and backend teams to ensure feasibility
Conducted usability testing and rapid validation sessions with 50+ users
Iterated designs based on user feedback and testing insights
Monitored user satisfaction metrics and informed continuous product improvements
Interview Trainer's AI engine could analyze responses and provide feedback, but users were struggling to navigate the platform, understand AI-generated insights, and feel confident in their preparation progress. The "black box" nature of AI interactions left users questioning the feedback they received, while navigation issues prevented them from efficiently accessing key features.
Critical issues included:
Fragmented information architecture making it difficult to find and complete tasks
Opaque AI feedback that didn't explain reasoning or offer clear improvement paths
Lack of user control over AI interactions and customization
Accessibility barriers preventing inclusive usage
Unclear progress tracking and skill development visibility
So, our goal was to...
Transform Interview Trainer from a feature-rich but confusing platform into an intuitive, transparent AI-powered coach that empowers users to prepare for interviews with confidence and clarity.
Here’s how we got there...
Deep-dive evaluation
Conducted systematic audit of the entire platform, analyzing user flows, information architecture, navigation patterns, AI interactions, and accessibility compliance.
Critical gap identification
Documented specific pain points across:
Information architecture and content organization
Navigation complexity and task completion efficiency
AI feedback clarity and actionability
User control and customization options
Accessibility barriers and WCAG compliance gaps
Strategic roadmap
Delivered prioritized, design-centered recommendations with clear rationale, implementation complexity assessment, and expected impact—providing the product team with an actionable improvement plan.
Usability Audit for MVP 1
Usability Audit for MVP 2
Homepage with improved navigation
Streamlined content organization
Restructured the platform's information hierarchy to align with users' mental models and interview preparation workflows—moving from feature-centric to task-centric organization.
Intuitive navigation patterns
Simplified navigation by reducing cognitive load, grouping related functionality, and creating clear pathways to essential features like practice sessions, feedback review, and progress tracking.
Reduced friction
Eliminated unnecessary steps and decision points, allowing users to start practice sessions and access feedback more efficiently.
Live Chat
Transparent feedback mechanisms
Redesigned AI feedback presentation to explain not just what needs improvement but why and how—making AI reasoning visible and actionable.
User control and customization
Implemented controls (sketched below) allowing users to:
Adjust AI feedback detail levels (summary vs. comprehensive)
Customize interview difficulty and focus areas
Request re-analysis or alternative feedback perspectives
Save and compare feedback across sessions
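As a concrete illustration, here is a minimal TypeScript sketch of how these controls could be modeled. The type names and the endpoint are hypothetical, not the platform's actual API.

```typescript
// Illustrative sketch only: all names and the endpoint are hypothetical.
type FeedbackDepth = "summary" | "comprehensive";

interface FeedbackPreferences {
  depth: FeedbackDepth;                 // summary vs. comprehensive detail
  difficulty: "junior" | "mid" | "senior";
  focusAreas: string[];                 // e.g. ["behavioral", "system design"]
}

interface SessionFeedback {
  sessionId: string;
  createdAt: string;                    // ISO timestamp
  scores: Record<string, number>;       // skill area -> 0..100
  notes: string[];
}

// Re-analysis re-runs the same session under different preferences;
// saved sessions stay comparable because they share one shape.
function requestReanalysis(
  sessionId: string,
  prefs: FeedbackPreferences
): Promise<SessionFeedback> {
  return fetch(`/api/sessions/${sessionId}/feedback`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(prefs),
  }).then((res) => res.json());
}
```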
Explainable AI patterns
Designed interaction patterns that surface AI confidence levels, reasoning chains, and data sources, building user trust through transparency rather than obscuring the AI's decision-making process.
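A simplified sketch of what an explainable feedback item might look like as a data structure; the field names are illustrative, not the production schema.

```typescript
// Hypothetical shape for one explainable feedback item.
interface ExplainableFeedback {
  finding: string;            // what needs improvement
  reasoning: string[];        // why: the observations behind the finding
  evidence: string[];         // data sources, e.g. transcript excerpts
  confidence: number;         // model confidence in [0, 1], shown to the user
  improvementSteps: string[]; // how: concrete practice recommendations
}

// Low-confidence findings are labeled rather than hidden, so users can
// weigh the AI's certainty instead of taking output at face value.
function confidenceLabel(item: ExplainableFeedback): string {
  if (item.confidence >= 0.8) return "High confidence";
  if (item.confidence >= 0.5) return "Moderate confidence";
  return "Low confidence (consider requesting re-analysis)";
}
```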
Progressive disclosure
Balanced automation efficiency with user agency by showing AI suggestions while allowing users to override, customize, or request alternative approaches.
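One hypothetical way to encode that pattern: every suggestion exposes a short summary by default, discloses detail only on request, and always accepts a user decision.

```typescript
// Illustrative names; not the actual component model.
type SuggestionAction = "accept" | "override" | "requestAlternative";

interface AiSuggestion {
  id: string;
  summary: string;   // always visible
  detail?: string;   // disclosed progressively, only when requested
}

function resolveSuggestion(
  suggestion: AiSuggestion,
  action: SuggestionAction,
  userAnswer?: string
): string {
  switch (action) {
    case "accept":
      return suggestion.summary;
    case "override":
      // The user's own answer takes precedence over the AI's.
      return userAnswer ?? suggestion.summary;
    case "requestAlternative":
      // In the real flow this would trigger a fresh model call.
      return `Alternative requested for suggestion ${suggestion.id}`;
  }
}
```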
Structured improvement paths
Transformed vague AI feedback into clear, actionable improvement steps with specific examples and practice recommendations.
Contextual guidance
Integrated contextual tooltips and help throughout the feedback experience, explaining terminology, scoring criteria, and improvement strategies.
Interview Insights page
Visual progress indicators
Designed comprehensive progress tracking showing skill development over time, strengths and weaknesses visualization, and achievement milestones.
Feedback comparison
Enabled users to compare feedback across multiple practice sessions, identifying patterns and tracking improvement in specific skill areas.
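Under the hood, the comparison view reduces to per-skill deltas between two sessions; a minimal sketch with illustrative names follows.

```typescript
// Illustrative shape: one practice session's per-skill scores.
interface SessionScores {
  sessionId: string;
  scores: Record<string, number>; // skill area -> 0..100
}

// Positive deltas mark improvement; skills missing from the earlier
// session are skipped rather than treated as zero.
function skillDeltas(
  earlier: SessionScores,
  later: SessionScores
): Record<string, number> {
  const deltas: Record<string, number> = {};
  for (const [skill, score] of Object.entries(later.scores)) {
    const previous = earlier.scores[skill];
    if (previous !== undefined) {
      deltas[skill] = score - previous;
    }
  }
  return deltas;
}
```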
Live training Insights
WCAG compliance improvements
Enhanced color contrast ratios across interface
Improved keyboard navigation and focus indicators
Added descriptive alt text and ARIA labels
Ensured screen reader compatibility
Designed for scalable zoom and text resizing
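To make the markup side of this concrete, here is a small framework-free sketch of an accessible disclosure toggle; the real components and copy differ.

```typescript
// Builds a toggle button wired to a feedback panel with the ARIA
// state screen readers need; the visible focus treatment itself
// lives in CSS (e.g. a :focus-visible outline applied globally).
function makeFeedbackToggle(panel: HTMLElement): HTMLButtonElement {
  const button = document.createElement("button");
  button.textContent = "Show detailed feedback";
  button.setAttribute("aria-expanded", "false");
  button.setAttribute("aria-controls", panel.id);
  panel.hidden = true;

  button.addEventListener("click", () => {
    const expanded = button.getAttribute("aria-expanded") === "true";
    button.setAttribute("aria-expanded", String(!expanded));
    panel.hidden = expanded; // collapsing hides, expanding reveals
    button.textContent = expanded
      ? "Show detailed feedback"
      : "Hide detailed feedback";
  });
  return button;
}
```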
Cognitive accessibility
Reduced cognitive load through clear language, consistent patterns, and progressive complexity, making the platform accessible to users with varying levels of technical comfort.
Rapid prototyping cycles
Created testable prototypes for key workflows, enabling quick validation before full implementation.
Usability testing with 50+ users
Conducted structured testing sessions across diverse user segments, gathering both quantitative metrics and qualitative insights.
Continuous refinement
Iterated designs based on testing findings, addressing friction points and optimizing flows for improved satisfaction.
Measurable success
Achieved 4.43/5.0 user satisfaction score across tested improvements—validating design decisions and identifying areas for continued enhancement.
Wireframe options to finalized visual design
User Survey
AI Interaction
Transparent feedback with reasoning
User control over AI behavior
Explainable recommendations
Customizable feedback depth
Feedback & Progress
Actionable improvement steps
Visual skill tracking
Session comparison tools
Contextual guidance
Navigation & IA
Task-oriented organization
Streamlined workflows
Reduced friction paths
Clear feature discovery
Accessibility
WCAG compliance
Keyboard navigation
Screen reader support
Cognitive load reduction
Systematic evaluation
Conducted heuristic evaluation, cognitive walkthrough, and accessibility audit to build a comprehensive understanding of platform issues.
User research synthesis
Analyzed existing user feedback, support tickets, and usage analytics to identify patterns and prioritize problem areas.
Competitive analysis
Studied similar AI-powered learning platforms to understand best practices in AI transparency, feedback design, and progress tracking.
Seamless handoff
Worked closely with UI developers and backend engineers to ensure designs translated effectively into implementation, providing detailed specifications and supporting development throughout.
Component collaboration
Partnered with frontend team to create reusable components that maintained design consistency while optimizing development efficiency.
The Challenge
Bridging the gap between AI capabilities and user experience requirements—ensuring designs were both optimal for users and technically feasible given AI model constraints.
The Approach
Established regular working sessions with AI engineers and data scientists to:
Understand AI model capabilities and limitations
Design feedback that accurately represented AI confidence and reasoning
Create human-in-the-loop patterns that improved model training (sketched after this list)
Balance automation efficiency with user control needs
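One way to picture that loop, as a hypothetical event shape rather than the actual training pipeline: every user decision about an AI suggestion becomes a labeled signal.

```typescript
// Hypothetical human-in-the-loop event: user actions on AI feedback
// double as labeled data for later model improvement.
interface HitlEvent {
  sessionId: string;
  suggestionId: string;
  action: "accepted" | "overridden" | "reanalysisRequested";
  userCorrection?: string; // present when the user overrode the AI
  timestamp: string;       // ISO timestamp
}

const eventLog: HitlEvent[] = [];

function recordHitlEvent(event: HitlEvent): void {
  // In production this would feed an analytics/training pipeline;
  // here it simply accumulates in memory.
  eventLog.push(event);
}
```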
The Result
Designed AI interactions that felt transparent and controllable while actually improving model performance through better user feedback loops.
4.43/5.0 user satisfaction score across 50+ users for redesigned features
Reduced navigation friction and task completion time
Increased user confidence in AI feedback through transparency
Improved accessibility enabling broader user base to benefit from platform
Designing for AI Transparency
The biggest insight was that users don't want AI to be invisible—they want to understand it. By making AI reasoning visible and controllable rather than hiding it for "simplicity," we actually increased both trust and satisfaction.
Human-in-the-Loop as Design Philosophy
Treating users as collaborators with AI rather than passive recipients of AI output fundamentally changed interaction patterns. Giving users control over automation levels created flexibility that accommodated different learning styles and comfort levels.
Audit-Driven Design
Starting with a comprehensive audit provided invaluable foundation for prioritization. Rather than addressing symptoms, we could trace issues to root causes in information architecture and interaction patterns.
Cross-Functional Translation
Working at the intersection of UX, AI engineering, and development required becoming a translator—understanding technical constraints well enough to design within them while advocating for user needs effectively.
Strategic roadmap for continued improvement based on audit findings
Validated design approach through measurable satisfaction improvements
Created reusable patterns for AI interaction applicable across platform
Improved model performance through better human-in-the-loop feedback
Positioned platform for scaling with clearer architecture and navigation
The Interview Trainer redesign demonstrates that AI-powered products succeed not by hiding complexity but by making it understandable and controllable. By treating AI as a collaborative coach rather than an opaque oracle, we've created a platform where users feel empowered to improve their interview skills with confidence in both the AI's guidance and their own agency in the learning process.
The 4.43/5.0 satisfaction score validates that transparency, control, and thoughtful information architecture are as critical to AI product success as the underlying technology—and continuous testing ensures we keep improving as both user needs and AI capabilities evolve.