LiA (Large Interview Assistant) started as a weekend project to help anxious candidates like Khushi, a recent data science grad who would blank out the moment an interview began. We wanted something smarter than a list of generic questions—an AI teammate that could rehearse with you, adapt to your goals, and show you where you were improving. That became the north star for LiA: turn intimidating interview practice into a guided, feedback-rich routine you actually look forward to.
Why Build LiA?
Every member of our team (Yucheng, Khushi, Ben, and I) has coached or mentored people breaking into tech. The same obstacles kept coming up: finding realistic practice questions, gauging whether an answer was “good enough,” and getting honest feedback on delivery. We decided to stitch those needs together into one product. LiA combines a question bank that adapts to a candidate’s background, storytelling coaching that shows what “great” looks like, and rich telemetry on voice and facial cues so users can see how they’re coming across in real time.
How the System Works
Under the hood, LiA uses a set of collaborating LLM agents. The question generator takes in résumé highlights, role ambitions, and industry targets, then prompts Gemini 1.5 Pro to create scenarios that feel tailored to you. After each question, the expert agent demonstrates a model answer: sometimes it uses retrieval to pull concrete wins from your past experience, sometimes it reasons step by step with the STAR framework (Situation, Task, Action, Result), depending on what the question demands. Finally, the evaluation agent grades your reply against a rubric we co-designed with friends at big tech companies. Instead of a vague “good job,” you get a score, the rationale behind it, and specific fixes to try next round.
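To make that loop concrete, here’s a simplified sketch of how the three agents hand off to each other. The function names, prompt wording, and rubric dimensions are illustrative rather than our exact implementation, and `call_llm` is just a stand-in for the Gemini 1.5 Pro call we route through LangChain.

```python
from dataclasses import dataclass


def call_llm(prompt: str) -> str:
    """Stand-in for the Gemini 1.5 Pro call LiA routes through LangChain.
    Swap in your own model client here."""
    raise NotImplementedError


@dataclass
class CandidateProfile:
    resume_highlights: list[str]
    target_role: str
    target_industry: str


def generate_question(profile: CandidateProfile) -> str:
    # Question-generator agent: tailor a scenario to the candidate's background.
    prompt = (
        f"Write one behavioral interview question for a {profile.target_role} "
        f"candidate in {profile.target_industry}, grounded in these résumé "
        f"highlights: {'; '.join(profile.resume_highlights)}."
    )
    return call_llm(prompt)


def model_answer(question: str, profile: CandidateProfile) -> str:
    # Expert agent: demonstrate a strong answer, structured with STAR.
    prompt = (
        f"Answer this interview question as a strong {profile.target_role} "
        f"candidate, using STAR (Situation, Task, Action, Result):\n{question}"
    )
    return call_llm(prompt)


def evaluate_answer(question: str, answer: str) -> str:
    # Evaluation agent: score against a rubric and explain the score.
    prompt = (
        "Grade the answer below on structure, specificity, and impact (1-5 each). "
        "Return the scores, a short rationale, and two concrete fixes.\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    return call_llm(prompt)
```

The real agents carry more context than this, but the generator-to-expert-to-evaluator hand-off is the core loop.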
We wanted LiA to listen as well as it talks, so we layered in multimodal analytics. The voice sensor checks pace, filler words, and confidence signals; the facial expression tracker watches for eye contact, smiles, and engagement. All of that feeds a live dashboard during and after a session so you can correlate what you said with how you said it.
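To give a flavor of what the voice side measures, here’s a toy version of the pace and filler-word metrics, computed from a session transcript. The filler list and thresholds are illustrative; the actual pipeline works from raw audio features, not just the transcript.

```python
import re

# Illustrative filler list; the real pipeline uses a broader set plus
# acoustic cues (pauses, pitch, volume) pulled from the raw audio.
FILLER_WORDS = {"um", "uh", "like", "basically", "actually", "literally"}


def speech_metrics(transcript: str, duration_seconds: float) -> dict:
    """Toy pace and filler-word metrics computed from a session transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    word_count = len(words)
    filler_count = sum(1 for word in words if word in FILLER_WORDS)
    minutes = max(duration_seconds / 60.0, 1e-6)  # guard against zero-length clips
    return {
        "words_per_minute": word_count / minutes,           # roughly 130-160 reads as conversational
        "filler_share": filler_count / max(word_count, 1),  # fraction of words that are fillers
    }


# Example: a 30-second answer
print(speech_metrics(
    "So, um, I basically led the data migration and, uh, cut our costs quite a bit.",
    duration_seconds=30,
))
```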
The Experience We Designed
LiA feels more like joining a video call than filling out a worksheet. The React front-end walks you through a warm-up, interview simulation, and feedback debrief. You pick the role and difficulty; the agents take it from there. During the session you see a confidence meter pulsing with your vocal and facial signals; afterwards you get a written summary, your rubric scores, and suggestions on what to practice next. It’s a loop that reinforces progress: rehearse, review, iterate.
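For a sense of how that debrief travels from the Flask layer to the React front-end, here’s a minimal sketch of a feedback endpoint. The route, field names, and numbers are invented for illustration; they aren’t LiA’s actual API.

```python
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/api/sessions/<session_id>/feedback")
def session_feedback(session_id):
    # In LiA this payload would be assembled from the evaluation agent and the
    # voice/facial analytics; it's hard-coded here just to show the shape of the debrief.
    return jsonify({
        "session_id": session_id,
        "summary": "Clear STAR structure; quantify the result next time.",
        "rubric_scores": {"structure": 4, "specificity": 3, "impact": 3},
        "delivery": {"words_per_minute": 148, "filler_share": 0.04, "eye_contact": 0.72},
        "next_steps": ["Lead with the outcome", "Trim filler words in your openings"],
    })


if __name__ == "__main__":
    app.run(debug=True)
```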
What’s Next
We’re continuing to expand the corpus—more industries, more question styles, more examples sourced from real interviewers. The audio pipeline still has room to grow, especially in isolating tone versus content. We also want to let users ask LiA for company-specific prep so it can surface the right patterns for, say, a Google ML interview or a fintech analytics role. And as more people practice with LiA, we’ll tune the evaluator with new human scoring data so feedback stays sharp and fair.
Tech Stack & Team
LiA runs on a React front-end, a Flask orchestration layer, LangChain-powered agents, and Gemini 1.5 Pro for generation. Multimodal insights come from custom audio pipelines and computer vision models. Everything is deployed on Google Cloud Run, so sessions scale automatically during Demo Day traffic spikes. The four of us (Yucheng Fang, Khushi Ranganatha, me, and Ben Thiele) split responsibilities across research, engineering, product, and user testing with a shared goal: make interview practice feel like a conversation, not a chore.