PrepAI is an AI-powered interview preparation app that converts a job description + resume into a personalized mock interview with real-time scoring and a coach-style final report.
It is designed for a hackathon-style demo: text-first reliability, voice as an optional enhancement, and simple, explicit contracts.
- Paste a job description and upload/paste your resume
- Get a Revise Mode topic list (top gaps/must-haves)
- Take a mock interview (2 questions in the current prompt config)
- Answer via:
- Voice (local Whisper STT)
- Text (always available)
- See per-question evaluation:
- Clarity / Correctness / Depth / Structure (0–5)
- Strengths, missing points, and a recommendation
- Get a final report:
- Overall grade and averages
- Strengths / areas to improve / revision plan
- Humanized by an LLM (same schema)
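The final report's overall grade is derived from the per-question 0–5 rubric scores. As a rough illustration (the actual aggregation and grading bands live in the coach agent and may differ; the function and thresholds below are hypothetical), the math looks like:

```python
# Hypothetical sketch of aggregating per-question rubric scores (0-5)
# into per-axis averages and an overall letter grade. The real logic in
# backend/agents/coach_agent.py may use different field names and bands.
from statistics import mean

RUBRIC_AXES = ("clarity", "correctness", "depth", "structure")

def aggregate(evaluations: list[dict]) -> dict:
    """Average each rubric axis across questions and derive a grade."""
    averages = {
        axis: round(mean(e[axis] for e in evaluations), 2)
        for axis in RUBRIC_AXES
    }
    overall = mean(averages.values())
    grade = "A" if overall >= 4.5 else "B" if overall >= 3.5 else "C" if overall >= 2.5 else "D"
    return {"averages": averages, "overall": round(overall, 2), "grade": grade}
```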
The system revolves around three core data contracts implemented in backend/schemas/:
- `ExtractedProfile` (`backend/schemas/profile.py`): job requirements + candidate profile + gaps
- `InterviewScript` (`backend/schemas/script.py`): questions + ideal answer outline + rubric
- `EvaluationResult` (`backend/schemas/evaluation.py`): per-question scoring + feedback

The aggregated output is `FinalReport` (`backend/schemas/report.py`).
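To make the contract shapes concrete, here is an illustrative sketch of two of them. The repo defines these as Pydantic models in `backend/schemas/`; the field names below are guesses based on the descriptions above, shown with stdlib dataclasses purely for illustration:

```python
# Illustrative shape only -- the real models are Pydantic classes in
# backend/schemas/, and their field names may differ from these guesses.
from dataclasses import dataclass, field

@dataclass
class ExtractedProfile:
    job_requirements: list[str] = field(default_factory=list)
    candidate_skills: list[str] = field(default_factory=list)
    gaps: list[str] = field(default_factory=list)  # requirements the resume misses

@dataclass
class EvaluationResult:
    question: str = ""
    clarity: int = 0        # each axis scored 0-5
    correctness: int = 0
    depth: int = 0
    structure: int = 0
    strengths: list[str] = field(default_factory=list)
    missing_points: list[str] = field(default_factory=list)
    recommendation: str = ""
```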
- UI: Gradio (multi-page flow with custom CSS) — `ui/gradio_app.py`
- Backend API: FastAPI — `backend/app.py`
- Orchestration: LangGraph — `backend/graph.py`
- LLM: OpenAI via LangChain — `backend/tools/llm_client.py`
- Voice (optional):
  - STT: local Whisper (`openai-whisper`) — `backend/tools/stt_whisper.py`
  - TTS: ElevenLabs — `backend/tools/elevenlabs_tts.py`
- Resume PDF extraction: `pypdf` — `ui/components.py`
```mermaid
flowchart LR
    UI[Gradio UI\nui/gradio_app.py] -->|"calls Python functions"| LG[LangGraph\nbackend/graph.py]
    API[FastAPI\nbackend/app.py] --> LG
    LG --> EX[Extractor Agent\nbackend/agents/extractor_agent.py]
    LG --> SB[Script Builder Agent\nbackend/agents/script_builder_agent.py]
    LG --> EV[Evaluator Agent\nbackend/agents/evaluator_agent.py]
    LG --> CO[Coach Agent\nbackend/agents/coach_agent.py]
    EX --> LLM[OpenAI via LangChain\nbackend/tools/llm_client.py]
    SB --> LLM
    EV --> LLM
    CO --> LLM
    UI -.optional.-> STT[Whisper STT\nbackend/tools/stt_whisper.py]
    UI -.optional.-> TTS[ElevenLabs TTS\nbackend/tools/elevenlabs_tts.py]
```
Notes:
- The UI currently imports and calls `backend.graph` directly (in-process).
- FastAPI endpoints exist for external clients and testing. The included run script starts both.
```
backend/
  app.py           # FastAPI endpoints
  graph.py         # LangGraph state graphs (step-by-step + end-to-end)
  agents/          # extractor / script builder / evaluator / coach
  schemas/         # contracts (Pydantic)
  tools/           # llm client, STT, TTS, similarity
ui/
  gradio_app.py    # Gradio UI (multi-page interview flow)
  components.py    # UI formatting + PDF text extraction
docs/
  PrepAI_flowchart.svg
  demo_script.md
scripts/
  run_local.sh
```
```
git clone https://github.com/PRONGS-CHIRAG/InterviewPrep.git
cd InterviewPrep
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env
```

Fill in the values in `.env`. Do not paste keys into chat or commit them to git; this repo already ignores `.env` via `.gitignore`.
- `OPENAI_API_KEY`
- `OPENAI_MODEL` (default: `gpt-4o-mini`)
- `ELEVENLABS_API_KEY`
- `ELEVENLABS_VOICE_ID` (defaults to a safe voice id if blank)
Whisper runs locally; if you use voice STT, you may need `ffmpeg` installed on your system.
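Putting the variables above together, a hypothetical `.env` might look like this (values are placeholders; substitute your own keys):

```
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4o-mini
ELEVENLABS_API_KEY=...
ELEVENLABS_VOICE_ID=
DEBUG_AGENT_OUTPUT=false
```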
```
./scripts/run_local.sh
```

Open the Gradio URL printed in your terminal (usually http://127.0.0.1:7860).
Backend:

```
uvicorn backend.app:app --reload --port 8000
```

UI:

```
python -m ui.gradio_app
```

Set in `.env`:

```
DEBUG_AGENT_OUTPUT=true
```

When enabled, these agents print their prompt and parsed JSON payload to the terminal:

- `extract_profile` (`backend/agents/extractor_agent.py`)
- `build_script` (`backend/agents/script_builder_agent.py`)
- `evaluate_answer` (`backend/agents/evaluator_agent.py`)
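Boolean env flags like this are usually parsed with a small helper. A sketch of the typical pattern (the agents' actual check may differ):

```python
# Typical pattern for reading a boolean debug flag from the environment;
# the exact check inside the agents may differ.
import os

def debug_enabled() -> bool:
    return os.getenv("DEBUG_AGENT_OUTPUT", "false").strip().lower() in ("1", "true", "yes")
```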
FastAPI provides stable endpoints in `backend/app.py`:

- `GET /health`
- `POST /intake` → `IntakeRequest` → `ExtractedProfile`
- `POST /script` → `ExtractedProfile` → `InterviewScript`
- `POST /evaluate` → `{question, answer}` → `EvaluationResult`
- `POST /report` → `{evaluations}` → `FinalReport`
- `POST /run_full` → `{intake, answers}` → `{profile, script, evaluations, report}`
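A hypothetical external client for these endpoints, using only the stdlib. The `IntakeRequest` field names below are an assumption for illustration; check `backend/schemas/` for the real contract:

```python
# Hypothetical client sketch for the FastAPI endpoints; assumes the
# backend is running on port 8000. IntakeRequest field names are guesses.
import json
import urllib.request

BASE = "http://127.0.0.1:8000"

def intake_payload(job_description: str, resume_text: str) -> dict:
    # Field names here are assumptions, not the repo's actual schema.
    return {"job_description": job_description, "resume_text": resume_text}

def post(path: str, payload: dict) -> dict:
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # requires the backend running
        return json.load(resp)

if __name__ == "__main__":
    profile = post("/intake", intake_payload("JD text...", "Resume text..."))
    print(profile)
```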
- Flowchart diagram: `docs/PrepAI_flowchart.svg`
- 2-minute demo script: `docs/demo_script.md`
- Deep technical overview: `overview.md`
- Project brief/spec: `Context.md`
- Never commit `.env` or share API keys publicly.
- Resume and JD text are processed locally by the app; LLM calls send only the prompt content required for generation/evaluation.