Deepgram is the leading platform underpinning the emerging trillion-dollar Voice AI economy, providing real-time APIs for speech-to-text (STT) and text-to-speech (TTS) and for building production-grade voice agents at scale. More than 200,000 developers and 1,300+ organizations build voice offerings that are "Powered by Deepgram", including Twilio, Cloudflare, Sierra, Decagon, Vapi, Daily, Cresta, Granola, and Jack in the Box. Deepgram's voice-native foundation models are accessed through cloud APIs or as self-hosted and on-premises software, with unmatched accuracy, low latency, and cost efficiency. Backed by a recent Series C led by leading global investors and strategic partners, Deepgram has processed over 50,000 years of audio and transcribed more than 1 trillion words. There is no organization in the world that understands voice better than Deepgram.
At Deepgram, we expect an AI-first mindset: AI use and comfort aren't optional; they're core to how we operate, innovate, and measure performance.
Every team member at Deepgram is expected to actively use and experiment with advanced AI tools, and even to build their own into their everyday work. We measure how effectively AI is applied to deliver results, and consistent, creative use of the latest AI capabilities is key to success here. Candidates should be comfortable adopting new models and modes quickly, integrating AI into their workflows, and continuously pushing the boundaries of what these technologies can do.
Additionally, we move at the pace of AI. Change is rapid, and you can expect your day-to-day work to evolve just as quickly. This may not be the right role if you're not excited to experiment, adapt, think on your feet, and learn constantly, or if you're seeking something highly prescriptive with a traditional 9-to-5.
Model Evaluation QA Lead
This is a hands-on, high-impact role at the intersection of QA engineering and ML operations. You will design automated evaluation frameworks, integrate model quality gates into release pipelines, and drive industry-standard benchmarking, ensuring Deepgram maintains its position as the accuracy and latency leader in voice AI.
What You'll Do
- Model Evaluation Automation: Design, build, and maintain automated model evaluation pipelines that run against every candidate model before release. Implement objective and subjective quality metrics (WER, SER, MOS, latency/throughput) across STT, TTS, and STS product lines (see the WER sketch after this list).
- Release Gate Integration: Embed model quality checkpoints into CI/CD and release pipelines. Define pass/fail criteria, build dashboards for model comparison, and own the go/no-go signal for model promotions to production (a minimal gate sketch also follows below).
- Agent & Model Evaluation Frameworks: Stand up and operate evaluation tooling (Coval, Braintrust, Blue Jay, custom harnesses) for end-to-end voice agent testing, covering accuracy, latency, turn-taking, conversational quality, and custom metrics across real-world scenarios.
- Active Learning & Data Ingestion Testing: Partner with the Active Learning team to validate data ingestion infrastructure, annotation pipelines, and retraining automation. Ensure data quality standards are met at every stage of the flywheel.
- Industry Benchmark Automation: Automate execution and reporting of industry-standard benchmarks (e.g., LibriSpeech, Common Voice, internal production-traffic evals). Maintain reproducible benchmark environments and publish results for internal consumption.
- Language & Domain Validation: Build and maintain test suites for multi-language and domain-specific model validation. Design coverage matrices that ensure new languages and acoustic domains are systematically evaluated before GA.
- Retraining Automation Support: Validate the end-to-end retraining pipeline across all data sources, from data selection and preprocessing through training, evaluation, and promotion, ensuring automation reliability and correctness.
- Manual Test Feedback Loop: Design and operate human-in-the-loop evaluation workflows for subjective quality assessment. Build the tooling and processes that translate human feedback into actionable quality signals for the ML team.
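The first responsibility above centers on metrics like WER. As a point of reference, here is a minimal, self-contained sketch of word error rate computed as word-level edit distance; it is illustrative only and makes no assumptions about Deepgram's internal tooling.

```python
# Minimal word error rate (WER) sketch: word-level edit distance between a
# reference and a hypothesis transcript, normalized by reference length.
# Illustrative only; a production pipeline would batch this across datasets.

def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the quick brown fox", "the quick brown socks"))  # 0.25
```

Real pipelines normalize text (casing, punctuation, number formatting) before scoring, since WER is highly sensitive to normalization choices.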
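For the release-gate responsibility, the sketch below shows one common shape for a CI quality gate: compare a candidate model's metrics against the production baseline and fail the job on any regression beyond tolerance. The metric names, JSON schema, and thresholds are hypothetical placeholders, not Deepgram's actual criteria.

```python
# Sketch of a CI quality gate: compare candidate-model metrics against the
# production baseline and fail the job if any regression exceeds tolerance.
# Metric names, the JSON schema, and thresholds are hypothetical.
import json
import sys

# Lower is better for all three; values are the allowed regression per metric.
TOLERANCES = {"wer": 0.002, "ser": 0.005, "p95_latency_ms": 10.0}

def gate(candidate_path: str, baseline_path: str) -> int:
    with open(candidate_path) as f:
        candidate = json.load(f)
    with open(baseline_path) as f:
        baseline = json.load(f)
    failures = []
    for metric, tolerance in TOLERANCES.items():
        delta = candidate[metric] - baseline[metric]
        if delta > tolerance:
            failures.append(
                f"{metric}: {baseline[metric]:.4f} -> {candidate[metric]:.4f}"
                f" (regressed by {delta:.4f}, tolerance {tolerance})"
            )
    for line in failures:
        print("FAIL", line)
    return 1 if failures else 0  # nonzero exit code blocks promotion

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1], sys.argv[2]))
```

A CI step would invoke it as `python gate.py candidate_metrics.json baseline_metrics.json`, with the nonzero exit code blocking the release.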
It's Important To Us That You Have
- 4–7 years of experience in QA engineering, ML evaluation, or a related technical role, with a focus on model and data quality for predictive and generative systems.
- Hands-on experience building automated test/evaluation pipelines for ML models and the software features connected to them.
- Strong programming skills in Python; experience with ML evaluation libraries, data processing frameworks (Pandas, NumPy), and scripting for pipeline automation.
- Familiarity with speech/audio ML concepts: WER, SER, MOS, acoustic models, language models, or similar evaluation metrics.
- Experience with CI/CD integration for ML workflows (e.g., GitHub Actions, Jenkins, Argo, MLflow, or equivalent).
- Ability to design and maintain reproducible benchmark environments across multiple model versions and configurations (a rough sketch follows this list).
- Strong communication skills: you can translate model quality metrics into actionable insights for engineering, research, and product stakeholders.
- Detail-oriented and systematic, with a bias toward automation over manual process.
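As a rough illustration of reproducible benchmark environments and the coverage matrices mentioned earlier, one common pattern pins every input to a run (model artifact, dataset content hash, language, domain) and expands the language x domain grid as parametrized tests. Everything below (names, hashes, thresholds, and the `run_eval` stub) is hypothetical.

```python
# Sketch: pin every benchmark input so runs are reproducible, then expand
# a language x domain coverage matrix as parametrized pytest cases.
# All names, hashes, and thresholds here are hypothetical placeholders.
from dataclasses import dataclass
import itertools
import pytest

@dataclass(frozen=True)
class BenchmarkConfig:
    model_version: str   # exact model artifact under test
    dataset_sha256: str  # content hash of the eval set, not just its name
    language: str
    domain: str

LANGUAGES = ["en", "es", "de"]
DOMAINS = ["phone_call", "meeting", "medical"]
MATRIX = [
    BenchmarkConfig("candidate-v2", "pinned-content-hash", lang, dom)
    for lang, dom in itertools.product(LANGUAGES, DOMAINS)
]

def run_eval(cfg: BenchmarkConfig) -> float:
    """Hypothetical stand-in: a real harness would transcribe the pinned
    dataset with cfg.model_version and return the measured WER."""
    return 0.10  # placeholder so the sketch runs end to end

@pytest.mark.parametrize("cfg", MATRIX, ids=lambda c: f"{c.language}-{c.domain}")
def test_wer_within_budget(cfg: BenchmarkConfig):
    assert run_eval(cfg) <= 0.12  # placeholder per-cell quality budget
```

Pinning a content hash (rather than just a dataset name) is what lets a failed cell be reproduced exactly months later.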
It'd Be Nice If You Have
- Experience with model evaluation platforms (Coval, Braintrust, Weights & Biases, or custom evaluation harnesses).
- Background in speech recognition, NLP, or audio processing domains.
- Experience with distributed evaluation at scale, running evals across GPU clusters or large dataset partitions.
- Familiarity with human-in-the-loop evaluation design and annotation pipeline tooling.
- Experience with multi-language model evaluation and localization quality assurance.
- Prior work in a company where ML model quality directly impacted revenue or customer SLAs.
Why This Role Matters
Deepgram's competitive advantage is built on model quality: accuracy, latency, and reliability across languages and domains. As Model Evaluation QA Lead, you'll be the person who ensures that advantage is measured, maintained, and continuously improved. You'll build the evaluation infrastructure that gives our Research and Active Learning teams the confidence to ship faster while raising the quality bar with every release. This role directly protects customer trust and accelerates Deepgram's ability to lead the voice AI market.
Holistic health
- Medical, dental, vision benefits
- Annual wellness stipend
- Mental health support
- Life, STD, LTD Income Insurance Plans
Work/life blend
- Unlimited PTO
- Generous paid parental leave
- Flexible schedule
- 12 Paid US company holidays
- Quarterly personal productivity stipend
- One-time stipend for home office upgrades
- 401(k) plan with company match
- Tax Savings Programs
Continuous learning
- Learning / Education stipend
- Participation in talks and conferences
- Employee Resource Groups
- AI enablement workshops / sessions
Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $215M in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!
Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.
We are happy to provide accommodations for applicants who need them.
Compensation Range: $180K - $230K
