Posted on: 26 March
Research Engineer, Evaluations
Company
AssemblyAI is a fully remote B2B company specializing in AI-powered speech-to-text and audio intelligence, serving a global market with advanced transcription and audio-analysis capabilities.
Remote Hiring Policy:
AssemblyAI is a fully remote team that hires from various locations to support a diverse workforce. Candidates are encouraged to apply from different regions, fostering collaboration across time zones.
Job Type
Full-time
Allowed Applicant Locations
North America, South America
Salary
$210,000 to $260,000 per year
Job Description
Why AssemblyAI
AssemblyAI builds the best-in-class Voice AI models powering the next generation of voice applications. Our models serve 600M+ inference calls monthly, process 1M+ hours of audio daily, and power 2 billion+ end-user experiences. The Voice AI space is at an inflection point; we’re looking for folks truly excited to join a small team and help define the future of the industry.
We are one of the most capital-efficient AI companies on the planet - with under 100 people generating roughly $500K ARR per employee, we sit among the top 5 most revenue-dense teams within the fastest-growing AI companies today. That's not an accident; it's a deliberate choice to stay lean, move fast, and give every person on the team outsized ownership and impact. With thousands of customers including Granola, Fireflies, Figure AI, and CallRail, the company has real scale - processing over 2 million hours of audio daily and handling more than 1 million API calls every day. This is a rare growth-stage opportunity where the business is proven and the trajectory is steep, but the team is still small enough that your fingerprints are on everything.
If you've ever felt buried under layers of bureaucracy, starved of real ownership, or frustrated watching your work disappear into a slow-moving org, AssemblyAI is built differently. The company operates as a true meritocracy, with no heavy planning or approval processes and no gatekeeping on the tools or information you need. For anyone who genuinely cares about voice AI, not as a trend to chase, but as a technology to build, this is the place where the most interesting problems at the most interesting scale are being solved by a team small enough that you'll actually know everyone's name.
We’re committed to creating a space where our employees can bring their full selves to work and have equal opportunity to succeed. No matter your race, gender identity or expression, sexual orientation, religion, origin, ability, age, veteran status, if joining this mission speaks to you, we encourage you to apply!
About the Role
We are looking for a Senior Research Engineer to join our streaming speech-to-text research team—a new role that sits at the intersection of research, product, and engineering.
You'll be the person who makes sure we're measuring the right things, benchmarking against the right competitors, building and extending evaluation tooling, and translating customer pain points into quantifiable research targets. You'll own the evaluation infrastructure that tells us whether our models are actually better—and by how much.
This role is ideal for someone with a Machine Learning / Research Engineering background who is obsessed with understanding what customers actually need, and who gets satisfaction from turning vague feedback ("the model feels slow") into concrete metrics that the whole team can align around. You're comfortable talking to customer-facing teams one hour, designing a new evaluation framework the next, and then convincing researchers why it matters.
You'll also operate at the frontier of the voice agent ecosystem. Our streaming product integrates with orchestration frameworks like LiveKit, Pipecat, and Vapi, and you'll need to understand how ASR fits into the broader voice agent stack—alongside VAD, turn detection, TTS, and LLM components. As this stack evolves rapidly, you'll help ensure our evaluations reflect real-world integration scenarios.
You'll work directly with our research and engineering teams and become the connective tissue between what customers need and what researchers build. If you're entrepreneurial, rigorous about measurement, and want to have an outsized impact on the success of a rapidly growing product, this is your role.
What You'll Do
Evaluation & Benchmarking
- Own end-to-end and integration-level model evaluation across accuracy, latency, and feature-specific metrics (e.g., turn detection latency, endpointing accuracy)
- Build and maintain competitive benchmarking pipelines against other providers in the market
- Design and run systematic experiments to measure the impact of model changes
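To make the accuracy side of these responsibilities concrete, here is a minimal, purely illustrative sketch of computing word error rate (WER), the standard ASR accuracy metric. This is not AssemblyAI's actual evaluation tooling—just a self-contained example of the kind of metric such pipelines report.

```python
# Illustrative only: word error rate (WER) via word-level Levenshtein
# distance. Real evaluation pipelines add text normalization, alignment
# details, and per-segment breakdowns on top of this core computation.

def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("turn detection latency matters", "turn detection latency matter"))  # 0.25
```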
Dataset & Test Set Management
- Onboard, curate, and maintain evaluation datasets—both public benchmarks and internal test sets
- Create evaluation subsets that stress-test specific capabilities and edge cases
Metric Development & Research Translation
- Define evaluation metrics that capture real-world performance
- Translate qualitative customer feedback into quantifiable evaluation criteria
- Work with customer-facing teams to understand pain points and convert them into research priorities
Research Velocity
- Reduce friction for researchers by maintaining clean evaluation pipelines and clear documentation
- Identify evaluation gaps proactively and propose solutions
- Move fast—iterate on benchmarking approaches weekly, not monthly
What You'll Need
- ML fundamentals: You understand how ML models are trained and evaluated well enough to interpret results and debug issues. You don't need to train them from scratch.
- Strong Python skills: You can write clean evaluation scripts, work with data pipelines, and are comfortable with SQL and cloud infrastructure.
- Metric intuition: You understand what makes a good evaluation metric, when to use relative vs. absolute improvements, and how to ensure statistical rigor.
- Voice agent stack familiarity: You understand how the components of a voice agent system interact—VAD, ASR, turn detection, LLM, TTS—and can reason about how changes in one affect the others.
- Tinkerer mentality: You'd rather ship something rough and iterate than spend weeks perfecting it. You're energized by variety.
- Communication skills: You can explain technical results to researchers, summarize findings for leadership, and translate customer feedback into requirements.
- Ownership mindset: You don't wait to be told what to evaluate. You see gaps and fill them.
- Time zone overlap: You'll need to work at least 3-4 hours overlapping with the Eastern US time zone
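The "metric intuition" point above—relative vs. absolute improvements and statistical rigor—can be sketched in a few lines. This is a hypothetical example with made-up per-utterance error rates, using a paired bootstrap to check whether an improvement is stable rather than noise; it is not a description of AssemblyAI's actual methodology.

```python
# Hypothetical data: per-utterance error rates for a baseline model and a
# candidate model on the same utterances (paired comparison).
import random

baseline = [0.12, 0.08, 0.15, 0.10, 0.09, 0.14, 0.11, 0.13]
candidate = [0.10, 0.07, 0.14, 0.09, 0.09, 0.12, 0.10, 0.11]

def mean(xs):
    return sum(xs) / len(xs)

abs_delta = mean(baseline) - mean(candidate)  # absolute improvement
rel_delta = abs_delta / mean(baseline)        # relative improvement

# Paired bootstrap: resample utterances with replacement and recompute the
# mean paired difference, to estimate how stable the improvement is.
random.seed(0)
deltas = []
for _ in range(10_000):
    idx = [random.randrange(len(baseline)) for _ in baseline]
    deltas.append(mean([baseline[i] - candidate[i] for i in idx]))
deltas.sort()
ci_low, ci_high = deltas[250], deltas[-251]  # ~95% interval

print(f"abs: {abs_delta:.4f}, rel: {rel_delta:.1%}, "
      f"95% CI: [{ci_low:.4f}, {ci_high:.4f}]")
```

A relative number ("9% fewer errors") communicates impact; the absolute delta and its confidence interval tell you whether the change is real at your sample size.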
Nice to Have
- Experience with speech/audio ML or real-time systems
- Hands-on experience with voice agent orchestrators (LiveKit, Pipecat, Vapi, or similar)
- Familiarity with standard ML evaluation practices and benchmarks
- Experience working with customer-facing or product teams
- Background in QA, data science, or applied ML roles
What Success Looks Like
First 30 days: You've onboarded to our evaluation infrastructure, run your first competitive benchmark, and identified one gap in how we measure model quality.
First 90 days: You own our competitive benchmarking process. Researchers come to you to understand how their changes affect real-world metrics. You've proposed a new metric for a capability we weren't measuring well.
First 6 months: You're the go-to person for "how do we know if this is actually better?" You've built relationships with customer-facing teams. Your work directly influences which research directions we prioritize. You maintain benchmarks that reflect both standalone ASR quality and integrated voice agent performance.
Pay Transparency:
AssemblyAI strives to recruit and retain exceptional talent from diverse backgrounds while ensuring pay equity for our team. Our salary ranges are based on paying competitively for our size, stage, and industry, and are one part of many compensation, benefit, and other reward opportunities we provide.
There are many factors that go into salary determinations, including relevant experience, skill level, qualifications assessed during the interview process, and maintaining internal equity with peers on the team. The range shared below is a general expectation for the function as posted, but we are also open to considering candidates who may be more or less experienced than outlined in the job description. In this case, we will communicate any updates in the expected salary range.
The provided range is the expected salary for candidates in the U.S. Outside of the U.S., the range may differ, and any change will be communicated to candidates throughout the interview process.
Salary range: $210,000 - $260,000
AI to Interview:
If you’re selected for an interview, please review this resource to better understand how AssemblyAI approaches the use of AI in our interview process.
GDPR privacy notice:
Candidates from the EU should review this job applicant privacy notice before applying.
Keep Exploring AssemblyAI:
Speech-to-text | Streaming speech-to-text | Speech Understanding | LLM Gateway
Try the Playground
Our $50M Series C fundraise
Check us out on YouTube!