Responsibilities
What You’ll Do
- Work alongside machine learning researchers, engineers, and product managers to bring our AI Voices to customers across a diverse range of use cases
- Deploy and operate the core ML inference workloads for our AI Voices serving pipeline
- Introduce new techniques, tools, and architecture that improve the performance, latency, throughput, and efficiency of our deployed models
- Build tools to give us visibility into our bottlenecks and sources of instability and then design and implement solutions to address the highest priority issues
Qualifications
An Ideal Candidate Should Have
- Experience shipping Python-based services
- Experience being responsible for the successful operation of a critical production service
- Experience with public cloud environments, GCP preferred
- Experience with Infrastructure as Code, Docker, and containerized deployments
- Preferred: Experience deploying high-availability applications on Kubernetes
- Preferred: Experience deploying ML models to production
What We Offer
- A dynamic environment where your contributions shape the company and its products
- A team that values innovation, intuition, and drive
- Autonomy, fostering focus and creativity
- The opportunity to have a significant impact in a revolutionary industry
- Competitive compensation, a welcoming atmosphere, and a commitment to an exceptional asynchronous work culture
- The privilege of working on a product that changes lives, particularly for those with learning differences such as dyslexia and ADD
- An active role at the intersection of artificial intelligence and audio – a rapidly evolving tech domain
Compensation
The United States base salary range for this full-time position is $140,000-$200,000, plus bonus and equity, depending on experience
How to Apply
Think you’re a good fit for this job?
Tell us more about yourself and why you’re interested in the role when you apply. And don’t forget to include links to your portfolio and LinkedIn.