Evaluation Scenario Writer - AI Agent Testing Specialist

Mindrift

Remote

GBP 40,000 - 60,000

Part time

23 days ago

Job summary

A leading AI consultancy in the United Kingdom is seeking a part-time, entry-level specialist to design and evaluate test scenarios for AI agents. You will work remotely, creating structured evaluations that simulate human tasks and defining acceptable behaviors for agents. Ideal candidates have a background in computer science and are comfortable with tools like Python and JSON. The role offers flexible hours and competitive pay based on experience.

Benefits

Flexible scheduling
Competitive pay up to $49/hour
Remote work

Qualifications

  • Good understanding of test design principles.
  • Comfortable with structured formats for scenario description.
  • Ready to learn new methods.

Responsibilities

  • Design structured test scenarios based on real-world tasks.
  • Define expected agent behaviors and scoring logic.
  • Review agent outputs and adapt tests accordingly.

Skills

Analytical mindset
Attention to detail
Strong written communication skills
Curiosity about AI
Ability to switch between tasks

Education

Bachelor's and/or Master's Degree in Computer Science or related fields

Tools

Python
JavaScript
JSON/YAML

Job description

This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates. Please submit your resume in English and indicate your level of English.

At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI.

What We Do

The Mindrift platform, launched and powered by Toloka, connects domain experts with cutting-edge AI projects from innovative tech clients. Our mission is to unlock the potential of GenAI by tapping into real‑world expertise from across the globe.

About The Role

We're looking for someone who can design realistic and structured evaluation scenarios for LLM‑based agents. You'll create test cases that simulate human‑performed tasks and define gold‑standard behavior to compare agent actions against. You'll work to ensure each scenario is clearly defined, well‑scored, and easy to execute and reuse. You'll need a sharp analytical mindset, attention to detail, and an interest in how AI agents make decisions. A rough sketch of what such a scenario might look like follows the list below.

  • Design structured test scenarios based on real‑world tasks
  • Define the golden path and acceptable agent behavior
  • Annotate task steps, expected outputs, and edge cases
  • Work with devs to test your scenarios and improve clarity
  • Review agent outputs and adapt tests accordingly
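The posting does not prescribe a format for these scenarios, but as a rough illustration, a minimal sketch in Python (one of the listed tools) might capture a task, a golden path, and simple scoring logic as shown here. All field names, step names, and the scoring rule are assumptions made for this example, not a Mindrift or Toloka specification.

    # Illustrative sketch only: a minimal evaluation scenario with a golden path
    # and a simple weighted scoring function. Field names and weights are
    # assumptions for this example, not a prescribed Mindrift/Toloka format.
    import json

    scenario = {
        "task": "Find the cheapest direct flight from London to Berlin on 2025-03-01",
        "golden_path": [  # the expected, ordered agent steps
            "open_flight_search",
            "set_origin:LON",
            "set_destination:BER",
            "set_date:2025-03-01",
            "filter:direct_only",
            "sort:price_ascending",
            "report_cheapest_option",
        ],
        "edge_cases": ["no direct flights available", "ambiguous airport code"],
        "scoring": {"step_match_weight": 0.8, "final_answer_weight": 0.2},
    }

    def score_run(agent_steps, answered_correctly, spec):
        """Score one agent run against the golden path (illustrative logic)."""
        golden = spec["golden_path"]
        matched = sum(1 for step in golden if step in agent_steps)
        step_score = matched / len(golden)
        w = spec["scoring"]
        return w["step_match_weight"] * step_score + w["final_answer_weight"] * (
            1.0 if answered_correctly else 0.0
        )

    # Example: an agent run that skipped the direct-only filter
    run = ["open_flight_search", "set_origin:LON", "set_destination:BER",
           "set_date:2025-03-01", "sort:price_ascending", "report_cheapest_option"]
    print(json.dumps(scenario, indent=2))                   # the scenario serializes cleanly
    print(f"score = {score_run(run, True, scenario):.2f}")  # 0.89 for this run

Keeping the scenario itself as plain data (JSON/YAML-serializable) and the scoring logic as a small separate function is one common way to make tests easy to execute and reuse, which is the property the role description emphasizes.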

How To Get Started

Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you'll help shape the future of AI while ensuring technology benefits everyone.

Requirements
  • Bachelor's and/or Master's Degree in Computer Science, Software Engineering, Data Science / Data Analytics, Artificial Intelligence / Machine Learning, Computational Linguistics / Natural Language Processing (NLP), Information Systems or other related fields.
  • Background in QA, software testing, data analysis, or NLP annotation
  • Good understanding of test design principles (e.g., reproducibility, coverage, edge cases)
  • Strong written communication skills in English
  • Comfortable with structured formats like JSON/YAML for scenario description
  • Can define expected agent behaviors (gold paths) and scoring logic
  • Basic experience with Python and JS
  • Curious and open to working with AI‑generated content, agent logs, and prompt‑based behavior
  • Ready to learn new methods, able to switch between tasks and topics quickly and sometimes work with challenging, complex guidelines
  • Our freelance role is fully remote, so all you need is a laptop, an internet connection, available time, and enthusiasm to take on a challenge

Nice to Have
  • Experience in writing manual or automated test cases
  • Familiarity with LLM capabilities and typical failure modes
  • Understanding of scoring metrics (precision, recall, coverage, reward functions), as illustrated in the sketch below
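For context on that last point, here is a purely illustrative sketch of step-level precision and recall computed against a golden path; the helper function and step names are hypothetical, not part of any Mindrift or Toloka tooling.

    # Illustrative sketch: step-level precision and recall of an agent run
    # against a golden path. The function and step names are hypothetical.
    def step_metrics(agent_steps, golden_path):
        agent, golden = set(agent_steps), set(golden_path)
        hits = len(agent & golden)
        precision = hits / len(agent) if agent else 0.0  # share of agent steps that were expected
        recall = hits / len(golden) if golden else 0.0   # share of the golden path that was covered
        return precision, recall

    p, r = step_metrics(
        ["open_search", "set_query", "click_ad", "report_result"],
        ["open_search", "set_query", "apply_filter", "report_result"],
    )
    print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.75, recall=0.75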

Benefits
  • Contribute on your own schedule, from anywhere in the world
  • Get paid for your expertise, with rates that can go up to $49/hour depending on your skills, experience, and project needs
  • Take part in a flexible, remote, freelance project that fits around your primary professional or academic commitments
  • Participate in an advanced AI project and gain valuable experience to enhance your portfolio
  • Influence how future AI models understand and communicate in your field of expertise

Seniority Level: Entry Level

Employment Type: Part‑time

Job Function: Other

Industries: IT Services and IT Consulting
