
Security Researcher

Darktrace Ltd

Cambridge

Hybrid

GBP 60,000 - 80,000

Full time

Today

Job summary

A cybersecurity firm is seeking a Security Researcher to advance the security and trustworthiness of generative AI technologies. The role involves investigating AI security threats, designing defenses, and collaborating with product teams. Candidates should have a solid understanding of generative AI systems and strong analytical skills. The position offers a hybrid work model with two days a week in the Cambridge office and includes benefits such as private medical insurance and enhanced family leave.

Benefits

23 days’ holiday + public holidays
Private medical insurance
Life insurance
Salary sacrifice pension scheme
Enhanced family leave

Qualifications

  • Solid understanding of generative AI systems and their security challenges.
  • Interest in collaborative problem-solving with detection engineering teams.
  • Ability to communicate with both technical and non-technical stakeholders.

Responsibilities

  • Investigate trends in generative AI compliance and visibility.
  • Research attacker tradecraft targeting generative AI systems.
  • Create, validate, and test detections in a research environment.
  • Coordinate with relevant development and machine learning teams.
  • Provide feedback on product performance.

Skills

Familiarity with generative AI
Knowledge of attacker methodologies
Strong logical reasoning
Problem-solving skills
Ability to communicate technical concepts

Job description
Security Researcher
Location: Cambridge Office
Time type: Full time
Posted on: Today
Job requisition ID: JR100723

Darktrace is a global leader in AI for cybersecurity that keeps organizations ahead of the changing threat landscape every day. Founded in 2013, Darktrace provides the essential cybersecurity platform protecting nearly 10,000 organizations from unknown threats using its proprietary AI. The Darktrace Active AI Security Platform delivers a proactive approach to cyber resilience, securing the business across the entire digital estate, from network to cloud to email. Breakthrough innovations from our R&D teams have resulted in over 200 patent applications filed. Darktrace’s platform and services are supported by over 2,400 employees around the world. To learn more, visit the Darktrace website.

As part of our cutting-edge research team, you will play a pivotal role in advancing the security and trustworthiness of generative AI technologies. This position offers the opportunity to explore emerging threats, design innovative defenses, and shape best practices for safe and responsible AI deployment. You’ll work at the intersection of machine learning, cybersecurity, and applied research, helping to ensure that next-generation AI systems are robust, secure, and aligned with ethical standards.

This is a hybrid role, with compulsory attendance of two days a week in the Cambridge office.

What will I be doing:

As a Security Researcher with a focus on generative AI systems, you will contribute to projects ranging from rapid prototyping of new ideas to open-ended research initiatives. As a domain expert, you will provide initial insights and ongoing feedback supporting product development, communicating with the product, development, and machine learning teams as needed. Other responsibilities include, but are not limited to:

  • Investigating trends in generative AI compliance and visibility
  • Researching attacker tradecraft targeting generative AI chatbots and agentic systems
  • Creating, validating, and testing detections in a research environment
  • Coordinating with relevant development, product, and machine learning teams
  • Providing detailed and actionable feedback on product performance

What experience do I need:

To succeed in this role, you should bring a solid understanding of generative AI systems and their security challenges, along with strong analytical and communication skills. You’ll be working closely with a detection engineering team, so an interest in collaborative problem-solving and a proactive approach to learning are essential.

  • Familiarity with the evolving landscape of generative AI, including popular foundation models and emerging agentic architectures
  • Knowledge of common attacker methodologies targeting AI systems (e.g., prompt injection, data poisoning, inference, and extraction attacks)
  • Interest in contributing to a detection engineering team focused on safeguarding AI technologies
  • Strong logical reasoning and problem-solving skills, especially in unfamiliar or complex scenarios
  • Ability to communicate technical concepts clearly to both technical and non-technical stakeholders

Benefits:

  • 23 days’ holiday + all public holidays, rising to 25 days after 2 years of service
  • Additional day off for your birthday
  • Private medical insurance which covers you, your cohabiting partner and children
  • Life insurance of 4 times your base salary
  • Salary sacrifice pension scheme
  • Enhanced family leave
  • Confidential Employee Assistance Program
  • Cycle to work scheme