
AI Security Engineer (Consultant)

Maple Logic Ltd

United Kingdom

On-site

GBP 60,000 - 80,000

Full time

Today

Job summary

A tech solutions company in the United Kingdom is seeking an AI security engineer. The role involves working with product, security, and engineering teams to ensure AI safety and compliance. Responsibilities include threat modelling, designing security controls, and conducting AI security reviews. Candidates should have a background in application or cloud security and familiarity with AI systems. The position balances innovation with security, helping organisations adopt AI safely without hindering progress.

Qualifications

  • Background in application, cloud, or product security.
  • Hands-on familiarity with LLMs, agents, or related tooling.
  • Ability to design practical controls for teams.

Responsibilities

  • Work with product, security, and engineering teams on AI security.
  • Conduct threat modelling on AI workflows.
  • Design and test guardrails for AI applications.
  • Run AI-focused security reviews and red teaming exercises.

Skills

Application security
Cloud security
Product security
AI security awareness

Job description

We are looking for an AI security engineer to help our clients secure LLM systems, agents, and AI-powered products — from threat modelling and red teaming to designing practical guardrails and controls.

You will help organisations move faster with AI without taking unnecessary risks, combining a strong security mindset with a pragmatic, product-aware approach.

Role overview

You will work with product, security, and engineering teams to understand how AI is used in their organisation and design controls that keep systems safe, reliable, and compliant. Engagements may range from focused assessments of new LLM integrations to ongoing work shaping a client’s AI security strategy, standards, and review processes.

You will also bridge security concerns and product realities — helping teams determine where strong controls are essential, where lightweight mitigations are enough, and how to embed AI security into workflows without slowing progress.

What you might work on
  • Threat modelling LLM and agent workflows, including abuse cases and data leakage risks.
  • Designing and testing guardrails for prompt injection, data exfiltration, and unsafe actions.
  • Running AI-focused security reviews, red teaming exercises, and tabletop simulations.
  • Working with engineers to implement mitigations in code, infrastructure, and processes.
  • Helping teams establish guidelines and checklists for building AI features securely.

About you
  • Background in application, cloud, or product security, with an interest in AI systems.
  • Hands‑on familiarity with LLMs, agents, or related tooling — comfortable experimenting and reading docs or code.
  • Practical mindset: you enjoy designing controls that teams can actually adopt.
  • Clear communicator able to work with engineers and non‑technical stakeholders and explain risk without hype.
  • Keeps up with the evolving AI security landscape — new attacks, mitigations, and best practices.