
AI systems are not just tools.
They are attack surfaces.

Strategic · Independent · Resilient

Most organisations are adopting AI faster than their security programmes can follow. ContrailRisks helps you understand what you have deployed, where the risks are, and how to govern and secure it — before regulators or attackers force the issue.


AI Security

Security and governance for AI systems — built for a field still being defined

AI security sits at an uncomfortable intersection: part traditional cybersecurity, part adversarial machine learning, part regulatory compliance, part emerging risk. The attack surface of an AI system spans its training data, its model weights, its inference pipeline, its integrations, and every prompt that flows through it. Most organisations deploying AI today have assessed none of these. Meanwhile, the EU AI Act, DORA's AI provisions, and ISO 42001 are creating legal obligations that attach to systems already in production. We offer six structured engagements that together cover the full picture — from baseline assessment and threat modelling through governance and regulatory classification to technical architecture validation. Each is fixed in scope and produces a concrete deliverable, not a slide deck.

01

AI Security Readiness Assessment

A structured, time-boxed engagement — typically two to four weeks — that produces a baseline picture of your AI security posture. We catalogue every AI system in your environment, including third-party models, embedded AI features in SaaS tools, and internally built applications. Each system is threat-modelled using the ML life cycle (data → training → inference → monitoring) as the attack surface framework, and assessed against the NIST AI Risk Management Framework and ISO 42001. We classify each system against the EU AI Act risk tiers and produce a prioritised remediation roadmap. Deliverables: AI Security Baseline Report, AI Risk Register, Regulatory Exposure Map.
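To make the risk register concrete, here is a minimal sketch of how an entry might be structured and prioritised. All names, tiers, and example systems are illustrative assumptions, not the actual deliverable format.

```python
from dataclasses import dataclass
from enum import Enum

class LifecyclePhase(Enum):
    # The ML life cycle used as the attack surface framework
    DATA = "data"
    TRAINING = "training"
    INFERENCE = "inference"
    MONITORING = "monitoring"

class EUAIActTier(Enum):
    # The EU AI Act's four risk tiers
    UNACCEPTABLE = "unacceptable"
    HIGH_RISK = "high-risk"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class RiskRegisterEntry:
    system: str               # e.g. "customer-support chatbot" (hypothetical)
    phase: LifecyclePhase     # where on the life cycle the exposure sits
    threat: str               # short threat description
    eu_ai_act_tier: EUAIActTier
    likelihood: int           # 1 (rare) .. 5 (almost certain)
    impact: int               # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring for roadmap ordering
        return self.likelihood * self.impact

entries = [
    RiskRegisterEntry("support-chatbot", LifecyclePhase.INFERENCE,
                      "indirect prompt injection via retrieved documents",
                      EUAIActTier.LIMITED, likelihood=4, impact=3),
    RiskRegisterEntry("credit-scoring model", LifecyclePhase.TRAINING,
                      "data poisoning of third-party training set",
                      EUAIActTier.HIGH_RISK, likelihood=2, impact=5),
]

# Prioritised remediation roadmap: highest score first
for e in sorted(entries, key=lambda e: e.score, reverse=True):
    print(f"{e.score:>2}  {e.system}: {e.threat}")
```

The point of the structure is that every finding carries both its life-cycle location and its regulatory tier, so remediation ordering can weigh technical exposure and legal exposure together.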

02

AI Governance Programme

A full advisory engagement to design, implement, and operationalise an AI governance programme aligned with your regulatory obligations and risk appetite. We assess your current state against ISO 42001 and the NIST AI RMF, classify your AI systems under the EU AI Act, and develop the policies, procedures, and governance structures required to operate responsibly and demonstrate compliance. This includes AI use policy, model risk management processes, third-party AI vendor oversight, RACI and committee structures, and an incident response process for AI-related events. Deliverables: AI Governance Framework, Policy Library, Roles & Responsibilities Structure, Compliance Evidence Pack.

03

AI Security Architecture Review

A targeted technical review of how your AI systems are built, deployed, and integrated. We map the architecture — model hosting, API exposure, data pipelines, retrieval-augmented generation (RAG) components, agent tool integrations, and output handling — and assess each layer against security best practices. Threat modelling is applied per component, and we evaluate authentication, authorisation, input validation, output filtering, and logging and monitoring controls. The review identifies gaps and produces a hardening playbook with prioritised recommendations. Deliverables: Architecture Review Report with annotated diagrams, Finding Register, Hardening Playbook.
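As a flavour of what the input-validation and output-filtering layers look like in practice, here is a deliberately minimal sketch: pattern-based checks at the trust boundary of an LLM-backed endpoint. The patterns are illustrative placeholders only — a real deployment would use far richer detection than a handful of regexes.

```python
import re

# Illustrative input-side signals of prompt-injection attempts (assumed, not exhaustive)
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
]

# Naive output-side PII shape (a bare 16-digit run, e.g. a card number)
PII_PATTERNS = [
    r"\b\d{16}\b",
]

def validate_input(prompt: str) -> list[str]:
    """Return the patterns matched, so the caller can log or block."""
    return [p for p in INJECTION_PATTERNS if re.search(p, prompt, re.IGNORECASE)]

def filter_output(text: str) -> str:
    """Redact naive PII shapes before the response leaves the trust boundary."""
    for p in PII_PATTERNS:
        text = re.sub(p, "[REDACTED]", text)
    return text
```

Note the design choice: `validate_input` reports findings rather than silently blocking, so the logging and monitoring layer — assessed in the same review — captures the attempt.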

04

AI Threat Modelling

Structured threat modelling for AI systems, applied as a standalone engagement or as part of the Readiness Assessment. We use the ML life cycle attack surface framework in combination with MITRE ATLAS — the adversarial threat landscape for AI systems — to identify where your models and AI infrastructure are exposed. Attack categories include data poisoning at the training phase, adversarial evasion and model inversion at inference, prompt injection (direct and indirect), model extraction, and supply chain compromise. The output is a prioritised threat model with control recommendations mapped to each identified risk.
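The life-cycle-driven enumeration described above can be sketched as a simple mapping from phase to attack category. The phase-to-threat assignments below follow the categories named in the text; anything beyond them is an illustrative assumption, and real engagements map each threat to specific MITRE ATLAS techniques.

```python
# Attack categories keyed by ML life-cycle phase (data -> training -> inference -> monitoring)
ATTACK_SURFACE = {
    "data":       ["data poisoning", "supply chain compromise (datasets)"],
    "training":   ["data poisoning", "supply chain compromise (pretrained weights)"],
    "inference":  ["adversarial evasion", "model inversion", "model extraction",
                   "prompt injection (direct and indirect)"],
    "monitoring": ["feedback-loop poisoning"],  # assumed example, not from the text
}

def enumerate_threats(phases: list[str]) -> list[tuple[str, str]]:
    """Return (phase, threat) pairs for the phases a given system actually has."""
    return [(ph, t) for ph in phases for t in ATTACK_SURFACE.get(ph, [])]

# A hosted chat application with no custom training exposes only these phases:
for phase, threat in enumerate_threats(["data", "inference"]):
    print(f"{phase}: {threat}")
```

Restricting the enumeration to the phases a system actually has is what keeps the resulting threat model prioritised rather than generic.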

05

EU AI Act Risk Classification

If you operate or develop AI systems that touch EU users or EU-regulated entities, the AI Act creates obligations based on how your systems are classified. We conduct a structured classification exercise across your AI portfolio — applying the Act's four-tier risk framework (unacceptable / high-risk / limited / minimal) — and produce a documented rationale for each classification. High-risk systems trigger mandatory requirements: conformity assessments, technical documentation, human oversight mechanisms, and registration in the EU database. We map your obligations and produce an implementation roadmap.
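The tier logic can be illustrated as a simplified decision order. This is a sketch only: the use-case sets below are a few illustrative examples drawn from the Act's categories, not an exhaustive or legally authoritative list, and real classification requires documented legal analysis per system.

```python
# Illustrative (non-exhaustive) example use cases per tier of the EU AI Act
PROHIBITED_PRACTICES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_DOMAINS = {"credit scoring", "recruitment screening", "biometric identification"}
TRANSPARENCY_ONLY = {"chatbot", "deepfake generation"}

def classify(use_case: str) -> str:
    """Apply the four-tier order: unacceptable, then high-risk, then limited, then minimal."""
    if use_case in PROHIBITED_PRACTICES:
        return "unacceptable"
    if use_case in HIGH_RISK_DOMAINS:
        return "high-risk"   # triggers conformity assessment, documentation, oversight
    if use_case in TRANSPARENCY_ONLY:
        return "limited"     # transparency obligations only
    return "minimal"
```

The ordering matters: prohibition is checked before high-risk, because a prohibited practice is banned outright regardless of what safeguards could be applied.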

06

ISO 42001 Implementation

ISO 42001 is the international standard for AI management systems. It provides the governance backbone for organisations that develop, deploy, or operate AI — covering risk management, transparency, accountability, and ongoing monitoring. We guide you through gap assessment, design the management system appropriate to your organisation's scale and AI footprint, develop the required policies and controls, and prepare you for certification audit. ISO 42001 also serves as a governance layer that supports EU AI Act documentation requirements and demonstrates due diligence to enterprise customers and regulators.

Frameworks & Standards

NIST AI RMF · ISO 42001 · EU AI Act · MITRE ATLAS · OWASP LLM Top 10 · ENISA AI Threat Landscape · ISO 27001 · DORA (AI Provisions) · NIST CSF

Ready to secure your AI systems?

The Readiness Assessment is the right starting point for most organisations — a fixed scope, a clear deliverable, and a prioritised roadmap. We can talk through what makes sense for your situation.