Private Draft

The 29 personas behind AI

We’ve organized every stage and persona in the AI supply chain, informed by real recruiting at frontier companies. Click any row to see matching profiles from our talent graph.

Shaped by Industry Experts
Kumar Chellapilla, VPE
Jennifer Anderson, VPE / Stanford PhD
Thuan Pham, CTO
Akash Garg, CTO
Linghao Zhang, Research Engineer
Wayne Chang, Early FB Engineer
Indrajit Khare, EM & Head of Product

AI Security Engineering

Defends models in production

Known as: Security Engineer (AI), AI Security Engineer, Application Security Engineer (ML), ML Security Engineer

Operationalizes defenses for AI systems in production: prompt injection hardening, model supply-chain security, API abuse detection and rate limiting, and the runtime security layer that protects deployed models from adversarial attack.

Specializations

Prompt Injection & Model Defense
Hardens deployed models against prompt injection, jailbreaks, and adversarial inputs in production. Builds runtime detection, input sanitization, output filtering, and defense-in-depth layers that make attacks fail reliably. Turns attack research and red-team findings into production defenses.

Supply Chain Security
Secures model weights, dependencies, and the ML supply chain: model provenance verification, weight integrity, dependency scanning, container security for serving infrastructure, and secrets management for API keys and model access tokens. Prevents tampering, poisoning, and unauthorized access across the model lifecycle.

API Abuse Detection & Rate Limiting
Detects and mitigates abuse patterns on model-serving APIs: automated scraping, credential stuffing, prompt-extraction attacks, denial-of-service via expensive queries, and usage-policy violations at scale. Builds rate limiting, anomaly detection, and abuse-response automation.

Agent & Tool Security
Secures agentic systems: tool-use permission models, credential scoping and least-privilege enforcement, sandbox-escape prevention, and the runtime isolation that keeps agents from exceeding their authorized capabilities. As agents gain access to databases, APIs, code execution, and file systems, the attack surface expands beyond prompt injection into traditional application-security territory, but with non-deterministic execution paths.
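The prompt-injection defenses described above can be sketched as layered, fail-closed checks around a model call. A minimal illustration, assuming a hypothetical `model` callable and a deliberately tiny signature list (a real deployment would pair heuristics like these with trained classifiers):

```python
import re

# Illustrative deny patterns only; real systems maintain far larger,
# continuously updated signature sets plus ML-based detectors.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"you are now in developer mode", re.I),
]

SYSTEM_PROMPT = "You are a support assistant."  # hypothetical

def screen_input(user_text: str) -> bool:
    """Layer 1: reject inputs matching known injection signatures."""
    return not any(p.search(user_text) for p in INJECTION_PATTERNS)

def filter_output(model_text: str) -> str:
    """Layer 2: redact output that echoes the system prompt verbatim."""
    if SYSTEM_PROMPT in model_text:
        return "[response withheld: possible prompt leak]"
    return model_text

def guarded_call(user_text: str, model) -> str:
    """Defense in depth: each layer fails closed independently,
    so a bypass of one check does not defeat the others."""
    if not screen_input(user_text):
        return "[input rejected]"
    return filter_output(model(user_text))
```

The point of the sketch is the layering, not the patterns: input screening, the model call, and output filtering are independent, so red-team findings can be turned into new checks at whichever layer catches them most reliably.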
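Weight-integrity verification, one piece of the supply-chain work above, reduces to pinning digests and checking them before serving. A sketch assuming a hypothetical manifest format of `{"files": {name: sha256-digest}}` (a production pipeline would also sign the manifest itself):

```python
import hashlib
import json
import pathlib

def sha256_file(path: pathlib.Path) -> str:
    """Stream the file in 1 MiB chunks so multi-GB weight shards
    are never loaded into memory at once."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_weights(manifest_path: pathlib.Path) -> list[str]:
    """Compare each shard's digest to the pinned manifest.
    Returns the names of any files that fail verification."""
    manifest = json.loads(manifest_path.read_text())
    mismatched = []
    for name, expected in manifest["files"].items():
        if sha256_file(manifest_path.parent / name) != expected:
            mismatched.append(name)
    return mismatched
```

Serving infrastructure would refuse to load any shard in the returned list, which is what turns provenance metadata into an actual tamper defense.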
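The rate-limiting side of API abuse detection is often a cost-weighted token bucket: expensive queries (long contexts, large output budgets) draw more tokens, so a handful of heavy requests can't starve the backend even when a per-request limit would pass them. A minimal sketch; the cost function would be model-specific in practice:

```python
import time

class TokenBucket:
    """Cost-weighted token bucket. Refills continuously at
    refill_per_sec, never exceeding capacity; a request is
    admitted only if its full cost is available."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Credit tokens for elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

One bucket per API key (or per client fingerprint) plus anomaly detection on the rejection stream is a common shape for the abuse-response automation mentioned above.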
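The tool-use permission model for agents can be sketched as a deny-by-default registry: the agent may only invoke tools explicitly granted to it, regardless of what the model's output asks for. All names here are illustrative:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ToolRegistry:
    """Least-privilege tool dispatch for agents. Grants are held
    outside the model loop, so a prompt-injected tool call still
    fails unless the agent was explicitly authorized."""
    tools: dict[str, Callable[..., Any]] = field(default_factory=dict)
    grants: dict[str, set[str]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self.tools[name] = fn

    def grant(self, agent_id: str, name: str) -> None:
        self.grants.setdefault(agent_id, set()).add(name)

    def call(self, agent_id: str, name: str, *args: Any) -> Any:
        # Deny by default: no grant means no execution.
        if name not in self.grants.get(agent_id, set()):
            raise PermissionError(f"{agent_id} not authorized for {name}")
        return self.tools[name](*args)
```

Enforcing grants in the dispatcher rather than in the prompt is the key design choice: it holds even when the model's execution path is non-deterministic.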

Offensive vs. defensive split: adversarial research and attack discovery feed into production hardening. In many orgs the same security team does both; at frontier labs they're separate hiring profiles.

[1] Substrate
[2] Compute
Secondary

Infrastructure security for serving systems, API abuse detection, and rate limiting.

[3] Intelligence
Secondary

Secures model weights, prevents adversarial attacks, and protects model integrity.

[4] Systems
Primary

Operationalizes defenses: prompt injection hardening, supply-chain security, and runtime protection.

[5] Distribution
Jon Richards
Anthropic
Prompt defense

Ships defense-in-depth layers that make jailbreak classes fail reliably in production.

Kenneth Cari
OpenAI
Supply chain security

Protects weights, dependencies, secrets, and build pipelines from tampering and exfiltration.

Rupert Lee
Google
API abuse detection

Detects scraping, denial-of-service via expensive queries, and policy abuse at scale.

Early-Stage: Rare
Growth: Common
Enterprise: Primary

Primarily hired by frontier labs and API providers facing adversarial traffic. Smaller orgs fold the role into their general security team.

Let’s Find Your Next Builder

If you’re hiring at the AI frontier, let’s talk.