Private Draft

The 29 personas behind AI

We’ve organized every stage and persona in the AI supply chain, informed by real recruiting at frontier companies. Click any row to see matching profiles from our talent graph.

Shaped by Industry Experts
Kumar Chellapilla, VPE
Jennifer Anderson, VPE / Stanford PhD
Thuan Pham, CTO
Akash Garg, CTO
Linghao Zhang, Research Engineer
Wayne Chang, Early FB Engineer
Indrajit Khare, EM & Head of Product

Decision Science

Measures what matters in prod

Known as: Data Scientist, Analytics Engineer, Product Analyst

Product-side measurement for AI systems where the standard analytics playbook breaks. Models change under you (non-stationarity), product and model co-evolve (feedback loops), and the evaluation surface is qualitative — LLM output quality can't be reduced to click-through rates. Turns product goals into quantitative frameworks, runs experiments, and reads production telemetry so teams can iterate toward user outcomes and business impact.

Specializations

Eval Design & Product Measurement: Turns product requirements into rubrics and automated metrics that guide iteration. For generative AI products, this means measuring output quality where there's no single right answer: rubric design for LLM outputs, human preference collection at product scale, proxy-metric validation against real user satisfaction, and statistical frameworks to detect regressions across releases. Distinct from model capability evals: optimizes for user outcomes, not benchmark scores.
Experimentation & Causal Inference: Runs A/B tests and observational studies to measure the impact of model changes, prompts, and features on user outcomes. Designs experiments that handle AI-specific wrinkles: non-stationarity, feedback loops, long-tail failures, and changes happening simultaneously across model, prompt, and product surface. Attributes outcomes to specific interventions.
Production Analytics: Reads production signals (usage patterns, failure modes, drift, feedback loops) and turns them into actionable recommendations. Decides what to measure and what it means.
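As a sketch of the experimentation work this specialization covers, here is a minimal two-proportion z-test for a binary user-outcome metric, comparing a control model against a candidate. All names, counts, and the metric itself are hypothetical illustrations, not a prescribed method.

```python
# Hypothetical sketch: two-proportion z-test on a binary user-outcome
# metric (e.g. "session resolved the user's task"). Counts are made up.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for H0: p_a == p_b."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)       # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided tail probability.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Control vs. candidate model: 4,100/10,000 vs. 4,350/10,000 successes.
z, p = two_proportion_z(4100, 10_000, 4350, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice this is the easy part; the AI-specific wrinkles above (non-stationarity, feedback loops) are what push teams beyond a single pooled test toward sequential or time-sliced designs.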

Exists because AI products break the standard analytics playbook — outputs are stochastic, user expectations shift as capability grows, and the feedback loop between model behavior and user behavior creates measurement challenges that don't exist in traditional product analytics.
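The drift signals mentioned above can be sketched with a population stability index (PSI) check between a baseline and a recent window of some production metric. The bin choices, window labels, and the 0.2 rule-of-thumb threshold are all illustrative assumptions, not a fixed recipe.

```python
# Hypothetical sketch: population stability index (PSI) to flag drift
# between a baseline window and a recent window of a production metric.
import math

def psi(baseline_counts, recent_counts, eps=1e-6):
    """PSI over pre-binned histograms of the same metric."""
    total_b = sum(baseline_counts)
    total_r = sum(recent_counts)
    score = 0.0
    for b, r in zip(baseline_counts, recent_counts):
        pb = max(b / total_b, eps)   # clamp to avoid log(0) on empty bins
        pr = max(r / total_r, eps)
        score += (pr - pb) * math.log(pr / pb)
    return score

baseline = [500, 300, 150, 50]   # e.g. response-length buckets, last month
recent = [300, 300, 200, 200]    # same buckets, this week
print(f"PSI = {psi(baseline, recent):.3f}")  # > 0.2 is commonly treated as drift
```

A scalar like this is only a trigger; the diagnostic work is deciding which bucketed metric moved and whether the shift reflects the model, the users, or the feedback loop between them.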

Stages [1] Substrate, [2] Compute, [3] Intelligence, [4] Systems: Secondary

Reads production telemetry, detects drift, and instruments experiments that span model and product changes.

Stage [5] Distribution: Primary

Measures product outcomes and user behavior to guide iteration on AI-powered features.

Zak Elwood
Airbnb
Product measurement

Turns product intent into rubrics and metrics tied to user outcomes.

Lillian Wilkinson
Stripe
Experimentation & causal inference

Runs A/B and quasi-experimental inference under non-stationarity and feedback loops.

Sandra Masha
Meta
Production diagnostician

Builds failure taxonomies from telemetry and converts them into prioritized engineering work.

Early-Stage: Occasional
Growth: Primary
Enterprise: Primary

Ubiquitous once there's product usage to measure. Bundled into engineering early; growth-stage and later companies dedicate the role.

Let’s Find Your Next Builder

If you’re hiring at the AI frontier, let’s talk.