We’ve organized every stage and persona in the AI supply chain, informed by real recruiting at frontier companies. Click any row to see matching profiles from our talent graph.

Summary
Known as: Compiler Engineer, ML Framework Engineer, XLA/Triton Engineer, ML Compiler Engineer, MLIR Engineer, Kernel Engineer
Builds and maintains the frameworks, compilers, and runtime systems that researchers and engineers use to train and serve models. Owns the abstraction layer between model code and hardware: automatic differentiation, graph compilation, operator fusion, memory planning, and hardware-specific code generation.
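To make the "automatic differentiation" part of this role concrete, here is a minimal reverse-mode autodiff sketch in plain Python. It is a hypothetical toy, not any real framework's API; production engines (PyTorch autograd, JAX) add tensors, graph tracing, and careful memory management on top of the same chain-rule idea.

```python
# Toy reverse-mode automatic differentiation (illustrative sketch only).
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent Var, local gradient)
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate this node's gradient, then apply the chain rule
        # to propagate it back through each parent.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = x * x + x   # y = x^2 + x
y.backward()    # dy/dx = 2x + 1, which is 7 at x = 3
print(x.grad)   # 7.0
```

The framework engineer's job is to make this abstraction robust and fast at scale, so researchers can write the forward pass and get gradients for free.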
Specializations
Where the Work Lives
- Graph compilation, hardware-specific code generation, and the runtime that ties compilation to execution.
- The frameworks (PyTorch, JAX) and autograd engines that researchers write code against.
Candidate Archetypes
- Owns autograd internals, graph breaks, distributed primitives, and the programming model researchers actually live in.
- Builds lowering pipelines, fusion passes, and hardware-specific code generation across MLIR/XLA/Triton-class stacks.
- Owns scheduling, memory planning, multi-device orchestration, and the layer that makes compiled graphs actually run.
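The "fusion passes" mentioned above can be sketched in miniature. The snippet below runs a greedy fusion pass over a hypothetical linear IR (just a list of op names, invented for illustration): adjacent elementwise ops are merged into a single fused kernel so intermediate results never round-trip through memory. Real compilers (XLA, TorchInductor, MLIR pipelines) do this over dataflow graphs with far richer cost models.

```python
# Toy operator-fusion pass over a hypothetical linear op list (illustration only).
ELEMENTWISE = {"add", "mul", "relu"}

def fuse_elementwise(ops):
    """Greedily merge runs of adjacent elementwise ops into fused kernels."""
    fused, run = [], []
    for op in ops:
        if op in ELEMENTWISE:
            run.append(op)          # extend the current fusible run
        else:
            if run:                 # flush the run as one fused kernel
                fused.append("fused(" + "+".join(run) + ")")
                run = []
            fused.append(op)        # non-elementwise ops pass through
    if run:
        fused.append("fused(" + "+".join(run) + ")")
    return fused

print(fuse_elementwise(["matmul", "add", "relu", "matmul", "mul"]))
# ['matmul', 'fused(add+relu)', 'matmul', 'fused(mul)']
```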
Company Scale
Framework companies (Meta, Google, NVIDIA), chip companies, and frontier labs; most other companies use these stacks as-is rather than hiring for this role.
Featured Roles
If you’re hiring at the AI frontier, let’s talk.