Today, artificial intelligence finds itself in a position familiar from the history of computing. Developers and product builders hand-write prompts as inputs to large language models. Crafting these prompts has become a specialised skill - prompt engineering - with an even deeper specialisation, context engineering: meticulously shaping the input context to guide AI workflows or agents.
However, this approach is inherently brittle. Every model interprets prompts differently due to its internal structure, circuits, and learned associations. Prompts finely tuned for one model might fail or yield subpar results when fed to another. Consequently, prompt engineering often resembles alchemy more than engineering: relying on trial, error, intuition, and luck rather than precise, repeatable processes.
The most advanced teams are addressing this by scoring their prompts against evaluation datasets with an LLM-as-judge, and refining iteratively. While commendable, and a step closer to genuine engineering, it still echoes the early days of computing, when programmers struggled with punch cards and hand-written assembly. Just as computing moved past punch cards into languages like C, AI needs a similar transition - a moment that provides a higher-level abstraction.
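To make that loop concrete, here is a minimal sketch of such an evaluation harness. Everything in it is illustrative: `call_model` is a placeholder standing in for whatever LLM API a team actually uses, and the dataset, templates, and judge rubric are invented for the example.

```python
# Minimal LLM-as-judge evaluation loop (illustrative sketch).
# `call_model` is a placeholder for any LLM API; wire in a real client.

def call_model(prompt: str) -> str:
    raise NotImplementedError("swap in your LLM provider here")

# A tiny evaluation dataset: task inputs paired with reference answers.
EVAL_SET = [
    {"input": "Summarise: The cat sat on the mat.",
     "reference": "A cat sat on a mat."},
    {"input": "Summarise: Rain fell all day in Paris.",
     "reference": "It rained all day in Paris."},
]

JUDGE_TEMPLATE = (
    "You are a strict grader. Given a task input, a reference answer, and a "
    "candidate answer, reply with only PASS or FAIL.\n\n"
    "Input: {input}\nReference: {reference}\nCandidate: {candidate}"
)

def evaluate_prompt(prompt_template: str) -> float:
    """Score one prompt template against the eval set, using an LLM as judge."""
    passes = 0
    for example in EVAL_SET:
        candidate = call_model(prompt_template.format(input=example["input"]))
        verdict = call_model(JUDGE_TEMPLATE.format(candidate=candidate, **example))
        passes += verdict.strip().upper().startswith("PASS")
    return passes / len(EVAL_SET)

# The refinement loop today: hand-write variants, re-run, keep the best.
candidates = [
    "Answer concisely.\n{input}",
    "You are a helpful assistant. Be brief and accurate.\n{input}",
]
# best = max(candidates, key=evaluate_prompt)
```

Notice what the harness does and does not do: it tells you which hand-written variant scored best, but a human still has to guess the variants. That gap is exactly where the argument goes next.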
Enter the AI Compiler.
Much like traditional compilers transformed the computing landscape by converting high-level instructions into machine-readable code, the AI compiler will revolutionise AI context engineering. However, unlike traditional compilers - which translate fixed syntax into deterministic, predictable code - the AI compiler will function dynamically. It will continuously generate, optimise, and adjust the input context fed to LLMs, aligning with the peculiarities and strengths of each model.
The AI compiler will accept broader, goal-oriented inputs: clear objectives, comprehensive guidelines, specific examples, labeled data, and contextual details. Instead of relying on fixed syntax, it will take a continuously adaptive approach, shifting the burden from human intuition to automated, data-driven optimisation.
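As a thought experiment, here is a minimal sketch of what such a compiler's interface might look like. Every name in it (`TaskSpec`, `AICompiler`, `compile`, the field names) is hypothetical; no such system exists yet, and the method bodies are deliberately left open.

```python
from dataclasses import dataclass, field

# Hypothetical interface for an AI compiler. All names are illustrative;
# nothing here refers to an existing library.

@dataclass
class TaskSpec:
    """The goal-oriented input: what a user declares instead of a prompt."""
    objective: str                                        # clear, high-level goal
    guidelines: list[str] = field(default_factory=list)  # constraints and style rules
    examples: list[dict] = field(default_factory=list)   # labeled input/output pairs
    context: str = ""                                     # background the task depends on

@dataclass
class CompiledContext:
    """The output: a context tuned to one specific target model."""
    model: str
    system_prompt: str
    few_shot: list[dict]

class AICompiler:
    def compile(self, spec: TaskSpec, target_model: str) -> CompiledContext:
        """Search over candidate contexts, score each against spec.examples,
        and return the highest-scoring one for this particular model."""
        ...

    def recompile(self, spec: TaskSpec, target_model: str,
                  feedback: list[dict]) -> CompiledContext:
        """Continuously adapt: fold production feedback back into the search."""
        ...
```

The key shift is in where the model-specific wording lives: `TaskSpec` carries only intent and evidence, while all per-model tuning happens inside `compile`. Retargeting a workflow to a new model becomes a recompile rather than a rewrite.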
In practical terms, users will no longer manually tweak prompts for each scenario or model. The AI compiler will manage that complexity, abstracting away the tedious, fragile work of manual prompt tuning. Just as programmers today seldom worry about assembly language, future AI users won’t need to fuss over the specifics of context crafting. They’ll define clear, high-level intentions and let the compiler handle the intricate details beneath.
The question is - who’s building this?