
Agentic Engineering

Andrej Karpathy coined the term in early February 2026, deliberately positioning it against “vibe coding.” His definition: “‘agentic’ because the new default is that you are not writing the code directly 99% of the time, you are orchestrating agents who do and acting as oversight — ‘engineering’ to emphasise that there is an art & science and expertise to it.”

The category is converging fast. Addy Osmani, Simon Willison, and O’Reilly are all publishing on it. The core definition is consistent: the human acts as architect and reviewer, not author. The agent executes, tests, and iterates. The human provides intent, quality gates, and architecture.

Agentic engineering has a problem that nobody has named cleanly: agents don’t persist context between sessions.

Every time you open Claude Code, Cursor, or Copilot, the agent starts cold. It doesn’t know your architecture, your layer boundaries, or the capabilities you’ve wired. It defaults to its training data — which means code that compiles, passes review, and violates your architecture.

You correct it. Every session. Or you don’t, and the violations accumulate.

At the individual level this is friction. At the team level it compounds. O’Reilly’s analysis of the shift from conductors (one agent, synchronous) to orchestrators (multiple agents, parallel) makes the stakes explicit: multi-agent orchestration requires “shared context, memory, and smooth transitions” across agents — without it, each agent becomes an isolated silo. The architecture problem doesn’t shrink as you add more agents. It scales with them.

Osmani makes a related point about who benefits most from agentic engineering: senior engineers — because they already understand the architecture deeply. Junior developers risk producing code they don’t understand. The implication is that agentic engineering amplifies existing architectural knowledge. If that knowledge isn’t encoded somewhere the agent can read, it doesn’t get amplified — it gets lost.

The discipline of agentic engineering addresses how you work with agents. It doesn’t address what agents know about your specific codebase. That’s the gap verikt fills.

Karpathy defined the discipline. verikt gives it infrastructure.

The emerging term for this infrastructure is context engineering. Anthropic defines it as “the set of strategies for curating and maintaining the optimal set of tokens during LLM inference.” Martin Fowler’s site puts it more plainly: “curating what the model sees so that you get a better result.”

verikt is context engineering for software architecture. verikt guide generates architecture context files from your verikt.yaml — one command that feeds your exact layer boundaries, capability stack, and dependency rules into every AI agent you use. Not training data patterns. Your patterns.
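The text names what verikt.yaml encodes — layer boundaries, capability stack, dependency rules — without showing the schema. A minimal sketch of what such a file might look like; all key names here are illustrative, not the actual verikt.yaml format:

```yaml
# Hypothetical verikt.yaml — key names are illustrative, not the real schema.
layers:
  - name: domain        # pure business logic, no outward dependencies
  - name: application   # use cases; may depend on domain
  - name: adapters      # HTTP, DB, queues; may depend on application

rules:
  - deny: domain -> adapters       # inner layers never import outer ones
  - deny: application -> adapters

capabilities:
  - postgres
  - http-server
```

The point is that this single file is what `verikt guide` translates into agent-readable context, so the agent sees the same boundaries CI later enforces.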

This is the missing layer. Agentic engineering without architecture context means the rigor you apply at the design stage doesn’t survive into the agent session. With it, every session starts from the same architectural ground truth.

Three commands cover the full loop:

verikt guide # feed your architecture to every agent
verikt check # enforce it in CI
verikt new # scaffold new services with the right structure from the start
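To make the enforcement half of the loop concrete, here is one way `verikt check` might be wired into CI — a hypothetical GitHub Actions job, assuming the command exits non-zero on violations (the job layout and install step are illustrative, not from verikt's docs):

```yaml
# Hypothetical GitHub Actions workflow — the install step is illustrative.
name: architecture
on: [pull_request]

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: verikt check   # assumed to fail the build on any boundary violation
```

Because the same verikt.yaml drives both `verikt guide` and `verikt check`, the rules the agent was given are the rules the pipeline enforces.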

From the experiment series (EXP-01 through EXP-03) — live Claude API calls, same prompt, same model, with only the presence of the guide varied:

  • 7 violations → 1 with the guide; flat → hexagonal architecture (EXP-03, greenfield)
  • [0, 0, 0] violations with the guide vs [2, 1, 1] without, across 3 runs — a 22.2× variance reduction (EXP-02)
  • Guide served from the prompt cache — 23% fewer output tokens, zero fresh-token overhead

Same task, different stability. Without the guide, output varies run to run. With it, zero violations every time. The agent doesn’t need to be smarter. It needs to know your architecture before it writes the first line.

Karpathy, Osmani, Willison, and O’Reilly defined the category. verikt builds on their framing, not against it.


  • Context Engineering — the infrastructure layer agentic engineering requires
  • Why verikt exists — the beliefs behind the tool
  • For Engineers — the workflow in practice