We throw around terms like "agency" when discussing AI, but the term itself lacks definition. That's a problem that predates AI. The idea of "agency" raises a lot of questions that generations of philosophers haven't been able to answer:
❌ Must agency involve conscious goals or do instincts count?
❌ Can purely mental acts be a manifestation of agency?
❌ Is agency about causes or reasons?
❌ Is momentary action the same as long-term planning?
❌ Does coercion eliminate agency?
That's why, as far as AI is concerned, I recommend ignoring the problem instead of solving it. I'm serious. Here’s why:
A thing does not necessarily need to "have agency" in order to "act agentically."
In other words: instead of building machines that truly think for themselves, build machines whose actions are reliably difficult to distinguish from the actions of beings who think for themselves.
The ability to act agentically is just the ability to pass, reasonably consistently, a generalized Turing Test. The original test was tied specifically to language-based intelligence, but the idea extends easily to behavior. It's not about getting machines to "think". It's about getting machines to fake it well enough that it feels like they're thinking. We don't need machines to possess intentions; we need them to navigate situations that make intention look necessary.
In predictable, stable environments with clear feedback and repeatable rules, even LLMs — armed with nothing but procedural and working memory — can appear agentic. Within that limited scope, LLMs can give the impression of knowing what they're doing.
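To make that concrete, here is a minimal sketch (in Python, with made-up names like ProceduralAgent and playbook, not any real framework) of what "procedural plus working memory" amounts to: a fixed lookup from situation to action, plus a short buffer of recent context. In a stable, rule-governed environment, this is enough to look purposeful.

```python
# Minimal sketch of "procedural + working memory" agency (hypothetical names).
# The agent has a fixed playbook (procedural memory) and a short buffer of
# recent observations (working memory). With clear feedback and repeatable
# rules, this is enough to appear agentic.

from collections import deque


class ProceduralAgent:
    def __init__(self, playbook: dict[str, str], buffer_size: int = 5):
        self.playbook = playbook                          # procedural memory: fixed rules
        self.working_memory = deque(maxlen=buffer_size)   # recent context only

    def act(self, observation: str) -> str:
        self.working_memory.append(observation)
        # Look up the current observation in the playbook; fall back to a default.
        return self.playbook.get(observation, "ask_for_clarification")


if __name__ == "__main__":
    agent = ProceduralAgent({
        "cart_abandoned": "send_reminder",
        "payment_failed": "retry_payment",
    })
    print(agent.act("cart_abandoned"))   # -> send_reminder
    print(agent.act("unknown_signal"))   # -> ask_for_clarification
```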
But in environments where signals are delayed, feedback is ambiguous, goals conflict and change, and actions must generalize across shifting contexts, procedural mimicry fails the generalized Turing Test. Without semantic memory to build abstractions, and without associative learning to tie abstractions to expected outcomes (and therefore to next-action selection), there's no meaningful way to adapt to the environment with reasonably consistent success over an extended period of time.
As the need for sustained behavioral coherence stretches into hours, days, or weeks rather than minutes, acting agentically becomes less and less a matter of how well a machine follows instructions, and more and more a matter of how it decides what to do next, and why, and how it feeds the lessons from that decision forward into subsequent decisions. To act agentically over an extended period of time in an unstable information environment, AI requires semantic-associative learning. Procedural memory just isn't enough.
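For contrast, here is a hedged sketch of what semantic-associative selection could look like: observations get compressed into abstractions (standing in for semantic memory), each abstraction-action pair accumulates an estimate of its expected outcome (standing in for associative learning), and next-action selection reads from those estimates even when the feedback that updates them arrives late. The abstraction function, the update rule, and every name here are illustrative assumptions, not a prescription.

```python
# Sketch of semantic-associative action selection (illustrative only).
# "abstract" compresses raw observations into reusable categories;
# "values" accumulates an expected-outcome estimate per (abstraction, action)
# pair and drives next-action selection, updated whenever feedback arrives.

import random
from collections import defaultdict


class SemanticAssociativeAgent:
    def __init__(self, actions: list[str], learning_rate: float = 0.1,
                 exploration: float = 0.1):
        self.actions = actions
        self.lr = learning_rate
        self.eps = exploration
        self.values = defaultdict(float)   # associative store: (abstraction, action) -> estimate

    def abstract(self, observation: dict) -> str:
        # Stand-in for semantic memory: map a raw observation to a coarser
        # category that generalizes across surface differences.
        return f"{observation.get('channel', 'any')}|{observation.get('engagement', 'unknown')}"

    def act(self, observation: dict) -> tuple[str, str]:
        state = self.abstract(observation)
        if random.random() < self.eps:
            action = random.choice(self.actions)   # keep exploring
        else:
            action = max(self.actions, key=lambda a: self.values[(state, a)])
        return state, action

    def learn(self, state: str, action: str, outcome: float) -> None:
        # Feedback may arrive long after the action; fold it into the estimate.
        key = (state, action)
        self.values[key] += self.lr * (outcome - self.values[key])


if __name__ == "__main__":
    agent = SemanticAssociativeAgent(actions=["nudge_now", "wait", "offer_discount"])
    state, action = agent.act({"channel": "push", "engagement": "low"})
    # ... later, delayed feedback arrives (e.g. a conversion two days on) ...
    agent.learn(state, action, outcome=1.0)
    print(state, action, dict(agent.values))
```

The point of the contrast: the second agent's behavior shifts as its internal estimates shift, which is exactly what sustained coherence in an unstable environment requires.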
We don't need to solve agency, but we do need a performance definition of what passing for agentic behavior looks like under increasing environmental complexity. Current LLM benchmarks either don't measure that kind of performance, or do measure it and show that LLMs consistently fall short.
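As a rough sketch of what such a performance definition could look like in practice: hold the agent fixed, dial up environmental instability (feedback delay, goal drift, context shift), and measure how well it sustains success over a long horizon. The Environment and Agent interfaces below are hypothetical scaffolding, not an existing benchmark.

```python
# Hypothetical harness for a "performance definition" of agentic behavior:
# score sustained success while environmental complexity is dialed up.

from typing import Protocol


class Environment(Protocol):
    def reset(self, feedback_delay: int, goal_drift: float, context_shift: float) -> dict: ...
    def step(self, action: str) -> tuple[dict, float, bool]: ...   # obs, reward, done


class Agent(Protocol):
    def act(self, observation: dict) -> str: ...
    def learn(self, observation: dict, action: str, reward: float) -> None: ...


def evaluate(agent: Agent, env: Environment, horizon: int = 10_000,
             feedback_delay: int = 100, goal_drift: float = 0.1,
             context_shift: float = 0.1) -> float:
    """Fraction of steps with positive reward over a long horizon."""
    obs = env.reset(feedback_delay, goal_drift, context_shift)
    successes = 0
    for _ in range(horizon):
        action = agent.act(obs)
        next_obs, reward, done = env.step(action)
        agent.learn(obs, action, reward)   # credit the observation the agent acted on
        successes += reward > 0
        obs = env.reset(feedback_delay, goal_drift, context_shift) if done else next_obs
    return successes / horizon
```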