May 12, 2025

Schaun Wheeler

Having Agency vs Acting Agentically

We throw around terms like "agency" when discussing AI, but the term itself lacks a clear definition. That's a problem that predates AI. The idea of "agency" raises questions that generations of philosophers haven't been able to answer:

❌ Must agency involve conscious goals or do instincts count?
❌ Can purely mental acts be a manifestation of agency?
❌ Is agency about causes or reasons?
❌ Is momentary action the same as long-term planning?
❌ Does coercion eliminate agency?

That's why, as far as AI is concerned, I recommend ignoring the problem instead of solving it. I'm serious. Here’s why:

A thing does not necessarily need to "have agency" in order to "act agentically."

In other words: instead of building machines that truly think for themselves, build machines whose actions are reliably difficult to distinguish from the actions of beings who think for themselves.

Acting agentically is just a matter of reasonably consistently passing a generalized Turing Test. The original test was framed specifically around language-based intelligence, but it extends easily to behavior. It's not about getting machines to "think". It's about getting machines to fake it well enough that it feels like they're thinking. We don't need machines to possess intentions; we need them to navigate situations that make intention look necessary.

In predictable, stable environments with clear feedback and repeatable rules, even LLMs — armed with nothing but procedural and working memory — can appear agentic. Within that limited scope, LLMs can give the impression of knowing what they're doing.
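To make that concrete, here is a minimal Python sketch (the rules and names are invented for illustration) of procedural-only behavior: a fixed observation-to-action table that reads as competent so long as the environment never surprises it.

```python
# A purely procedural "agent": a fixed observation-to-action table.
# All names here are hypothetical, chosen just to illustrate the point.
RULES = {
    "inventory_low": "reorder_stock",
    "inventory_ok": "do_nothing",
    "order_received": "ship_order",
}

def procedural_agent(observation: str) -> str:
    """Procedural memory only: the mapping never changes,
    no matter how any action turns out."""
    return RULES.get(observation, "escalate_to_human")

# In a stable environment, this passes casual inspection as
# "knowing what it's doing":
for obs in ["order_received", "inventory_low", "inventory_ok"]:
    print(obs, "->", procedural_agent(obs))
```

The moment the environment changes what "inventory_low" means, though, the mimicry breaks. That's the failure mode the next paragraph describes.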

But in environments where signals are delayed, feedback is ambiguous, goals conflict and change, and actions must generalize across shifting contexts, procedural mimicry fails the generalized Turing Test. Without semantic memory to build abstractions, and without associative learning to tie abstractions to expected outcomes (and therefore to next-action selection), there's no meaningful way to adapt to the environment with reasonably consistent success over an extended period of time.

As the need for sustained behavioral coherence stretches into hours, days, or weeks rather than minutes, acting agentically becomes less a matter of how well a machine follows instructions, and more a matter of how it decides what to do next, and why, and how it feeds the lessons from that decision forward into subsequent decisions. To act agentically over an extended period of time in an unstable information environment, AI requires semantic-associative learning. Procedural memory just isn't enough.
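Here is a rough Python sketch of what that loop might look like in miniature. Everything in it is a toy assumption (the abstraction function, the action names, the running-average update), but it shows the shape: abstract the situation, pick the action with the best expected outcome so far, and fold each result back into the estimates that drive the next decision.

```python
import random
from collections import defaultdict

class SemanticAssociativeAgent:
    """Toy sketch: semantic abstractions tied to running estimates of
    how well each action worked, which in turn drive action selection."""

    def __init__(self, actions, explore_rate=0.1):
        self.actions = actions
        self.explore_rate = explore_rate
        # Associative memory: (abstraction, action) -> [mean outcome, count]
        self.estimates = defaultdict(lambda: [0.0, 0])

    def abstract(self, observation: dict) -> tuple:
        # Stand-in for semantic memory: turn a raw observation into a
        # reusable abstraction. Here, just a coarse feature tuple.
        return (observation["context"], observation["urgency"] > 0.5)

    def act(self, observation: dict) -> str:
        key = self.abstract(observation)
        if random.random() < self.explore_rate:
            return random.choice(self.actions)  # occasional exploration
        # Exploit: the action with the best expected outcome so far.
        return max(self.actions, key=lambda a: self.estimates[(key, a)][0])

    def learn(self, observation: dict, action: str, outcome: float) -> None:
        # Feed the lesson forward: update the running mean for this
        # abstraction-action pair, even if the outcome arrived late.
        key = (self.abstract(observation), action)
        mean, n = self.estimates[key]
        self.estimates[key] = [mean + (outcome - mean) / (n + 1), n + 1]

agent = SemanticAssociativeAgent(actions=["send_offer", "wait"])
obs = {"context": "returning_user", "urgency": 0.8}
choice = agent.act(obs)
agent.learn(obs, choice, outcome=1.0)  # delayed feedback, once it arrives
```

The point isn't the specific update rule. It's that the memory the agent consults to choose an action is the same memory its outcomes rewrite, which is exactly what procedural memory alone can't do.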

We don't need to solve agency, but we do need a performance definition of what passing for agentic behavior looks like under increasing environmental complexity. Current LLM benchmarks either don't measure that kind of performance, or do measure it and show that LLMs consistently fall short.
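One way to sketch such a performance definition (the environment and episode hooks below are assumptions, not a real benchmark API): hold the agent to a fixed success rate while the horizon stretches and the environment drifts.

```python
def passes_agentic_bar(agent, make_environment, run_episode,
                       horizons=(10, 100, 1000), trials=50,
                       threshold=0.8) -> dict:
    """Hypothetical performance definition of 'acting agentically':
    at each horizon, the agent must succeed in at least `threshold`
    of trials in an environment whose rules drift over time.
    `make_environment` and `run_episode` are assumed hooks that
    build a drifting environment and return 1 on success, 0 on failure."""
    results = {}
    for horizon in horizons:
        wins = sum(
            run_episode(agent, make_environment(horizon, drift=True))
            for _ in range(trials)
        )
        results[horizon] = wins / trials >= threshold
    return results
```

A system that clears the bar at a 10-step horizon but not at 1,000 steps is exactly the case the previous paragraphs describe: procedural mimicry that looks agentic until coherence has to be sustained.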
