An LLM, by itself, cannot be truly agentic. This is also true of swarms, teams, workflows, and other kinds of "multi-agent" systems. If an LLM is doing all of the driving, then you're dealing with something other than an agentic system. That's why, at Aampe, LLMs are an appendage, not a foundation.
LLMs excel at two types of "thinking":
Procedural memory lets us automate skills (think writing a sentence or riding a bike).
Working memory holds and juggles information in the moment (like keeping a phone number in mind).
Those two types of thinking are a big part of what defines human cognition, but humans rely on other types of thinking as well:
Semantic memory stores abstract concepts (knowing what “sustainability” really means).
Associative learning links those concepts to outcomes (learning that stressing sustainability drives engagement).
Imagine someone who follows every cooking step flawlessly but has no idea how the dish tastes, whether it was liked, or how to improve it. Without semantic understanding or feedback associations, true adaptation—and true agency—can't happen.
Agency ≠ next-token prediction
A truly agentic system must decide *when* and *how* to act on its own, without waiting for step-by-step instructions. LLMs only ever predict the next token given a prompt. They don't decide to prompt themselves.
Real autonomy needs semantic/associative learning
An agent needs to consolidate experiences into categories of transferable meaning and tie those categories to outcomes (not just textual feedback). This is what allows an agent to form opinions about which strategies to pursue or avoid over time.
Recent LLM advances like retrieval-augmented generation (where external information gets incorporated into the prompt) or session-summary "memory" features (where past interactions are summarized and re-fed as context) are clever procedural hacks, but they aren't semantic/associative memory. They help preserve surface continuity and (sometimes) reduce hallucinations. They don't build conceptual understanding or learn from outcomes.
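To make that concrete, here is a minimal, illustrative sketch (the `retrieve` function is a hypothetical stand-in for any vector or keyword search) of what retrieval augmentation actually does: it changes what goes into the next prompt, and nothing else.

```python
def rag_prompt(user_query: str, retrieve) -> str:
    """Retrieval-augmented generation, reduced to its essence: fetch text
    related to the query and paste it into the prompt. The system's 'memory'
    lives entirely inside this string; nothing records whether the eventual
    answer worked, so nothing is learned from outcomes."""
    context = "\n\n".join(retrieve(user_query))  # hypothetical search over an external corpus
    return f"Context:\n{context}\n\nQuestion: {user_query}\nAnswer:"
```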
Suppose you want an agent that notices a user ignored your "20% off" push yesterday and so, today, tries a new message emphasizing how your return policy makes shopping risk-free. An LLM can draft both of those messages, but it won't autonomously pick the second one unless you explicitly prompt it to. There's no automatic adaptation, no evolving preferences. No agency.
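By contrast, that kind of adaptation needs somewhere to put the outcome. Here is a hedged sketch (not Aampe's actual implementation; the strategy labels and smoothing rule are assumptions for illustration) of an associative memory that ties message strategies to observed engagement and uses those observations to choose the next attempt.

```python
from collections import defaultdict
import random

class AssociativeMemory:
    """Tracks, per message strategy, how often it led to engagement.
    This outcome-linked bookkeeping is what an LLM alone does not do."""

    def __init__(self):
        self.sent = defaultdict(int)     # times each strategy was tried
        self.engaged = defaultdict(int)  # times it produced engagement

    def record(self, strategy: str, engaged: bool) -> None:
        self.sent[strategy] += 1
        self.engaged[strategy] += int(engaged)

    def score(self, strategy: str) -> float:
        # Smoothed engagement rate; untried strategies get an optimistic prior
        return (self.engaged[strategy] + 1) / (self.sent[strategy] + 2)

    def choose(self, strategies: list[str]) -> str:
        # Pick the strategy with the best learned outcome, breaking ties randomly
        best = max(self.score(s) for s in strategies)
        return random.choice([s for s in strategies if self.score(s) == best])

# The scenario from above, with hypothetical strategy labels:
memory = AssociativeMemory()
memory.record("discount_20_off", engaged=False)  # yesterday's push was ignored
today = memory.choose(["discount_20_off", "risk_free_returns"])
print(today)  # -> "risk_free_returns": the strategy shifts on its own
```

The second message wins because of something the system experienced, not because anyone re-prompted it.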
Fully agentic systems require a hybrid architecture (a rough sketch of how the pieces fit follows the list):
A semantic-associative learner that builds and updates long-term user profiles.
A procedural actor that generates fluent, on-brand content (or retrieves that content from a large pre-populated inventory).
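Here is a minimal sketch of how those two pieces might compose, assuming hypothetical class and method names: the learner owns the decision about *what* to say, and the procedural actor (an LLM call or a content lookup in practice) owns *how* to say it.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticAssociativeLearner:
    """Long-term profile: per-user engagement stats for each message strategy."""
    stats: dict = field(default_factory=dict)  # (user_id, strategy) -> [times_sent, times_engaged]

    def update(self, user_id: str, strategy: str, engaged: bool) -> None:
        sent, hits = self.stats.get((user_id, strategy), [0, 0])
        self.stats[(user_id, strategy)] = [sent + 1, hits + int(engaged)]

    def pick_strategy(self, user_id: str, strategies: list[str]) -> str:
        def rate(strategy: str) -> float:
            sent, hits = self.stats.get((user_id, strategy), [0, 0])
            return (hits + 1) / (sent + 2)  # smoothed engagement rate
        return max(strategies, key=rate)


class ProceduralActor:
    """Procedural side: turns a chosen strategy into fluent, on-brand copy.
    In practice this would call an LLM or pull from a pre-populated inventory."""

    def draft_message(self, user_id: str, strategy: str) -> str:
        return f"[{strategy} message for {user_id}]"  # placeholder for real generation


def next_push(user_id: str, learner: SemanticAssociativeLearner,
              actor: ProceduralActor, strategies: list[str]) -> tuple[str, str]:
    strategy = learner.pick_strategy(user_id, strategies)    # decided by learned outcomes
    return strategy, actor.draft_message(user_id, strategy)  # rendered by the procedural layer
```

The division of labor is the point: swap out the actor and the decision-making (and therefore the agency) stays intact.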
An agent that can't truly choose its own next move is not an agent. It's just a novel interface for the same information retrieval, content management, and marketing automation systems that we've had for years.