One of the most important components of an agentic system is its abstraction layer — a framework that organizes surface-level details into higher-level, strategic categories.
Abstraction enables transfer learning: it allows your system to take insights from one context and apply them to another. That's essential to agentic learning, and therefore core to how Aampe functions.
Take a culinary analogy: you might know recipes for Chicken Tikka Masala, Chicken Caesar Salad, and Chicken & Waffles. But just because they all contain chicken doesn't mean it's a good idea to mash them all together. Real culinary skill comes from understanding why recipes work — the role of acids, fats, heat, salt, aromatics, binders, and so on. These are semantic categories that generalize across dishes.
Same goes for customer engagement. Non-agentic systems test surface elements — entire messages, subject lines, and so on. That leaves you with insights that don’t transfer: a successful A/B test of two messages just means you have two messages that many of your users have already seen. It doesn't tell you much about what you should send next, and even less about what you should send after that.
An agentic system, by contrast, understands and manipulates the communication categories: value propositions, tones of voice, product types, incentive levels, calls to action. These semantic categories are the embodiment of your strategy. If you get them right, you can create any number of tactical variants without repeating yourself — and without losing your strategic thread.
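To make the combinatorics concrete, here is a minimal sketch in Python (with hypothetical category values, not Aampe's actual taxonomy) of how a handful of semantic categories multiply into many tactical variants:

```python
from itertools import product

# Hypothetical semantic categories -- the strategic levers, not the copy itself.
value_props = ["save time", "save money"]
tones = ["playful", "urgent"]
ctas = ["shop now", "learn more"]

# Each combination of category values defines a strategic slot that any
# number of concrete messages can fill.
strategies = list(product(value_props, tones, ctas))
print(len(strategies))  # 2 * 2 * 2 = 8 strategic combinations
```

Three categories with two values each already yield eight distinct strategies; add a fourth category and the space of tactical variants grows multiplicatively, while the number of things the system has to learn about stays small.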
The point of abstraction isn't to generalize everything; it's to generalize what matters. That requires fuzziness: fuzzy categories don’t map cleanly to any one piece of content, and that's exactly what makes them reusable, adaptable, and meaningful.
Messages, subject lines, items, teams, songs, movies — those are lexical categories, not semantic categories. If you want to bet on a lexical category, spin up a recommender system. If you want to bet on a semantic category (and the creativity that semantics enables within it), that's where reinforcement learning shines.
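One simple way to picture reinforcement learning over semantic categories is a multi-armed bandit where each arm is a category value rather than a fixed message. The sketch below uses an epsilon-greedy policy with made-up engagement rates; the category names and numbers are illustrative assumptions, not real data:

```python
import random

def epsilon_greedy(categories, reward_fn, rounds=1000, epsilon=0.1, seed=0):
    """Minimal epsilon-greedy bandit over semantic categories (a sketch)."""
    rng = random.Random(seed)
    counts = {c: 0 for c in categories}   # pulls per category
    totals = {c: 0.0 for c in categories} # cumulative reward per category
    for _ in range(rounds):
        if rng.random() < epsilon:
            choice = rng.choice(categories)  # explore
        else:
            # Exploit the best estimated category; untried arms are
            # treated optimistically so every arm gets sampled.
            choice = max(
                categories,
                key=lambda c: totals[c] / counts[c] if counts[c] else float("inf"),
            )
        reward = reward_fn(choice, rng)
        counts[choice] += 1
        totals[choice] += reward
    return counts

# Hypothetical engagement rates per tone-of-voice category.
rates = {"playful": 0.12, "urgent": 0.08, "reassuring": 0.15}
counts = epsilon_greedy(list(rates), lambda c, rng: rng.random() < rates[c])
```

Because the arms are semantic categories, whatever the bandit learns about "reassuring" transfers to every future message written in that tone, rather than being locked to one piece of copy the audience has already seen.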
If your system doesn’t have an abstraction layer of semantic categories, it’s not agentic. Or, to be more precise: it's incapable of learning agentically.