Jun 5, 2025
Schaun Wheeler

Agentic Architecture in Action: Aampe's Dual-Path Personalization Strategy

Here is a diagram of our agentic architecture (well, part of it). See the top-right box: "recommender service"? Let’s talk about that. At Aampe, we split copy personalization into two distinct decisions:

  • Which item to recommend

  • How to compose the message that delivers it

Each calls for a different approach.

For item recommendations, we use classical recommender systems: collaborative filtering, content-based ranking, etc. These are built to handle high-cardinality action spaces — often tens or hundreds of thousands of items — by leveraging global similarity structures among users and items.
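The post doesn't show Aampe's recommender internals, but the collaborative-filtering idea can be illustrated with a minimal item-based sketch. The data, function names, and scoring rule below are our own stand-ins, not Aampe's implementation:

```python
import numpy as np

# Toy user-item interaction matrix: rows = users, columns = items.
# A 1 means the user engaged with that item. (Illustrative only --
# real recommenders operate over tens of thousands of items.)
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 0, 0, 0],
], dtype=float)

def item_similarity(matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between item columns."""
    norms = np.linalg.norm(matrix, axis=0, keepdims=True)
    norms[norms == 0] = 1.0  # guard against items no one has seen
    normalized = matrix / norms
    return normalized.T @ normalized

def recommend(user_idx: int, matrix: np.ndarray, k: int = 2) -> list:
    """Score items by similarity to the user's history; return top-k unseen."""
    sim = item_similarity(matrix)
    scores = sim @ matrix[user_idx]          # aggregate similarity to seen items
    scores[matrix[user_idx] > 0] = -np.inf   # exclude already-seen items
    return list(np.argsort(scores)[::-1][:k])

print(recommend(1, interactions))  # → [1, 4]
```

The key property this illustrates: user 1 never interacted with item 1, but gets it recommended because similar users did. That generalization across the user/item matrix is what makes recommenders viable in high-cardinality spaces where per-user trial and error would never converge.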

For message personalization, we take a different route. Each user has a dedicated semantic-associative agent that composes messages modularly — choosing tone, value proposition, incentive type, product category, and call to action. These decisions use a variant of Thompson sampling, with beta distributions derived from each user’s response history.
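The per-user agent described above can be sketched as Thompson sampling over independent beta posteriors, one per option per module. The module names and options below are hypothetical stand-ins, not Aampe's actual action sets:

```python
import random

# Hypothetical copy modules -- placeholders for illustration only.
COPY_MODULES = {
    "tone": ["playful", "urgent", "informative"],
    "value_prop": ["save_money", "save_time", "exclusive_access"],
    "cta": ["shop_now", "learn_more", "claim_offer"],
}

class CopyAgent:
    """One agent per user: a Beta(successes+1, failures+1) posterior per option."""

    def __init__(self):
        self.counts = {
            module: {opt: [0, 0] for opt in opts}   # [successes, failures]
            for module, opts in COPY_MODULES.items()
        }

    def compose(self) -> dict:
        """Thompson sampling: draw from each option's posterior, keep the max."""
        choice = {}
        for module, opts in self.counts.items():
            sampled = {
                opt: random.betavariate(s + 1, f + 1)  # uniform Beta(1,1) prior
                for opt, (s, f) in opts.items()
            }
            choice[module] = max(sampled, key=sampled.get)
        return choice

    def update(self, choice: dict, converted: bool):
        """Credit every module option used in the delivered message."""
        for module, opt in choice.items():
            self.counts[module][opt][0 if converted else 1] += 1

agent = CopyAgent()
message = agent.compose()   # e.g. {"tone": "urgent", "value_prop": ..., "cta": ...}
agent.update(message, converted=True)
```

Because each draw comes from a posterior rather than a point estimate, the agent keeps exploring under-tested options while increasingly favoring what has worked for that specific user.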

Why split the system this way? Sometimes you want to send content without recommending an item — having two separate processes makes that easier. But there are deeper reasons why recommender systems suit item selection and reinforcement learning suits copy composition:


  1. Cardinality

    The item space is vast — trial-and-error is inefficient. Recommenders generalize across users/items. Copy has a smaller, more personal space where direct exploration works well.


  2. Objectives

    Item recommendations aim at discovery — surfacing new or long-tail content. Copy is about resonance — hitting the right tone based on past response.


  3. Decision structure

    Item selection is often a single decision. Copy is modular — interdependent parts that must cohere. Perfect for RL over structured actions.


  4. Hidden dimensions

    Item preferences stem from stable traits like taste or relevance. Copy preferences shift quickly and depend on context — ideal for RL’s recency-weighted learning.


  5. Reward density

    Item responses are sparse: a purchase or conversion is a rare event relative to the size of the catalog. Copy responses are dense: every message delivery yields feedback (a send, an open, a click, or silence), which is enough to train per-user RL agents, if interpreted correctly.
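The recency weighting from point 4 can be sketched as an exponential decay applied to the beta counts before each update. The decay rate here is an assumed tuning parameter for illustration, not Aampe's actual value:

```python
def decayed_update(successes: float, failures: float,
                   converted: bool, decay: float = 0.95) -> tuple:
    """Exponentially down-weight old evidence so the beta posterior
    tracks recent behavior rather than the all-time average."""
    successes *= decay
    failures *= decay
    if converted:
        successes += 1.0
    else:
        failures += 1.0
    return successes, failures

# Ten early failures, then five recent successes:
s, f = 0.0, 0.0
for outcome in [False] * 10 + [True] * 5:
    s, f = decayed_update(s, f, outcome)

print(round(s / (s + f), 2))  # → 0.42, versus 5/15 ≈ 0.33 without decay
```

The older failures still count, but less: the posterior mean shifts toward the user's recent behavior, which is exactly what you want when copy preferences drift with context.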

In short: recommenders find cross-user/item patterns in large spaces. RL adapts to each user in real time over structured choices. Aampe uses both — each matched to the decision it’s best for.


Related

Renewals, holidays, and launches don’t need hardcoded rules. With reward signals, eligibility criteria, and timing action sets, agents adapt naturally to recurring patterns.

Aug 21, 2025

Schaun Wheeler

By modeling statistical relationships between events, agents evaluate directional shifts in behavior—so the same system adapts across every lifecycle stage.

Aug 19, 2025

Schaun Wheeler

You don’t coach by chasing the trophy. You coach by tracking whether each play puts you in a stronger position. The same is true for customer engagement.

Jul 23, 2025

Schaun Wheeler

A/B tests help us see what works on average, but real users aren’t average, their motivations and contexts vary. That’s where agentic learning shines, adapting to individuals over time. The best results come when we layer the two: tests for clarity, agents for personalization.
