Jun 2, 2025
Schaun Wheeler

Reimagining Customer Engagement: Beyond Hierarchical Reinforcement Learning

Jun 2, 2025
Schaun Wheeler

Reimagining Customer Engagement: Beyond Hierarchical Reinforcement Learning

Jun 2, 2025
Schaun Wheeler

Reimagining Customer Engagement: Beyond Hierarchical Reinforcement Learning

Jun 2, 2025
Schaun Wheeler

Reimagining Customer Engagement: Beyond Hierarchical Reinforcement Learning

Following up on one of my recent posts about using bandits as the anchor point for how we think about agentic learning:

Yes, we’ve stretched the bandit framing well beyond its usual territory: multi-dimensional action spaces, non-ergodic structure, per-user learning. But it’s still a better conceptual fit than alternatives like hierarchical RL.

Customer engagement isn’t about planning journeys. It’s about reacting to the present: selecting the best next action based on what we currently know, not on where we imagined the user would be at this point in a pre-designed arc.

Hierarchical RL assumes temporal structure: first A, then B, then C. You learn sub-policies that unfold over time. That works for robotics, navigation, task decomposition. It doesn’t match the shape of customer behavior, which is noisy, nonlinear, and context-sensitive.

What we’re doing instead is selecting, in parallel, the best options from a bunch of overlapping action sets — tone, CTA, incentive, channel, timing — to form a single composite action to take *right now*. Then we see what happens and update. No storyline, no path planning. Just a series of bets, each one grounded in local context.
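The parallel-selection idea can be sketched with a simple Thompson-sampling bandit per dimension. This is an illustrative toy, not Aampe’s actual implementation: the action sets, the Beta-Bernoulli reward model, and the shared-credit update are all assumptions made for the example.

```python
import random

# Hypothetical overlapping action sets; each dimension is its own bandit.
ACTION_SETS = {
    "tone": ["playful", "formal", "urgent"],
    "cta": ["shop_now", "learn_more"],
    "channel": ["push", "email", "sms"],
    "timing": ["morning", "evening"],
}

class CompositeBandit:
    def __init__(self, action_sets):
        # Beta(1, 1) prior (uniform) for every option in every dimension.
        self.params = {
            dim: {opt: [1.0, 1.0] for opt in opts}
            for dim, opts in action_sets.items()
        }

    def select(self):
        # Thompson sampling: draw one sample from each option's posterior,
        # take the per-dimension argmax, and assemble the composite action.
        action = {}
        for dim, opts in self.params.items():
            samples = {o: random.betavariate(a, b) for o, (a, b) in opts.items()}
            action[dim] = max(samples, key=samples.get)
        return action

    def update(self, action, reward):
        # Binary reward (e.g. the user engaged): every chosen option's
        # posterior is updated, so all dimensions share credit for the outcome.
        for dim, opt in action.items():
            a, b = self.params[dim][opt]
            self.params[dim][opt] = [a + reward, b + (1 - reward)]

bandit = CompositeBandit(ACTION_SETS)
action = bandit.select()   # e.g. {"tone": "urgent", "cta": "shop_now", ...}
bandit.update(action, reward=1)
```

No sub-policies, no temporal decomposition: each call to `select` is a fresh bet grounded in whatever the posteriors currently say, which is the point of the contrast with hierarchical RL.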

So yes, we’ve moved pretty far from textbook bandits. But I'd say we’re even farther from hierarchical RL. And more importantly, this framing reflects how we think engagement really works: not as a narrative, but as a stream of decisions under uncertainty.
