Following up on one of my recent posts about using bandits as the anchor point for how we think about agentic learning:
Yes, we’ve stretched the bandit framing well beyond its usual territory (multi-dimensional action spaces, non-ergodic structure, per-user learning), but it’s still a better conceptual fit than alternatives like hierarchical RL.
Customer engagement isn’t about planning journeys. It’s about reacting to the present — selecting the best next action based on what we currently know, not on where we imagined the user might be at this point in a pre-designed arc.
Hierarchical RL assumes temporal structure: first A, then B, then C. You learn sub-policies that unfold over time. That works for robotics, navigation, task decomposition. It doesn’t match the shape of customer behavior, which is noisy, nonlinear, and context-sensitive.
What we’re doing instead is selecting, in parallel, the best options from a bunch of overlapping action sets — tone, CTA, incentive, channel, timing — to form a single composite action to take *right now*. Then we see what happens and update. No storyline, no path planning. Just a series of bets, each one grounded in local context.
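To make that concrete, here’s a minimal sketch of the pattern, not our production system: one independent Beta-Bernoulli Thompson sampler per dimension, with each dimension’s winner combined into a single composite action, and every participating option credited against whatever engagement signal comes back. The dimension names, options, and the engagement signal are all placeholders, and for brevity this version drops the per-user context that the real thing conditions on.

```python
import random
from collections import defaultdict

# Hypothetical action sets; the real dimensions and options are assumptions.
ACTION_SETS = {
    "tone": ["friendly", "urgent", "neutral"],
    "cta": ["buy_now", "learn_more", "remind_me"],
    "incentive": ["none", "10_pct_off", "free_shipping"],
    "channel": ["email", "push", "sms"],
    "timing": ["morning", "midday", "evening"],
}

class CompositeBandit:
    """One Beta-Bernoulli Thompson sampler per (dimension, option) pair.
    Each dimension is chosen independently, then combined into one composite action."""

    def __init__(self, action_sets):
        self.action_sets = action_sets
        # Success / failure counts per (dimension, option), starting from a uniform prior.
        self.alpha = defaultdict(lambda: 1.0)
        self.beta = defaultdict(lambda: 1.0)

    def select(self):
        """Sample a plausible win rate for every option and pick the best per dimension."""
        composite = {}
        for dim, options in self.action_sets.items():
            samples = {
                opt: random.betavariate(self.alpha[(dim, opt)], self.beta[(dim, opt)])
                for opt in options
            }
            composite[dim] = max(samples, key=samples.get)
        return composite

    def update(self, composite, engaged):
        """Credit (or debit) every option that took part in the composite action."""
        for dim, opt in composite.items():
            if engaged:
                self.alpha[(dim, opt)] += 1.0
            else:
                self.beta[(dim, opt)] += 1.0

# Usage: one decision, one observation, one update. No planned journey.
bandit = CompositeBandit(ACTION_SETS)
action = bandit.select()          # e.g. {"tone": "urgent", "cta": "buy_now", ...}
engaged = random.random() < 0.1   # stand-in for whatever engagement signal we observe
bandit.update(action, engaged)
```

The point of the sketch is the shape of the loop: select in parallel, act once, observe, update, repeat. Everything else (contextual features, per-user state, smarter credit assignment across dimensions) layers on top of that loop rather than replacing it.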
So yes, we’ve moved pretty far from textbook bandits. But I’d say we’re even farther from hierarchical RL. And more importantly, this framing reflects how we think engagement really works: not as a narrative, but as a stream of decisions under uncertainty.