Aug 9, 2024
Schaun Wheeler

Optimized Messaging with Learned Weights and Beta Distribution

AI agents and tagged weights

Agentic learners encode their learnings as tagged weights. At Aampe, for example, we start a new customer's agents with 35 timing tags: five three-hour increments (covering waking hours) over seven days of the week. Each agent updates the weights for each tag based on the way its assigned user responds to messages sent during each window.
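To make that concrete, here's a minimal sketch of what a set of timing tags could look like. The window boundaries (8:00 to 23:00) and the starting weight and signal values are illustrative placeholders, not our actual configuration:

```python
from itertools import product

DAYS = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]
# Five 3-hour windows covering waking hours; the 8:00-23:00 span is an assumption.
WINDOWS = ["08-11", "11-14", "14-17", "17-20", "20-23"]

# One (weight, signal) pair per timing tag, 35 tags in total.
# Starting values are placeholders; each agent updates them as its user responds.
timing_tags = {
    f"{day}_{window}": {"weight": 0.5, "signal": 0.0}
    for day, window in product(DAYS, WINDOWS)
}

assert len(timing_tags) == 35
```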

Our agents also maintain tagged weights for channel, value proposition, and other copy choices, but timing actions present a unique challenge: each timing decision must consider the possibility of not taking any action at all. Friday night may be a better bet than Wednesday early afternoon, but Friday night might still be a bad bet in and of itself. The more we act on bad bets, the more we annoy users. Push notifications, emails, and other surfaces for user interaction are a gold mine of first-party data. Messaging too much dynamites the mine. No one wants that.

So, agents need a mechanism for making a go/no-go decision for every available timing slot. We use a random draw from a beta distribution to do this. The beta distribution takes an alpha and beta parameter, but there's an alternative parameterization that I find more intuitive: it takes a probability and a measure of signal strength.
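Concretely (a sketch in my own notation, not necessarily the notation used inside Aampe): treat the probability as the distribution's mean and the signal as its concentration, then convert back to alpha and beta like this:

```python
def to_alpha_beta(probability: float, signal: float) -> tuple[float, float]:
    """Convert (mean probability, signal strength) into beta-distribution parameters.

    Assumes signal > 0; alpha + beta equals the signal strength, and
    alpha / (alpha + beta) equals the probability.
    """
    alpha = probability * signal
    beta = (1.0 - probability) * signal
    return alpha, beta
```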

The use of probability and signal strength in messaging decisions

Imagine we estimate that a user has a 75% probability of responding on Friday night. If that estimate is based on having sent only one message, it warrants a lot less confidence than if it's based on 20 messages.

If we take the number of messages as our measure of signal strength, a 0.75 probability based on a signal strength of 1 would correspond to an alpha of 0.75 and a beta of 0.25. Draw 1000 samples from that distribution, and on average they'll approximate 0.75, but the inter-quartile range will run from around 0.57 to 0.99. On the other hand, if you have a signal of 20 (alpha=15, beta=5), those 1000 samples will still average 0.75, but the inter-quartile range will run from around 0.69 to 0.82.
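Those ranges are easy to reproduce with a quick simulation (the exact numbers will wobble a bit from run to run):

```python
import numpy as np

rng = np.random.default_rng(0)

for signal in (1, 20):
    # Same conversion as above: alpha + beta = signal, mean = 0.75.
    alpha, beta = 0.75 * signal, 0.25 * signal
    samples = rng.beta(alpha, beta, size=1000)
    q25, q75 = np.percentile(samples, [25, 75])
    print(f"signal={signal:>2}  mean={samples.mean():.2f}  IQR=({q25:.2f}, {q75:.2f})")
```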

So, when our agents decide whether to message at a particular time, they look up the probability (weight) and signal for the timing tag in question and use them to draw a sample from the corresponding beta distribution. If the draw is over a certain threshold (default: 0.5), the agent sends the message; otherwise, the agent skips it.
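As a sketch (not our production code), the per-slot decision is just a few lines:

```python
import numpy as np

def should_send(weight: float, signal: float, threshold: float = 0.5,
                rng: np.random.Generator | None = None) -> bool:
    """Go/no-go decision for one timing slot.

    Assumes signal > 0 so that alpha and beta are both positive.
    """
    rng = rng or np.random.default_rng()
    alpha = weight * signal          # e.g. weight=0.75, signal=20 -> alpha=15
    beta = (1.0 - weight) * signal   #                              -> beta=5
    return rng.beta(alpha, beta) > threshold
```

With a high, well-supported weight, the draw clears the threshold almost every time; with the same weight but a weak signal, the draws spread out and the agent skips the slot noticeably more often.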

To give you some intuition around how this works, I've plotted out the distribution of weekly messaging frequency that will result from all 35 timing tags having a particular weight and signal strength. Learned weights and the beta distribution give Aampe agents a straightforward way to individually optimize timing and message frequency at the same time.

[Figure: distributions of weekly messaging frequency for different combinations of timing-tag weight and signal strength]
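A quick way to build the same intuition yourself is to simulate it: hold a single weight and signal fixed across all 35 slots, count how many slots clear the threshold in a week, and repeat. (The example weights and signals below are mine, not the ones behind the figure.)

```python
import numpy as np

def weekly_send_counts(weight: float, signal: float, threshold: float = 0.5,
                       n_weeks: int = 10_000, n_slots: int = 35,
                       seed: int = 0) -> np.ndarray:
    """Messages sent per simulated week when all 35 timing slots share one weight/signal."""
    rng = np.random.default_rng(seed)
    alpha, beta = weight * signal, (1.0 - weight) * signal
    draws = rng.beta(alpha, beta, size=(n_weeks, n_slots))
    return (draws > threshold).sum(axis=1)

# A weakly supported 0.6 weight vs. the same weight with much stronger evidence.
for signal in (2, 20):
    counts = weekly_send_counts(weight=0.6, signal=signal)
    print(f"signal={signal:>2}  mean sends/week={counts.mean():.1f}  "
          f"middle 50% of weeks: {np.percentile(counts, 25):.0f}-{np.percentile(counts, 75):.0f}")
```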
