A/B testing in the Workbench
Use A/B testing to compare variations of the same Action Flow experience and ship the option that performs best. This page focuses on how to run an experiment entirely from the Workbench.
What you need before you start
- Access to the Workbench with permission to edit and publish the Action Flow you want to experiment on.
- A draft Action Flow (or a copy of an existing one) that contains the card or step you want to test.
- A clear success metric, captured with a Track a goal step (for example, form completion, click-through, or an API response).
- A small set of test customers so you can validate the experiment before it reaches live customers.
Create an experiment with branches
- In the Workbench, open the Action Flow you want to test and create or edit a draft.
- Insert a Branch flow step where you want traffic to split. Place it immediately after the trigger to test alternative first-touch experiences, or after a specific step if you are testing a single card.
- Add two (or more) branches and label them clearly (for example, Variant A and Variant B). Use the branch configuration to assign the percentage of customers who should enter each branch (for example, 50/50 to start, or 10/90 for a slow rollout).
- In each branch, add the steps that make that variant unique. Common patterns include:
  - A different Send a card step with alternative copy, imagery, or CTA.
  - A different Send a request step to trial a new backend call or parameter.
- After the branches rejoin, add (or keep) a Track a goal step that represents the outcome you are measuring. This ensures both variants are measured against the same success definition.
- Save your draft and run a quick test using test customers to confirm both branches render correctly before publishing.
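Conceptually, a percentage-based split like the one above behaves like deterministic bucketing: a stable customer identifier is hashed into one of 100 buckets, and the bucket is compared against the cumulative branch weights. The sketch below illustrates that idea only; the function name, hashing scheme, and weight format are assumptions for illustration, not the Workbench's actual implementation:

```python
import hashlib

def assign_branch(customer_id: str, weights: dict[str, int]) -> str:
    """Assign a customer to a branch using a stable hash.

    `weights` maps branch labels to integer percentages summing to 100,
    e.g. {"Variant A": 50, "Variant B": 50}.
    """
    assert sum(weights.values()) == 100, "branch percentages must total 100"
    # Stable hash -> bucket in [0, 100); the same customer always lands
    # in the same bucket, so assignment is sticky across repeated runs.
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    cumulative = 0
    for branch, pct in weights.items():
        cumulative += pct
        if bucket < cumulative:
            return branch
    return branch  # unreachable when weights sum to 100
```

Sticky, hash-based assignment is what keeps a returning customer in the same variant, which matters when your goal event can fire on a later visit.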

Roll out and control traffic
- Start cautiously by allocating a small percentage to the new experience (for example, 10% to Variant B and 90% to Variant A) and increase the share as confidence grows.
- To stop sending a variant, set its branch percentage to 0% (or remove the branch) and republish. To ship the winner, set that branch to 100% and clean up the experiment when you no longer need it.
- Use participation modes to decide whether customers can re-enter the flow during the experiment.
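A gradual ramp like the one described above can be planned as a simple schedule, with a sanity check that every step's percentages total 100 before republishing. A hypothetical sketch (the schedule values are examples, not product defaults):

```python
# Each entry is the branch allocation for one rollout step.
ramp = [
    {"Variant A": 90, "Variant B": 10},   # cautious start
    {"Variant A": 75, "Variant B": 25},
    {"Variant A": 50, "Variant B": 50},
    {"Variant A": 0,  "Variant B": 100},  # ship the winner
]

def validate(schedule):
    """Raise if any rollout step's percentages do not total 100."""
    for step in schedule:
        total = sum(step.values())
        if total != 100:
            raise ValueError(f"percentages must total 100, got {total}: {step}")
    return True
```

Checking the schedule up front avoids publishing a branch configuration that silently drops a share of your traffic.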
Monitor results in the Workbench
- Overview tab: Track how many customers entered each branch and compare the goal completions and conversion rate for each variant.
- Activity tab: Inspect recent runs to verify that customers are being assigned to the expected branch and that goal events are firing.
- Analytics exports and debugger: Use the Analytics Debugger or your downstream analytics to break down metrics by branch name and goal outcome.
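When comparing the conversion rates you read off the Overview tab, a two-proportion z-test is a standard way to judge whether the observed difference is more than noise. A minimal sketch using only the standard library; the counts in the example are placeholders, not real results:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Example: Variant A converts 120/1000, Variant B converts 150/1000.
z = two_proportion_z(120, 1000, 150, 1000)
# |z| > 1.96 corresponds to p < 0.05 on a two-sided test.
```

If both branches receive roughly equal traffic, the per-branch counts feed straight into this check; with skewed splits (such as 10/90), the unequal sample sizes are already accounted for by the standard-error term.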
Good practices
- Keep variants narrowly focused so you can attribute performance differences to a single change.
- Align on a sample size and evaluation window before declaring a winner to avoid noisy reads.
- When experimenting on critical journeys, include a Stop flow fallback branch to safely halt execution if you detect an issue.
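The sample-size point above can be made concrete with the standard approximation for comparing two proportions. A sketch at 95% confidence and 80% power; the baseline rate and minimum detectable effect in the example are assumptions you should replace with your own:

```python
import math

def sample_size_per_variant(p_base, mde, z_alpha=1.96, z_beta=0.84):
    """Approximate customers needed per branch to detect an absolute lift
    of `mde` over baseline conversion `p_base` (95% confidence, 80% power)."""
    p_b = p_base + mde
    variance = p_base * (1 - p_base) + p_b * (1 - p_b)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)

# Example: baseline 10% conversion, detect an absolute lift of 2 points.
n = sample_size_per_variant(0.10, 0.02)
```

Running this before you publish tells you roughly how long the experiment must stay live at your traffic volume, which in turn fixes the evaluation window you agree on up front.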