I run outbound like a growth experiment, but the results were too noisy to learn anything. We ran an A/B test across two angles and two audiences, and everything looked random: week one, one variant wins; week two, it flips. Reply rates bounced around. The temptation is to keep rewriting copy.

The real issue was deliverability drift. Bounce rate was trending up and inbox placement was becoming less stable. The experiment was not measuring copy; it was measuring who got delivered.

So I added a control layer (rough sketches at the end of this post):

* verify every batch before uploading
* never reuse lists older than 30 days
* split catch-alls into their own segment
* send catch-all segments at lower volume
* track bounce rate per segment, not overall

Recent batch:

* 2,400 leads
* non-catch-all segment bounce: around 0.8%
* catch-all segment bounce: around 3.1%
* once segmented, reply rate differences became much easier to interpret

Validator test: Emailawesome is currently winning for validation, mostly because its catch-all handling is more usable for segmentation and policy.

Question: if you treat outbound as a growth system, what controls do you use so your tests measure what you think they measure? The problem I am solving is catch-all efficiency: preserving deliverable volume while minimizing wasted sends that distort experiments.
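For the curious, here is roughly what the segmentation step looks like. This is a minimal sketch, not my production code: the `verify` callable stands in for whatever validation API you use, the `"deliverable"` / `"catch_all"` / `"undeliverable"` statuses, the lead field names, and the 25% volume cap are all assumptions you would swap for your own.

```python
from datetime import datetime, timedelta

MAX_LIST_AGE = timedelta(days=30)       # control: never reuse lists older than 30 days
CATCH_ALL_VOLUME_CAP = 0.25             # assumed policy: catch-alls at a fraction of normal volume

def segment_batch(leads, verify, now=None):
    """Split a verified batch into deliverable and catch-all segments,
    dropping undeliverables and anything from a stale list.

    leads:  list of dicts like {"email": ..., "list_built_at": datetime}
    verify: callable(email) -> "deliverable" | "catch_all" | "undeliverable"
            (hypothetical interface; adapt to your validator)
    """
    now = now or datetime.utcnow()
    segments = {"deliverable": [], "catch_all": []}
    for lead in leads:
        if now - lead["list_built_at"] > MAX_LIST_AGE:
            continue  # stale list: skip instead of resending
        status = verify(lead["email"])
        if status == "deliverable":
            segments["deliverable"].append(lead)
        elif status == "catch_all":
            segments["catch_all"].append(lead)
        # undeliverables are dropped entirely, never uploaded
    # cap catch-all volume so one risky segment cannot sink the domain
    cap = int(len(segments["deliverable"]) * CATCH_ALL_VOLUME_CAP)
    segments["catch_all"] = segments["catch_all"][:cap]
    return segments
```

The point of the cap is that catch-all addresses accept everything at SMTP time, so you only learn the true bounce rate after sending; keeping that segment small bounds the damage while you measure it.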
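And the per-segment tracking is nothing fancy, just refusing to blend the numbers. A sketch, assuming send events tagged with the segment they came from (the event shape is made up for illustration):

```python
def bounce_rate_per_segment(events):
    """events: list of dicts like {"segment": "catch_all", "bounced": True}.
    Returns bounce rate keyed by segment, so a risky segment
    cannot hide inside the blended average."""
    sent, bounced = {}, {}
    for e in events:
        seg = e["segment"]
        sent[seg] = sent.get(seg, 0) + 1
        bounced[seg] = bounced.get(seg, 0) + int(e["bounced"])
    return {seg: bounced[seg] / sent[seg] for seg in sent}
```

With numbers like the batch above (0.8% vs 3.1%), the blended rate lands somewhere in between and looks fine while the catch-all segment quietly drifts, which is exactly the noise that was wrecking the A/B reads.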