Users struggle to apply the Expected Value of Information concept because they lack guidance on defining its inputs: the cost of delay, how much they would pay for the information, and realistic uplift ranges. A feature within A/B testing tools that helped users estimate or define these parameters would make VoI calculations more practical and actionable.
Surely some things are not worth A/B testing... right??

I hear this all the time: "it's not worth A/B testing a bug fix, or some idea we're super confident in, or some tiiiny tweak we're making."

Well, guess what? You can determine that empirically! Don't just stand there shrugging your shoulders at me all "idk man."

If you can:
- define the stakes (baseline metrics and impact), and
- define a plausible range of possible outcomes (a prior),

you can put a specific dollar value on A/B testing whatever bug fix or sure bet or tweak you're trying to weasel your way into just shipping.

This is the Expected Value of Information. Calculate it, and you'll know: "if you can test it for less than $X,XXX... you should test it."

In my experience, "test everything" skeptics chronically underestimate this number, sometimes by an order of magnitude.

So I built an easy-to-use calculator. It even has a PNG export button at the end so you can Slack your coworkers the outcomes and settle arguments with MATH.

https://lnkd.in/dMAzEpTT

(Feedback very much welcome!)
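For the curious, the back-of-the-envelope version of that calculation can be sketched in a few lines of Python. Every number here (revenue at stake, horizon, the Normal prior's mean and spread) is a made-up illustration, not a default from the calculator, and this sketch assumes a *perfect* test (so it computes an upper bound, the expected value of perfect information):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example stakes -- illustrative numbers only:
annual_revenue = 5_000_000  # $/yr flowing through the affected flow
horizon_years = 1.0         # how long the shipped change will matter

# A plausible prior over relative lift: "probably a small win,
# could plausibly be mildly negative." Normal(0.5%, sd 2%) here.
lift = rng.normal(loc=0.005, scale=0.02, size=1_000_000)

dollar_impact = lift * annual_revenue * horizon_years

# Ship blind, and you eat the loss whenever the true lift is negative.
# A perfect test lets you ship only the winners, so the information is
# worth the expected loss it prevents: EVPI = E[max(0, -impact)].
evpi = np.maximum(0.0, -dollar_impact).mean()
print(f"Worth testing if the test costs less than ~${evpi:,.0f}")
```

Note that even with a prior centered on a positive lift, the EVPI comes out well above zero: the downside tail is exactly what the test exists to catch.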