Data-driven decision making is a loaded term in business, volleyed around by people who (not pointing fingers, but statistically speaking) could well be part of the more than 75% of the working population who don’t do math more complicated than basic fractions and percentages.
It’s become such a buzzword over the past few years that it’s almost a cringeworthy joke in some professional circles. I won’t even dive into the fact that it’s not a new phenomenon: you’ve been making decisions based on data (hence, data-driven decisions) since you were a child. A rudimentary example, but an example nonetheless: your favorite color as a toddler was the result of some data crunching based on associations with the toys, foods, books, and visuals you found most pleasing.
Why, then, are professionals constantly evangelizing that they are “data-driven decision makers”? Because the term makes them sound quantitative in a world of (as the stat above suggests) qualitative decision makers and personalities. As a growth-centric Product Strategist, I’ve most often been handed the “Data-Driven Decision Maker” sash, which to me is common sense. I can, however, appreciate that my approach of using customer-driven insight to promote and float growth is one of my biggest value adds to a project.
My path to developing this skill has been paved with hard conversations with some pretty important people holding some pretty impressive titles. Today’s post shares one of my A/B testing trouble spots and offers advice on steering out of tough discussions with entrepreneurs and C-level folks who heard about A/B testing at a conference and are hellbent on leveraging it to “make the right decision.”
My engagement was to rework the pricing page for an ecommerce business-services product: make it more self-service and channel traffic toward what was a pretty obvious sweet spot of features for the price point. The C-level executive in charge of the product line was adamant about using A/B testing, which I was excited about, as we seemed to see eye to eye on the toolset we wanted to use.
I had, in my mind, determined the A and B variants I wanted to test first. I had my plan for controls around timing, business hours, traffic allocation, split scenarios… the works. Then the C-level professional dropped the bomb of all bombs on a growth hacker: this person wanted the control, with nothing changed, as the A variant, while the B variant carried about six concrete changes.
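For reference, one common way to implement the traffic-allocation and split controls mentioned above is deterministic bucketing: hash each visitor's ID so the same person always sees the same variant for the life of the experiment. A minimal sketch in Python (the function name, experiment name, and 50/50 split are illustrative, not details from this engagement):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a visitor into variant A or B.

    Salting the hash with the experiment name means the same user
    can land in different buckets across different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex chars to a float in [0, 1)
    bucket = int(digest[:8], 16) / 0x100000000
    return "A" if bucket < split else "B"
```

Because assignment depends only on the inputs, returning visitors get a consistent experience and the split holds at roughly 50/50 across a large audience without storing any state.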
It was at this moment that I knew I had to share my concern that this was not a true A/B test but an A/B1B2B3B4B5B6 test, one in which the exact variable driving our lift (or our losses) couldn’t be determined, because each new variable introduction muddied the water. Try as I might, I made my hesitations known to no avail. Only after numerous exercises showing the A and B options to segments did this otherwise brilliant professional come to understand the true best-case implementation of an A/B test.
Sometimes you have to let people Go through it to Grow through it… like I always say.
Budget was lost, of course, but the exercise showed the C-suite what it means to actually dedicate a team to rigorous experimentation, not just A/B testing for the sake of checking it off as something we do to achieve results. It’s a common pitfall for companies: an exec is looking for new ways to garner results (their job is to seek and sustain growth, and results are just another word for growth in my world), and they often only hear about the end result of a course chosen thanks to rigorous A/B testing.
You are only going to get float, get growth, get results if you truly run an experiment with a fine-tooth comb and suss out exactly which variable is facilitating your growth. It could be that one piece of copy converts at a higher rate and a faster velocity than the other option, but you won’t know unless you test the copy head to head: A to B. You cannot bury your growth-facilitating variable in a batch of other changes and expect to reverse engineer the impetus behind your growth.
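To make “A to B, head to head” concrete: once each variant’s visitors and conversions are counted, a standard two-proportion z-test tells you whether the difference in copy performance is signal or noise. A hypothetical sketch in Python using only the standard library (the counts in the usage note are made up):

```python
from math import sqrt, erfc

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates.

    Returns (z, p_value); a small p_value means the gap between
    the two variants is unlikely to be random noise.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail probability
    return z, p_value
```

For example, `two_proportion_z(200, 10000, 260, 10000)` compares a 2.0% control against a 2.6% variant; the point is that this math only works when the copy is the sole difference between A and B.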
Keep it simple. Keep it traceable. Keep it rigorous.
As for how I handled the human side of the equation, it came down to letting the executive see why I was given the engagement in the first place. I know quantitative measurement, and I know how to run controls and get clean data. I let the A/B1B2B3… tests run a few cycles before restating my original hypothesis that a head-to-head, control-versus-single-change test would serve us better. Then I could see exactly what was moving the needle and report back to the C-suite.
Part of being a data-driven decision maker, really and truly, comes down to being a data-seeking individual. Part and parcel of that is presenting facts and stats when others want to pull you into a qualitative debate, and keeping a cool head when you know that what you’re being asked to do isn’t logical, isn’t mathematical, isn’t formulaic.
I eventually got my clean A/B experiment cycles, and we were able to test a number of changes and find what moved this customer through the funnel most successfully with the least spend on our end. I was then able to report back with degree-of-certainty percentages, and the numbers, my friends (when derived and implemented cleanly), never lie.
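Those degree-of-certainty percentages can come from a confidence interval on the lift: if the interval for B minus A sits entirely above zero, you can report the win at the stated confidence level. A rough sketch of a 95% Wald interval in Python (the counts here are invented for illustration, not the engagement's real numbers):

```python
from math import sqrt

def lift_confidence_interval(conv_a: int, n_a: int,
                             conv_b: int, n_b: int,
                             z: float = 1.96):
    """95% Wald interval for the absolute difference in conversion
    rates (variant B minus control A). z = 1.96 corresponds to 95%."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    diff = p_b - p_a
    # Standard error of the difference between two independent proportions
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff - z * se, diff + z * se
```

With, say, 200 conversions from 10,000 control visitors versus 260 from 10,000 variant visitors, the whole interval lands above zero, which is exactly the kind of clean, defensible number a C-suite can act on.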