When you run an experiment or analyze data, you want to know if your findings are "significant." But business relevance (i.e., practical significance) isn't always the same thing as confidence that a result isn't due purely to chance (i.e., statistical significance). This is an important distinction; unfortunately, statistical significance is often misunderstood and misused in organizations today. And yet because more and more companies are relying on data to make critical business decisions, it's an essential concept for managers to understand.

To better understand what statistical significance really means, I talked with Thomas Redman, author of Data Driven: Profiting from Your Most Important Business Asset. He also advises organizations on their data and data quality programs.

"Statistical significance helps quantify whether a result is likely due to chance or to some factor of interest," says Redman. When a finding is significant, it simply means you can feel confident that it's real, not that you just got lucky (or unlucky) in choosing the sample.

When you run an experiment, conduct a survey, take a poll, or analyze a set of data, you're taking a sample of some population of interest, not looking at every single data point that you possibly can. Consider the example of a marketing campaign. You've come up with a new concept and you want to see if it works better than your current one. You can't show it to every single target customer, of course, so you choose a sample group.

When you run the results, you find that those who saw the new campaign spent $10.17 on average, more than the $8.41 those who saw the old one spent. This $1.76 might seem like a big (and perhaps important) difference. But in reality you may have been unlucky, drawing a sample of people who do not represent the larger population; in fact, maybe there was no difference between the two campaigns and their influence on consumers' purchasing behaviors. This is called a sampling error, something you must contend with in any test that does not include the entire population of interest.

Redman notes that there are two main contributors to sampling error: the size of the sample and the variation in the underlying population. Think about flipping a coin five times versus flipping it 500 times. The more times you flip, the less likely you'll end up with a great majority of heads. The same is true of statistical significance: with bigger sample sizes, you're less likely to get results that reflect randomness. All else being equal, you'll feel more comfortable in the accuracy of the campaigns' $1.76 difference if you showed the new one to 1,000 people rather than just 25. Of course, showing the campaign to more people costs more, so you have to balance the need for a larger sample size with your budget.

Variation is a little trickier to understand, but Redman insists that developing a sense for it is critical for all managers who use data. Consider two charts, each expressing a different possible distribution of customer purchases under Campaign A. In the chart on the left (with less variation), most people spend roughly the same amount. Some people spend a few dollars more or less, but if you pick a customer at random, chances are pretty good that they'll be pretty close to the average. So it's less likely that you'll select a sample that looks vastly different from the total population, which means you can be relatively confident in your results. Compare that to the chart on the right (with more variation). Here, people vary more widely in how much they spend. The average is still the same, but quite a few people spend more or less, and if you pick a customer at random, chances are higher that they'll be pretty far from the average.
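The coin-flip intuition about sample size can be checked with a quick simulation. The sketch below (not from the article; the function name, trial count, and 80%-heads threshold are my own choices) estimates how often a fair coin produces a large majority of heads at two sample sizes:

```python
import random

random.seed(42)  # make the simulation reproducible

def share_of_extreme_runs(n_flips, n_trials=2_000, threshold=0.8):
    """Estimate the fraction of runs in which at least `threshold`
    of `n_flips` fair-coin tosses come up heads."""
    extreme = 0
    for _ in range(n_trials):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        if heads / n_flips >= threshold:
            extreme += 1
    return extreme / n_trials

print(share_of_extreme_runs(5))    # 4+ heads out of 5: happens fairly often
print(share_of_extreme_runs(500))  # 400+ heads out of 500: essentially never
```

With five flips, four or more heads occurs in roughly one run in five; with 500 flips, getting 80% heads is so unlikely the estimate is effectively zero. That is the sense in which bigger samples are less likely to "reflect randomness."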
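The same point can be made with the campaign numbers themselves. The sketch below is a hedged illustration, not the article's analysis: it invents normally distributed purchase data (the $6 standard deviation is an assumption) around the article's $8.41 and $10.17 means, then applies a Welch-style test statistic with a normal approximation for the p-value. The identical $1.76 gap can fail to reach significance at n = 25 yet be overwhelming at n = 1,000:

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(0)  # reproducible simulated data

def welch_z(a, b):
    """Welch-style statistic for a difference in means, with a
    normal approximation for the two-sided p-value (reasonable
    at these sample sizes)."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = (mean(b) - mean(a)) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

def simulate(n, mu, sigma=6.0):
    # Hypothetical purchase amounts: normal with the given mean/spread.
    return [random.gauss(mu, sigma) for _ in range(n)]

# Same $1.76 gap, two different sample sizes.
small_old, small_new = simulate(25, 8.41), simulate(25, 10.17)
big_old, big_new = simulate(1000, 8.41), simulate(1000, 10.17)

print(welch_z(small_old, small_new))  # often not significant
print(welch_z(big_old, big_new))      # a clearly significant difference
```

In practice you would use a proper t-test from a statistics library rather than this hand-rolled approximation; the sketch only shows the mechanics.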
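The role of variation can also be made concrete. A standard result is that the margin of error of a sample mean scales with the population's spread (sigma divided by the square root of n). The sketch below, with hypothetical spread values of $2 and $10, shows that at the same sample size a more variable population gives a proportionally wider margin:

```python
from statistics import NormalDist

def margin_of_error(sigma, n, confidence=0.95):
    """Half-width of a confidence interval for a sample mean,
    using the normal critical value: z * sigma / sqrt(n)."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return z * sigma / n ** 0.5

print(margin_of_error(2.0, 100))   # low-variation population: about ±$0.39
print(margin_of_error(10.0, 100))  # high-variation population: 5x wider
```

This is the "chart on the right" in numbers: same average, same sample size, but wider spread means any single sample is more likely to land far from the true mean.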