In the world of marketing, you should always be aiming to determine how and why your audience responds in certain ways. We are always constrained, however, by budgets and timelines we don't fully control. So if you're implementing something new, whether a new item offered at arena concessions or new software that amps up an existing product, you may not always be able to conduct surveys or interviews to gauge the impact on your customers. In cases like these, it can be useful to measure the effectiveness of your implementation by treating the before-and-after sales numbers as the results of a quasi-pretest-posttest experiment.
A pretest-posttest experiment is traditionally thought of as consisting of surveys, interviews, or focus groups conducted before and after exposure to the newly implemented factor. But if you are working under constraints that rule those out, you can still get useful information by comparing before-and-after sales figures. None of this is revolutionary, of course; it is almost a given that you would compare how a change affects your bottom line. Keep in mind, though, that framing this comparison as an experiment gives you perspective on two things. The first is how you make the comparison: beyond simply seeing whether the post-implementation sales numbers are higher and calling it a day, determine whether that increase is statistically significant. The second is the realization that, in experimental terms, you may have little to no control over external variables, such as seasonality, promotions, or competitor moves, and therefore need to try to account for them.
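To make the significance-testing point concrete, here is a minimal sketch of how you might compare pre- and post-implementation daily sales with Welch's two-sample t-test. The sales figures are invented for illustration, and the p-value uses a normal approximation to the t distribution rather than the exact distribution, which is reasonable only for samples of moderate size; a statistics package would give you the exact version.

```python
from statistics import mean, variance
from math import sqrt, erf

def welch_t_test(pre, post):
    """Welch's two-sample t-test on two lists of observations.

    Returns (t, p): the t statistic for the post-minus-pre difference
    in means, and an approximate two-sided p-value. The p-value uses
    the normal approximation to the t distribution, which is adequate
    for moderate sample sizes (roughly n >= 30 per group).
    """
    n1, n2 = len(pre), len(post)
    m1, m2 = mean(pre), mean(post)
    v1, v2 = variance(pre), variance(post)   # sample variances
    se = sqrt(v1 / n1 + v2 / n2)             # standard error of the difference
    t = (m2 - m1) / se
    # Two-sided p via the standard normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p = 2 * (1 - 0.5 * (1 + erf(abs(t) / sqrt(2))))
    return t, p

# Hypothetical daily sales counts for two weeks before and after the change
pre  = [101, 98, 105, 97, 110, 95, 103, 99, 108, 102, 96, 104, 100, 107]
post = [115, 120, 112, 118, 125, 110, 117, 121, 113, 119, 116, 122, 114, 118]

t, p = welch_t_test(pre, post)
print(f"t = {t:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the increase is unlikely to be noise alone, but because this is a quasi-experiment with no control group, a significant result still cannot rule out the external variables discussed above.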
Considering these two advantages of viewing before-and-after comparisons as experiment results is only the first step, of course. How far you can take this idea is subjective and depends heavily on your particular industry, product offering, and specific circumstances. But if you want to begin looking deeper into this approach, take a look at the sources below.