At SHIFT, our approach is to apply equal parts art and science to build integrated programs that help brands connect with the people who matter most. But what does the ‘science’ part of communications entail? What does it look like in action?
First and foremost, it means being data-driven in our planning and execution: making informed decisions based on data and research. In this series, we examine how to become a more data-driven communications professional.
Setting up our test
In the previous step, we built our hypothesis, our statement which we seek to prove true or false: Our customers dislike the burnt taste of espresso.
How might we prove this statement true or false? We would design and run an experiment.
To test this hypothesis, we might run a series of focus groups in which customers taste-test espresso roasted to different degrees, from light to nearly charcoal. To isolate the variable of burnt taste, we might select a coffee bean with a reputation for a mild, inoffensive flavor, so that roast level is the only taste profile that changes.
As roast intensity increases, the burnt flavor profile will increase as well. We’d gather customer preference for our espresso at each roast grade. We might also capture secondary data, such as how long it took a customer to drink a shot at each roast grade, what temperature the espresso was served at, or whether the customer modified the drink by adding sugar, milk, or other ingredients.
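As a sketch of what that captured data might look like, here are made-up 1–5 preference ratings by roast grade (the grade names and scores are illustrative placeholders, not real survey results), with a first-pass summary per grade:

```python
from statistics import mean

# Hypothetical 1-5 preference ratings gathered at each roast grade.
# All names and numbers here are illustrative, not real survey data.
ratings_by_roast = {
    "light":  [4, 5, 4, 3, 5],
    "medium": [4, 4, 5, 4, 3],
    "dark":   [2, 3, 2, 1, 2],
}

def mean_preference(ratings):
    """Average rating per roast grade, as a first look at the data."""
    return {grade: round(mean(scores), 2) for grade, scores in ratings.items()}

print(mean_preference(ratings_by_roast))
# → {'light': 4.2, 'medium': 4.0, 'dark': 2.0}
```

Keeping the raw per-customer scores (rather than only the averages) is what lets the analysis later be as thorough as the second key point below asks for.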
However we choose to test, we must focus on two key points:
- Document every step so that the test can be repeated in the future.
- Capture as much data as possible so analysis can be thorough.
The goal: repeatable results
The goal of any scientific experiment is a repeatable result, so that others can run the same test and replicate our findings. If we document poorly (or not at all), peers or competitors may challenge us to prove our results, and we will not be able to. Likewise, if we don’t capture as much data as possible, we risk not being able to provide statistically valid proof of our findings.
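A quick illustration of why capturing more data matters statistically: with few tasters, the uncertainty around an average rating is wide; with many, it narrows. The sketch below uses made-up scores and a simple normal-approximation confidence interval (illustrative only, not a substitute for a proper statistical test):

```python
from math import sqrt
from statistics import mean, stdev

def ci95(scores):
    """Approximate 95% confidence interval for the mean rating
    (normal approximation; for illustration only)."""
    m = mean(scores)
    se = stdev(scores) / sqrt(len(scores))  # standard error shrinks as n grows
    return (round(m - 1.96 * se, 2), round(m + 1.96 * se, 2))

few_tasters  = [4, 2, 5, 3, 4]        # small sample: wide interval
many_tasters = few_tasters * 20       # same spread, 20x the sample: narrow interval

print(ci95(few_tasters))
print(ci95(many_tasters))
```

The second interval is much tighter than the first, which is exactly the difference between a finding we can defend and one we cannot.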
Another example: followers’ opinions
Let’s look at another example. Suppose we had constructed the following hypothesis: Followers of a brand don’t care about scandalous social media postings enough to stop buying from the brand.
For example, suppose a brand or a key executive says something offensive. We obviously wouldn’t want to go test this hypothesis by having our key executives intentionally say offensive things.
What we’d do instead is design an experiment using existing data. We’d examine the social channels of brands after something inappropriate was said, then measure data such as:
- Brand follower counts
- Mentions of the brand in a positive, neutral, or negative way
- Stock price of the brand
- If publicly available, sales of the brand’s merchandise
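Measuring each of those metrics before and after an incident gives us a comparable before/after delta. As a minimal sketch, assuming hypothetical metric names and values (none of these figures are real):

```python
# Hypothetical before/after metrics around a brand incident.
# Field names and values are illustrative placeholders.
metrics = {
    "followers":         {"before": 120_000, "after": 118_500},
    "negative_mentions": {"before": 340,     "after": 2_100},
    "merch_sales":       {"before": 50_000,  "after": 49_200},
}

def pct_change(before, after):
    """Percent change from the pre-incident baseline."""
    return round((after - before) / before * 100, 1)

for name, m in metrics.items():
    print(name, pct_change(m["before"], m["after"]))
```

Repeating this across many incidents and many brands is what would let us say whether followers actually change their behavior, or merely their mentions.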
For good or ill, we have no shortage of people saying offensive things online, even in their capacity as brand representatives, so finding plenty of test cases should be relatively straightforward.
Now that we’ve established how to conduct the basics of an experiment, we will next focus on analyzing the data. What will it tell us? How will we know whether our hypothesis is true or false?