A well-informed strategy of any kind must be supported by reliable data. No matter the industry or organization size, if a company makes decisions based on bad data, it's likely to adversely affect the business and its revenue. Yet in marketing and PR, we are all too familiar with the headstrong executive who insists on changing a campaign or overall strategy based on statistics from a single article they read in an airline magazine or newspaper over the weekend. Herein lies the problem (and our question): did anyone validate the research in the article?

Our industry is inundated with research, not all of it credible. As our technology and analytics capabilities improve, the information from which we can draw conclusions is expanding exponentially. Many marketers package up their cherry-picked statistics with a flashy headline and publish the “insights” with no real regard for where the stats came from or the way the data was collected. The stories spread, citing the company that originally published it. That’s okay, right? Not quite…

Over the last few weeks, the Internet has been buzzing over Gallup's latest poll claiming that 62 percent of social media users say social media has "no influence at all" on their purchasing decisions. The report was covered in top-tier business publications like The Wall Street Journal, Businessweek and Time, as well as marketing industry publications like AdWeek and Marketing Land, which is not uncommon for Gallup reports.

What's interesting about this coverage is that many of the stories pointed out flaws in the study in an attempt to discredit the results. We don't dispute some critiques of the report as a whole (such as its reliance on data that was over a year old), but Gallup's primary research survey questions and tactics were valid and well executed. Other stats within the report, however, were not the result of Gallup's own research (see below).

The same issue of reporting on questionable data was true in Mary Meeker's 2014 Internet Trends report published in May, which, unlike the Gallup report, was praised in the media.

But again, when we take a closer look, we see that Meeker's data came from a variety of sources, not all of them credible. The screenshot below shows that these statistics are attributed to Shareaholic. The fine print indicates that the numbers represent only a percentage of total referrals across the Shareaholic network, which is not an accurate sampling of total social media users.

The solution is to stop the spread of bad data by checking the sources cited within the reports themselves. It is perfectly acceptable to use secondary research to back up your marketing and PR strategies, as long as you can verify the source and research methodologies. It can be as simple as clicking the referring link within the article, which should lead you back to the reporter's original source. At that point you'll need to validate how they did their research (the subject of a future blog post).

Even then, it's important to do your own work to determine whether the data from secondary sources truly applies to your company.

If the secondary research isn't well supported, or if you're unsure how it was conducted, commissioning your own research will ultimately yield the most trustworthy results (provided it's done with the AAPOR Code of Ethics in mind). You'll have to do the work yourself, but that's the best way to make sound decisions that will ultimately affect your business.
