Television, computers, smartphones: people use a wide range of media every day to obtain information. Yet gathering accurate information about these different types of media has long been a challenge, both technologically and in terms of cost. The single source panel, however, now enables a more accurate cross-media view of a person's lifestyle, creating the need for marketers to rewrite their strategies.
Traditionally, people obtained information from mass media, including news outlets and entertainment programs, as well as from companies about their brands. There was a clear line between the providers of information and the recipients, and people formed perceptions based on the information received and shared it with others before deciding what action to take. Marketers were therefore assured that the information people received was exactly the information published.
But with information dissemination now digitized, the channels through which people obtain information change by the minute. Services that were widely touted by the media only a few years ago have disappeared without a trace. In this era of perpetual change, it is nearly impossible to develop a marketing strategy based on the "fixed" information ecosystem of the past. What is critical today is whether marketers can stay a step ahead, understanding and working within the complex media shifts happening before our eyes.
This is where the single source panel comes in. A new methodology developed by a research company, the single source panel provides clear, meter-based log data that marketers can use to formulate more accurate campaigns.
The single source panel: capturing people's information behavior accurately
What do we need in order to understand the ever-diversifying information channels more accurately? Is it big data analysis? Or qualitative surveys, such as ethnographic and behavioral observations? Both are important, but these traditional methods offer, at best, a limited understanding of today's diversified information sources. The single source panel, by contrast, collects and analyzes people's behavior-based log data, a clear indicator of the diverse ways in which people now consume information. Unlike questionnaires, log data sheds light on how people actually interact with information across their TVs, computers, and smartphones.1
Take, for example, a 30-year-old male office worker who watches the news on TV in the morning, checks his SNS on his mobile device during his commute, browses a blog review of a venue for an upcoming welcome party during his lunch break, then watches a pre-recorded TV show after returning home that night. The single source panel captures these behavior patterns through log data and turns them into useful information.
Of course, we already had access to data on TV and online ad exposure, but it was split across media and devices, preventing us from tabulating it as a whole. In other words, we could not even distinguish the people who saw an ad only on TV from those who saw it only on YouTube, or on both. The single source panel, however, allows for precise cross-channel analysis.
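The kind of cross-channel classification this enables can be sketched in a few lines of code. This is a minimal illustration, not the panel's actual pipeline: the panelist IDs and exposure sets below are invented.

```python
# Hypothetical sketch: classifying panelists by cross-channel ad exposure,
# given meter-based logs joined on a shared panelist ID.
# All IDs and log contents are invented for illustration.

tv_log = {"p01", "p02", "p04"}      # panelists whose TV meter logged the ad
online_log = {"p02", "p03"}         # panelists whose browser meter logged the ad

def classify(panelist_id):
    """Return the exposure segment for one panelist."""
    on_tv = panelist_id in tv_log
    on_web = panelist_id in online_log
    if on_tv and on_web:
        return "both"
    if on_tv:
        return "tv_only"
    if on_web:
        return "web_only"
    return "unexposed"

panel = ["p01", "p02", "p03", "p04", "p05"]
segments = {p: classify(p) for p in panel}
print(segments)
# → {'p01': 'tv_only', 'p02': 'both', 'p03': 'web_only',
#    'p04': 'tv_only', 'p05': 'unexposed'}
```

Because both logs key on the same panelist ID, the "both" segment, which split data sources could never isolate, falls out of a simple set lookup.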
Behavior records reveal that human memory is unreliable
Why stress the importance of log data-based records? One could claim that measuring ad exposure (or non-exposure) via a questionnaire is just as accurate. Single source panel research, however, proves otherwise. In one survey, panelists whose log data showed exposure to a particular TV commercial and online banner ad were asked via questionnaire whether they recalled having seen them. The charts below illustrate the results.
When individuals who had actually been exposed to the ads were questioned about their impressions, 56% stated that they hadn't seen the TV commercial, while 66% stated that they hadn't seen the online banner ad.
In addition, the same questions were posed to “unexposed panelists”—those the log data indicated had not been exposed to the ads—and many of these panelists answered that they had, in fact, seen the ads.
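The two comparisons above amount to cross-tabulating logged exposure against claimed recall. The sketch below illustrates the idea; the records are invented and do not reproduce the survey's actual figures.

```python
# Hypothetical sketch: cross-tabulating meter-logged exposure against
# questionnaire recall. The records are invented for illustration.
records = [
    {"exposed": True,  "recalled": False},
    {"exposed": True,  "recalled": True},
    {"exposed": True,  "recalled": False},
    {"exposed": False, "recalled": True},   # "false memory": claims recall, never exposed
    {"exposed": False, "recalled": False},
]

def recall_rates(records):
    """Return (share of exposed with no recall, share of unexposed claiming recall)."""
    exposed = [r for r in records if r["exposed"]]
    unexposed = [r for r in records if not r["exposed"]]
    missed = sum(not r["recalled"] for r in exposed) / len(exposed)
    false_recall = sum(r["recalled"] for r in unexposed) / len(unexposed)
    return missed, false_recall

missed, false_recall = recall_rates(records)
print(f"exposed but no recall: {missed:.0%}")        # the 56%/66% pattern in the text
print(f"unexposed but 'recalled': {false_recall:.0%}")
```

With real panel data, the first rate corresponds to the 56% (TV) and 66% (banner) figures above, and the second captures the unexposed panelists who nonetheless "remembered" the ads.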
These results illustrate why questionnaire surveys that depend on people's memories are unreliable. Formulating a marketing strategy and deciding on how many marketing dollars to invest based on unsound data is risky at best. The great value of analyzing behavior log data drawn from a single source panel is that we can more accurately understand media exposure from a cross-media perspective. We can also use the data to help correct any biases residing within a marketer's mind.
Is the hypothesis biased?
Example 1: Online media vs. offline media
We often hear, "The amount of time a person has on his/her hands is limited. People who watch a lot of TV have little time for online media, and those who spend a lot of time online don't watch as much TV. Hence, media exposure is a zero-sum game." But consider this alternative: "People who crave information are active users of both TV and the web. They overcome the finite amount of time by using multiple media simultaneously."
If the former explanation is true, then TV and online use have a negative correlation, whereas if the latter is true, the correlation will be positive.
As the results above show, there is no clear correlation between TV and online use (and therefore ad exposure); the time spent watching TV and the time spent on online media are not related. Some people watch a lot of TV and also consume a lot of online media, while others watch a lot of TV but consume much less online media than their peers. Each hypothesis holds for some people, but neither fully encompasses today's varied and fragmented media consumption behaviors.
Example 2: Computers vs. smartphones
In Japan, the computer penetration rate is 88% while the penetration rate for smartphones is 46%.2 How do users split their viewing time between watching YouTube videos on computers vs. smartphones? Is the divide "smartphones outside, computers at home"? Or is it "smartphones both outside and at home"?
The results show that not everyone who owns both a computer and a smartphone watches YouTube on both. The number of people who mostly watch on a smartphone is about the same as the number who own a smartphone but usually watch on a computer. Most surprisingly, even among respondents who own both devices, few watch YouTube on both.
Data-based facts are marketing's foundation
Due to bias in respondents' answers, accurate user-behavior information is difficult to obtain through questionnaires. Since respondents tend to use TVs, computers, and smartphones daily, it is plausible for them to claim they watch YouTube on all three devices, even when they don't. Another source of unreliable data is that marketers, wanting clear-cut answers, generally design questions as either/or choices with only two options, such as "TV or the web" or "desktop or mobile."
We have looked at several examples of the efficacy of analyzing behavior-based log data derived from a single source. The findings are clear: when formulating a marketing strategy, anything not backed by data is merely a hunch, a "gut feel." Of course, hunches grounded in a marketer's experience are important, but as consumption of information media grows at an exponential rate, decisions made purely on personal experience end up biased. To avoid this, it is imperative to first analyze the behavior-based log data and confirm through data that the hunches and hypotheses are correct. If predictions can be verified with facts, marketers can have a high level of confidence in the chosen strategy. And results that differ from the hypothesis aren't necessarily a setback; they can be read as signs of behaviors that are changing at that very moment.
This article was originally published on May 15, 2015 in Japanese.