I love free research, specifically free healthcare research. These studies provide insight that deepens our common understanding of the audiences who consume our information and who make important life decisions based on it. However, what I don't love, what I really dislike, is research that either asks the wrong questions or asks the right questions but somehow distorts the answers.
Recently I found a study from Makovsky Communications that pushed my buttons a bit. I wanted to like this research, I really did: it surveyed 1,001 Americans about my favorite topic, online health information habits. Unfortunately, I found too many flaws in its design to trust it. Here are my issues with the material as it stands.
Correlation is not causation. Yes, people who see the doctor more often also look up health information more often. That doesn't mean doctor visits are sending people online; it means that people with health conditions that require more engagement will typically be engaged across all channels.
What about the other age brackets? The 30-39 bracket looks at social channels like blogs and communities at a rate of about one in four. The younger group probably has fewer conditions and so interacts with these channels less. What about the older demographics? I'm not sure why they were left out. If those numbers are low, we have to wonder how the questions were asked and whether vocabulary was an issue. Ask a 65-year-old if they use "social media" and you'll get one answer; ask them if they use "Facebook" and you'll get another.
Is this online, or just seeking behavior? I'm left wondering whether this question normalized for how many of the audience look for health information at all. It seems likely that parents look up health information more often in general. Is this really a difference in online usage, or simply a volume difference across all channels?
Mostly good, but what about "other"? This ranking is pretty common; we see it from sources as diverse as Manhattan Research, Nielsen, and comScore. WebMD is always at the top of the rankings. The Wikipedia placement isn't surprising either; for better or worse, it tends to get a lot of respect from patients on medical information. What's missing is an "other" category. Are we left to believe that 27% of the population does not look for health information at all, or were their channels simply not on the list? The typical figure is roughly 80% of people looking up health information online, so this question really could go either way.
It looks like this question was worded specifically to get low results. Pharmaceutical companies do not enjoy high trust in North American culture right now, and if the question was worded the way it appears in the table, then what we're getting is respondents' opinion of pharma companies, not of their Facebook pages. Maybe I'm wrong and this is just the subset of the audience who have actually seen a pharmaceutical Facebook page, but I'm doubtful. Also, the column on Pew Research is completely unclear.
Why would you ask only these three items? This question makes me worry that Makovsky overuses press releases in its communications strategies. Why would the only three answers to the trust question be press releases, company websites, and social media? What about doctors? What about WebMD and other health portals? What about the myriad websites that Manhattan Research shows are trusted more? In its current form, this question is essentially meaningless.
Personal recommendations are the strongest influencer. OK, this makes sense: word of mouth, whether in person or through social media, is one of the strongest motivators of action, and no one admits to being influenced by ads. Missing, though, are the trusted advisors: the physicians and nurses with whom patients work on their issues, especially patients with chronic conditions. Hopefully this question was multiple choice and the 103% total isn't a rounding-error issue on an exclusive-choice question.
Again, I wonder if this was an exclusive-choice question. The numbers given in the press release were PCs: 90%, smartphones: 7%, and tablets: 4%. That total of 101% looks suspiciously like rounding error. We know from multiple sources, such as Google's multi-screen study, that the roughly 50% of the US population with a smartphone often uses it alongside the PC; it's not an exclusive relationship.
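To show the rounding point isn't just hand-waving, here's a quick sketch. The underlying shares below are my invention, not the study's actual data; they simply sum to exactly 100% yet round to the reported 90/7/4, which totals 101%:

```python
# Invented underlying shares (my assumption, not from the study)
# that sum to exactly 100.0 before rounding.
pcs, phones, tablets = 89.6, 6.7, 3.7

# Round each to a whole percentage, as survey reports typically do.
reported = [round(x) for x in (pcs, phones, tablets)]
print(reported, sum(reported))  # [90, 7, 4] 101
```

So the 101% total doesn't settle the question on its own, but it is exactly what rounding a single-choice breakdown would produce.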
This question deals with early adopters. If 4% of respondents answered "tablet," that means there were roughly 40 tablet users in the study. These early adopters will likely skew the behavior results. It will be interesting to revisit this question in two years, when tablets have become more commonplace. Note that if our assumption about tablets vs. PCs is correct, then this early-adopter issue will be amplified.
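To put a number on how noisy a 40-person subgroup is, here's a back-of-the-envelope margin-of-error calculation. The 1,001 sample size and 4% tablet share come from the study; the worst-case p = 0.5 and the normal approximation are my simplifying assumptions:

```python
import math

n_total = 1001
tablet_share = 0.04
n_tablet = round(n_total * tablet_share)  # roughly 40 respondents

# 95% margin of error for a proportion estimated from this subgroup,
# at the worst case p = 0.5 (standard normal approximation).
moe = 1.96 * math.sqrt(0.5 * 0.5 / n_tablet)
print(n_tablet, round(moe * 100, 1))  # 40 respondents, about ±15.5 points
```

In other words, any behavioral finding about tablet users in this study carries a margin of error around fifteen percentage points, which is another reason not to lean on it.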
Only one portion of the journey is highlighted. Manhattan Research describes the full patient journey in its Cybercitizen studies, but this question looks at only one point in it. It's interesting that women are relatively more likely than men to research their prescriptions, but the finding is fairly out of context.
Take a look at the report from Makovsky and develop your own opinions. If you agree, or better yet, if you disagree with my analysis, let’s talk below!