If you have done quantitative research of any sort in India, you would have noticed ‘show card’ written on the questionnaire and seen the interviewer clumsily thrust a wad of papers at the respondent. Ever wondered why it is a ‘show card’ and not a ‘show paper’? To answer this, let me go back about 25 years to the first consumer research I witnessed.
The research was a sequential monadic one with a paired comparison at the end, for a white detergent bar. It was most probably for Hindustan Lever Limited (as Hindustan Unilever Limited was known back then). I don’t remember who the research agency was, but what I clearly remember is how professionally it was conducted. The entire research took a few weeks – about 2 months if I remember right. There was ample sample given to test – ~4 big bars of each type, which would easily last a couple of weeks or more. The researchers asked my mom to wash a few clothes with the samples in front of them, just to be sure of her washing method. And there were always 2 researchers administering the questionnaire: one asking the questions and the other overseeing and handling the show card. The ‘show card’ was a set of A4 cards with the responses, neatly laminated and spiral bound. Both the interviewers were extremely diligent in asking the questions and noting down the answers my mom gave. The gift for participating in that research was a casserole of some sort.
That was the last time I saw any ‘show cards’. 12 years later, when I oversaw my first research, they were ‘show papers’ but still called ‘show cards’.
Now contrast that research with a similar one done these days… The entire research is extremely hurried – often a sequential monadic test is completed within 2 weeks. A very small sample is given to test – ~30ml of shampoo or ~50gm of toilet soap – which is hardly enough for a few uses. Not many check the actual usage in such quantitative studies, even on a sub-set of the sample. And only one researcher manages the entire interviewing process. The gift given at the end is pretty much the same, a casserole of some sort.
To give other examples where I was a respondent… A researcher knocked on my door one day to check what kind of DTH connection I used. I said, “Tata Sky”; she asked my name and promptly replied, “Thank you, Sir. If someone comes to check, please tell them that I interviewed you. I don’t want to waste your time.” It’s anybody’s guess what data was filled in as my responses to that lengthy questionnaire she carried. BTW, I am disqualified from such research as I work in one of the professions on the research exclusion list, and she didn’t even check that part.
The worst was what I experienced in the Government of India census survey 2011. Either my wife or my visiting parents gave the responses for the main census (which was largely filled in using the society members’ data held by the chairman), so it’s hard to gauge the quality of data capture. And between the main census and the caste census, I shifted houses and misplaced the slip they gave us that identified our responses in the main survey. I was home when the interviewers came, and all they did was ask for the slip (which I couldn’t give) and paste a stamp on the door stating the caste census was completed.
Even with this broad, anecdotal comparison, it’s easy to see how the quality of research has gone down substantially over the years. While I am not sure when and how this happened, I suspect it has a lot to do with companies trying to cut costs on one end and heavy price competition among research agencies on the other. This meant cheap, contracted field teams collecting data on the ground and low-quality researchers analysing and presenting it at the head office. Over time, this cost pressure has also led to a dearth of good talent in the field. Today, as newer research fields open up – data analytics, data science, etc. – the best talent (statisticians, researchers) goes there.
I think this ‘quality of research’ problem is very severe today. Sample these…
- Indian Readership Survey: Several newspapers rejected the 2013 data, questioning the methodology of the new research firm that conducted the survey.
- Television Audience Measurement: BARC was set up citing the inadequacy of TAM’s data capture. Then, a few years later, the two merged.
- FMCG Retail Audit Measurement: FMCG firms and Nielsen, which conducts this audit, have a love-hate relationship, with the retail audit data being questioned multiple times in the last few years – in 2009 and 2015.
The above are just a few issues with large-scale studies. I am sure every brand manager has questioned and/or rejected the findings of a smaller quantitative study citing ‘bad data collection’ once in a while. I specifically say ‘data collection’ because not many brand managers, or even research managers, understand the statistics behind the studies – the sampling methodology, the rating scales, the analysis techniques. Hell, many don’t understand the research methodology/design in its entirety. And that’s where the bigger problem lies – people aren’t questioning the basics.