Measuring Advertising: Forced Choice Comparisons


Over the summer I’ve done a bit of work on measuring advertising – including reading around how to measure the effectiveness of advertising (or marketing comms more generally). Part of this was a rather interesting study comparing the outcomes you get depending on which measure you use – in other words, comparing what looked like a big effect under one measure with the results from another. The outcomes are pretty different (as you might suspect!) – but I thought it would be interesting to give an overview of the measures and how they compare. Some of the measures are quite specific to Social Marketing, though they may be useful for commercial marketers too. Anyway, over the next few days I’ll give a quick run-down of the different measures – their positives and negatives, and a few hints for using them.

The study I was conducting compared a Portuguese sample with a British sample (looking at various message frames related to either smoking or condom use). The easiest way of comparing two samples is to use a “forced choice comparison” – in other words, ask people to choose between advert 1 and advert 2. It’s not used much in advertising research, but it is common in cross-cultural research, often in the form of presenting a dilemma and then “forcing a choice” between different solutions. Smith, Dugan and Trompenaars (1996) point out that it is a widely used and well-established method for cross-cultural comparisons.
The positive is that it is pretty quick and easy – but it also has a number of drawbacks one should be aware of.

The first potential problem is the choice of comparison items: in cross-cultural comparisons they usually represent opposite constructs (e.g. A – the individualist solution, B – the collectivist solution). The researcher can then compare the number of people from each country who chose A or B, and derive the country’s level of individualism/collectivism. For advertising, this works pretty well for very artificial (and extreme) adverts – for example, adverts specifically made up to be either A or B. One could compare a gain-framed advert (“You will be healthier as a result of XYZ”) with a loss-framed advert (“You will be sick because of …”), but for the comparison to work, one must be purely gain-framed and the other purely loss-framed. Neither is particularly realistic in real life.

The other consequence is that, because we are forcing people to make choices, we get differences that are pretty large – and therefore often statistically significant but meaningless. For example, in the Portugal vs Britain study we tested eight different constructs. Six of them turned out to be technically (i.e. statistically) significantly different. The problem: in only two of them did respondents in one country actually prefer (by “majority vote”) a different advert from the other country. So it would have been easy to make all sorts of outlandish statements about the difference between Portugal and Britain when, in fact, by majority, there was only a difference in two out of eight cases (and in six cases the majority preference was the same).
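To make that distinction concrete, here is a minimal sketch in Python (using scipy’s chi2_contingency) of how you might check each construct both for statistical significance and for whether the majority choice actually flips between countries. The construct names and counts are entirely made up for illustration – they are not the study’s data.

```python
# Minimal sketch: for each construct, report (a) the chi-square test result
# and (b) whether the majority choice actually differs between countries.
# All counts below are invented for illustration -- not the actual study data.
from scipy.stats import chi2_contingency

# respondents choosing advert A vs advert B, per country (hypothetical)
constructs = {
    "construct 1": {"Portugal": (116, 84), "Britain": (182, 18)},
    "construct 2": {"Portugal": (130, 70), "Britain": (158, 42)},
    "construct 3": {"Portugal": (90, 110), "Britain": (124, 76)},
}

for name, counts in constructs.items():
    table = [counts["Portugal"], counts["Britain"]]
    chi2, p, dof, expected = chi2_contingency(table)
    # majority choice in each country: "A" or "B"
    majorities = {country: ("A" if a > b else "B") for country, (a, b) in counts.items()}
    flips = len(set(majorities.values())) > 1
    print(f"{name}: chi2={chi2:.1f}, p={p:.4f}, "
          f"significant={p < 0.05}, majority flips between countries={flips}")
```

In the study, six of the eight constructs would have passed the significance check, but only two would have passed the “majority flips” check – and the latter is the question you actually care about when deciding which advert to run.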

The latter also causes an interpretation problem. For example, in one case 58% of respondents from Portugal preferred the individualist ad, while the same ad was preferred by 91% of British respondents. That is a pretty big difference of 33 percentage points (or, if you are statistically minded, χ² = 62.42, p < 0.001) – so definitely a whopping difference. BUT in both cases the individualist-framed advert won. So what does that mean? Is it OK to use an individualist advert in both Portugal and the UK (even though 42% of the Portuguese would prefer a collectivist one)? You can see the problem!
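As a rough illustration of how numbers like these come about: with hypothetical samples of 200 respondents per country (assumed purely for the sketch – not the actual Ns), a 58% vs 91% split looks like this. The test comes out wildly significant, yet the individualist ad wins in both countries.

```python
# Hypothetical counts approximating a 58% vs 91% preference split
# (200 respondents per country assumed purely for illustration).
from scipy.stats import chi2_contingency

#            (individualist, collectivist)
portugal = (116, 84)   # ~58% chose the individualist ad
britain = (182, 18)    # ~91% chose the individualist ad

chi2, p, dof, expected = chi2_contingency([portugal, britain])
print(f"chi2 = {chi2:.2f}, p = {p:.2e}")  # a huge, highly significant difference...

# ...and yet the individualist ad is the majority choice in BOTH countries
for country, (ind, col) in [("Portugal", portugal), ("Britain", britain)]:
    print(country, "majority choice:", "individualist" if ind > col else "collectivist")
```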

In other words, forced choices are a good “eyeballing” technique to get a feel for differences and preferences – but pretty much anything you see, even in a big sample, will be difficult to interpret. You also need to be very wary of large/significant differences in the data: just because SPSS says a difference is significant does not mean it is a meaningful difference!

Smith, P., S. Dugan, and F. Trompenaars. 1996. “National culture and the values of organizational employees.” Journal of Cross-Cultural Psychology 27(2): 231-64.
