Analysing Qualitative Data with Software: Leximancer

I’m currently spending some time “down under” as a visiting academic at James Cook University in Queensland. One of the things I enjoy most about visiting (and working with) academics halfway around the world is that I get exposed to some rather nifty tools – like a piece of software used quite widely in Australia for qualitative text analysis, but unfortunately not so well known outside of Oz.

If you have ever worked with large quantities of qualitative text and tried to categorise it, you are probably familiar with the “joys” of NVivo: it is a manual and very, very time-consuming way to categorise what is in the data. As technology has progressed, new methods of “sorting” through lots of text have emerged: for example text mining, such as the tools provided by the National Centre for Text Mining in the UK – something I used to identify “the big themes” in Social Marketing research in this article in Social Marketing Quarterly.

Fast forward to 2013, and text mining already seems a bit dated: with modern software such as Leximancer, we can not only produce rather boring tables of frequently used words, but actually create concept maps, showing how words form concepts – and how “closely related” these concepts are. See, for example, the bubbly “concept map” above: it visualises the different “concepts” talked about by two focus groups (one from Bristol, the other from Southend, in case you are curious). Words that appear close to each other on the map were frequently mentioned close together in the text: for example, “walking” and “heart” (as in walking is good for your heart). As walking and heart also appear close to the “tag” Southend, this suggests that the Southend group talked about this much more than the Bristol group (who talked much more about smoking, for example). On the other hand, both groups talked about the “time” [it takes to do exercise] similarly often, as that concept sits almost equidistant between the two “tags”.
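If you are curious what “mentioned close together” means in practice, the rough idea can be illustrated in a few lines of Python. This is only a sketch of the general co-occurrence principle, not Leximancer’s actual algorithm (which does a great deal more than counting), and the transcript snippets below are invented for illustration:

```python
# Sketch of the co-occurrence idea behind concept maps: count how often
# pairs of words appear in the same sentence, separately for each group.
# The transcripts are invented examples, not real focus-group data.
from collections import Counter
from itertools import combinations

transcripts = {
    "Southend": [
        "walking is good for your heart",
        "I try to fit walking in but finding the time is hard",
    ],
    "Bristol": [
        "smoking is the thing I worry about most",
        "finding the time for exercise is hard",
    ],
}

# A tiny stopword list so that filler words don't dominate the counts.
stopwords = {"is", "for", "your", "i", "to", "in", "but", "the", "about", "most"}

def tokens(sentence):
    """Lower-case the sentence and drop stopwords."""
    return [w for w in sentence.lower().split() if w not in stopwords]

# For every sentence, count each unordered pair of distinct words once.
cooccurrence = {group: Counter() for group in transcripts}
for group, sentences in transcripts.items():
    for sentence in sentences:
        for a, b in combinations(sorted(set(tokens(sentence))), 2):
            cooccurrence[group][(a, b)] += 1

for group, counts in cooccurrence.items():
    print(group)
    for pair, n in counts.most_common(3):
        print(f"  {pair}: {n}")
```

In a real analysis the counts run over thousands of sentences and the software turns them into a map layout, but the underlying question – which words keep turning up together, and in which group – is the same.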

Such a tool can quickly show, for example, whether groups use similar words and word combinations to describe a concept – or whether they use quite different words and ideas when they talk about a common theme. The software has also been used, for example in this paper, as a way to review literature – and to compare different definitions (in this case of Social Marketing).
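One crude way to ask the “do they use similar words?” question – again purely as an illustration of the idea, not of anything Leximancer itself does – is to compare the vocabulary each group uses in sentences that mention a shared theme. The sentences here are made up:

```python
# Sketch of comparing how two groups talk about a shared theme: collect the
# words each group uses in sentences that mention the theme, then measure
# vocabulary overlap. The sentences are invented for illustration.
theme = "time"

group_sentences = {
    "Southend": [
        "finding time to walk is the hard part",
        "time with the grandchildren comes first",
    ],
    "Bristol": [
        "there is never enough time after work",
        "gym sessions take too much time and money",
    ],
}

stopwords = {"is", "the", "to", "with", "there", "after", "too", "and", "never", "much"}

def theme_vocabulary(sentences, theme):
    """Words (minus stopwords) appearing in sentences that mention the theme."""
    vocab = set()
    for s in sentences:
        words = [w for w in s.lower().split() if w not in stopwords]
        if theme in words:
            vocab.update(w for w in words if w != theme)
    return vocab

vocab_a = theme_vocabulary(group_sentences["Southend"], theme)
vocab_b = theme_vocabulary(group_sentences["Bristol"], theme)

# Jaccard overlap: 1.0 = identical vocabularies, 0.0 = no shared words.
shared, combined = vocab_a & vocab_b, vocab_a | vocab_b
overlap = len(shared) / len(combined) if combined else 0.0
print(f"Shared words around '{theme}': {shared or 'none'}")
print(f"Vocabulary overlap: {overlap:.2f}")
```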

While Leximancer does make it quite easy to mine and visualise large data sets (and infinitely easier than trying to “code” them with tools like NVivo), it does have some limitations. It is, of course, a piece of software rather than a thinking, breathing human being. On the one hand, that is an advantage, as it isn’t susceptible to the sorts of biases and effects a human brain falls prey to when trawling through large amounts of text. On the other hand, it simplifies concepts and doesn’t have a real brain, so some results may be a little random – and the researcher needs to check how they arose. For example, “wife” and “hospital” may occur frequently together because several participants in a group talked about their wife’s experiences in hospital – rather than because having a wife is somehow inherently linked with going to hospital (I’m sure there is a joke in here somewhere…). Also, Leximancer (and text mining in general) works at quite an abstract level – so if you are looking for surprising details in a large body of text, it is quite likely that the software will not pick them up. For that, you still need to read all of the text. But if the research question is to look for related concepts and thematic clustering – and to represent these in a visually pleasing way – then Leximancer can be a massive time saver.
