Has AI Got What It Takes To Be A Researcher At Magenta?

Artificial intelligence (AI) tools have been making headlines for some time now, with examples like ChatGPT able to create text and hold human-like conversations, and Midjourney able to create life-like images. In many sectors there are concerns about the impact that AI is having, and is likely to have. For example, in universities, academics are worried about how students might use it to write essays and assignments, and in software development AI is already generating websites and writing complex code, perhaps dramatically reducing the number of jobs available in the future. Ex-Google officer Mo Gawdat recently spoke about the dangers of AI, and some even think AI might pose a risk of human extinction! So, what does all of this mean for the market research industry, and should we be worried?

A recent report by Qualtrics suggests that 93% of market researchers see AI as an industry opportunity, and just 7% see it as a threat. Here at Magenta, we have been investigating the latest AI tools and evaluating how they might impact our work. Whilst we acknowledge that AI will likely provide some genuine opportunities for inclusivity and productivity, right now we’re confident that it is the human touch that makes our research truly reflective of what’s really going on. We decided to test out some AI products to see how well they would manage as a researcher at Magenta. Let’s take a look at how AI fared!

CAN AI WRITE A TOPIC GUIDE OR SURVEY QUESTIONS?

In principle, AI could be used to generate questions for topic guides and surveys; it is possible to provide AI with a basic overview of a project and ask it to write some questions. Not only that, but survey scripting websites are now integrating AI into their services so that you can generate programmed surveys directly on their platforms.
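
To give a sense of how simple this has become, here is a minimal sketch of prompting a general-purpose large language model to draft survey questions from a short brief. This is illustrative only, not the tool we tested: it assumes the OpenAI Python SDK and an API key in the environment, and the model name, brief and prompt wording are placeholders.

```python
# Minimal sketch of AI-assisted survey drafting (illustrative assumptions:
# OpenAI Python SDK installed, OPENAI_API_KEY set; model name is a placeholder).
from openai import OpenAI

client = OpenAI()

brief = (
    "UK cost-of-living crisis: understand which compromises consumers are making, "
    "e.g. switching to cheaper supermarkets or using 'buy now pay later' services."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": "You write market research survey questions."},
        {"role": "user", "content": f"Project brief: {brief}\n"
                                    "Draft five closed questions with answer options."},
    ],
)

# Draft questions come back as plain text and would still need a researcher's review.
print(response.choices[0].message.content)
```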

However, when we tried using an AI survey generator, a lot of the questions created were too generic to be of use. For instance, when we asked it to generate a survey about consumer compromises being made as part of the UK cost-of-living crisis, it generated a general survey about the impact of rising prices with no questions that would allow us to weigh up competing consumer priorities. What it could not re-create was the human influence on the questions, the way researchers can fully understand the nuances of what a client is trying to find out, such as asking specific questions about shopping at cheaper supermarkets or using ‘buy now pay later’ services. In this instance, AI needed more information than is typically given to provide an appropriate survey. So, although the AI-generated survey covered basic questions, the tool used wasn’t close to building the questions that would give true consumer insights, and many responses would be impossible to accurately interpret. We’re yet to be convinced an AI-led survey approach could deliver on research objectives.


Thinking about technical survey design, things the AI-generated questions didn’t quite get right include:

  • Providing overly generic answer options (e.g. ‘Reduced expenses such as eating out’)

  • Only offering binary options (e.g. providing only ‘yes’ and ‘no’ answer options)

  • Omitting context questions so answers are meaningless (e.g. ‘Are you taking additional work to cover the increased cost of living?’)

  • Creating answer options that cannot be interpreted at analysis (e.g. ‘stopping buying’ with the answer option ‘cars/transportation’ could mean many different things)


HEY SIRI! CONDUCT AN INTERVIEW

There may come a point in the future when AI tools could potentially be used to conduct interviews or discussion groups. After all, we’re all getting used to talking to Alexa or Siri, so why not have them conduct interviews? Participants may even feel more comfortable talking to AI rather than a human being. But we would lose the real-life, in-the-moment empathy and insight that Magenta researchers bring when conducting interviews. Looking back at some of the past topics we’ve discussed with participants, such as the stigma associated with living with certain health conditions, use of the ‘N’ word, and how best to ask sex and gender questions of non-binary and transgender people, it is our ability to empathise with participants and ask gentle, probing questions that ensures our insights are empathetic and rooted in consumer experience.

We are also able to recognise when participants feel discomfort and strong emotions and can take steps to manage participant safeguarding as required by the MRS Code of Conduct. Despite steps forward in AI mental health apps, AI is not yet capable of this kind of in-the-moment reactivity to human emotion. It is this human touch and empathy which ensures participants feel comfortable talking about deeply personal and sensitive issues. It also ensures that participants have a safe space to open up freely. In the past, building that human connection has led participants to thank us for the opportunity to talk about sensitive issues, and that makes our work truly wonderful. We’re confident this is something that could not happen with AI.

With qualitative research, the insight is not always based on what is said. It can also come from what is not said, from body language, or from the contextual landscape that supports our understanding. We have seen this when speaking with people who have recently organised a cremation, and with those living with a terminal illness.

This does not mean that we cannot see a role for AI in supporting interviews. We’re particularly excited about a technology, currently in development, that interprets sign language in real time – a fantastic tool for enhancing accessibility in research.

FINDING THE HUMAN IN THE DATA

Another area of research where AI may be able to play a role is in analysing data. Although it’s possible for AI to analyse qualitative data, at Magenta we believe that being able to interrogate the data in depth and add our own touch to the analysis is important. A strength of our research is that we recognise our own interests and experiences, and how we might bring these into the research process. We have a variety of experience on our team, from seasoned market researchers to former academics, and each of us brings our own knowledge and skills to the analysis. We also recognise that our own identities and experiences might impact how we analyse the data, whether that’s where we live, our gender, our age, our family background, whether we have children or pets – all of these things (and more) might shape how we understand a participant’s experiences. For example, in a study about emotional connection to pets, one of our researchers, a dog owner, was able to understand the nuances and inferences in discussions with participants by drawing on her own experiences. AI is unable to understand the nuances of choices related to pets, or how UK drinking culture affects teenagers, or why buying a very specific brand of baked beans is so important to a parent!

Whilst we’ve seen successful examples of AI in the analysis of language, such as sentiment analysis, methods like discourse analysis are harder for AI to undertake. This is because discourse analysis focuses on language in context and seeks to fully understand the nuances and implications of the language used, which at present is not possible with AI tools.
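
To show concretely what that kind of automated language analysis looks like, here is a small sketch using NLTK’s off-the-shelf VADER sentiment scorer. The participant responses below are invented for illustration, and the single ‘compound’ score it produces captures none of the context or implication that discourse analysis relies on.

```python
# Minimal sentiment analysis sketch using NLTK's VADER lexicon;
# the responses below are invented examples, not data from a real study.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

responses = [
    "I've had to cut back on eating out, which has been really hard on the family.",
    "Switching supermarkets was fine - honestly, I barely noticed the difference.",
]

for text in responses:
    scores = sia.polarity_scores(text)  # neg/neu/pos plus an overall 'compound' score
    print(f"{scores['compound']:+.2f}  {text}")
```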

We also know that AI is trained on what the internet knows, and the internet contains a large amount of bias. For instance, when we asked ChatGPT to name 10 top pioneers in the field of market research, it named only male pioneers. The risk is that AI would reproduce these biases in research analysis, and because the average researcher cannot see the inner workings of AI tools, there would be no way to sense-check it.

[Image: ChatGPT listing the top 10 pioneers in the field of market research.]

Finally, we would like to raise a note of caution about the privacy and data protection implications of providing AI tools with large quantities of data. Even OpenAI, the maker of ChatGPT, is continually revising its privacy policies as concerns grow. At Magenta we take participant privacy very seriously, and we recognise that providing AI tools with potentially identifiable and special category data to analyse could be problematic in terms of how the data is subsequently used.

AI – HIRED OR FIRED AS A MAGENTA RESEARCHER?

We can see how AI can have useful applications to enhance the inclusivity and speed of our work. However, we know that it is being able to make a human connection, and to bring a nuanced and contextual understanding of individuals and data, that allows us to provide in-depth insight into consumers’ needs. And it seems that AI agrees with us!

[Image: ChatGPT listing ways in which AI cannot replace humans.]
