The role of Synthetic Respondents in 'Human-centred' Research
"Overloaded words" are what linguists call words with several meanings, some of which are often similar, which means you can hear or read them and think they mean something different to what the speaker/writer meant.1
"Research" is a good example. A "researcher" might be wearing a white coat, doing something in a lab with chemicals in test tubes and bunsen burners. ("Real" science.) Or they could be doing something on a computer - running some data models, comparing outputs, evaluating their performance. ("Data science".)
The kind of research I'm involved in generally draws on some sort of data from surveys or interviews, where I'm trying to find out something about people: how they feel, or how they behave. That could be called market research, or maybe sociological studies - but for clarity, let's call it "human-centred research".
Sometimes, you're using people to do research into something that isn't strictly about the people. For example: if you're designing a product and looking for feedback on concepts, then you're really more interested in the concepts than the people.
Human-centred research focuses on understanding human experiences, behaviours, perceptions, and needs. It directly involves or concerns people in the research question.
A simple rule: if the central research question can be rephrased to remove any direct or implied reference to human experiences, behaviours, or perceptions, then it is not human-centred research.2
For example:
- "What do people think about this idea?" seems reasonable - but really, what you're asking is "is this idea any good?" (eg. "Is this product useful?" Or "is this advert better than that advert?") Yes, you care what people think - but you can remove the direct reference to 'people'; it isn't really the core of the question, so it isn't 'human-centred'.
- Conversely, the reference can be implicit if the question involves human thoughts, feelings, or actions - even if people are not explicitly mentioned. "How did this advert change perceptions of the brand?" doesn't explicitly mention people, but the word "perceptions" implies thoughts and feelings (which an AI can only mimic - not experience).
'Non-human' respondents
If you're running online surveys or conducting interviews professionally, then you're probably offering some sort of incentive to get people to complete the surveys or spend time being interviewed. That means there's an incentive for people to try and skip straight to the part where they collect the incentives - and a job to be done in distinguishing the 'authentic' responses from the 'non-human'.
There's an old episode of "The Thick Of It" (series 1, episode 2) where "Mary" - a particularly articulate focus group member - is so good that they decide it would be much faster and more efficient to just... forget about the rest of the groups, talk to Mary some more, and make a tough policy decision based on what she says to them.
They later realise - too late - that Mary is actually an actress, telling them what she thinks they want to hear, and that the policy decision they made on the back of what she said (which turns out to be catastrophically unpopular) was based entirely on a single data point that had been completely made up.
The point is that 'Mary' is a fictional character.3 Human-centred research can't be done properly without authentic respondents. If you're on the data collection side, then you should be taking steps to identify and discard this sort of data: checking for consistency in answers (within surveys or between them), CAPTCHA tests and so on.
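To make that concrete, here's a minimal sketch of the kind of basic hygiene checks I mean, in Python/pandas. The column names and thresholds are entirely hypothetical - the right checks depend on how the survey was actually scripted:

```python
# Minimal sketch of basic survey data-quality checks.
# Column names (q1..q5, completion_seconds, age, birth_year) and thresholds
# are hypothetical - real checks depend on how the survey was scripted.
from datetime import date

import pandas as pd


def flag_suspect_respondents(df: pd.DataFrame) -> pd.DataFrame:
    rating_cols = ["q1", "q2", "q3", "q4", "q5"]
    checks = pd.DataFrame(index=df.index)

    # Straight-lining: identical answers to every rating question.
    checks["straight_lined"] = df[rating_cols].nunique(axis=1) == 1

    # Speeding: finished faster than any plausible reading speed allows.
    checks["too_fast"] = df["completion_seconds"] < 60

    # Internal contradiction: stated age doesn't match stated birth year.
    implied_age = date.today().year - df["birth_year"]
    checks["age_mismatch"] = (df["age"] - implied_age).abs() > 2

    checks["suspect"] = checks.any(axis=1)
    return checks


# e.g. flags = flag_suspect_respondents(pd.read_csv("responses.csv"))
```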
But there's a new branch of 'research' that seems to have thought - what if, instead of treating inauthentic respondents as a bug in "human-centred research", we treated them as a feature?
"Synthetic Respondents"
In the growing buzz around generative AI, a new concept in research methodologies has arisen: "synthetic respondents". Instead of asking people the questions, a Large Language Model creates 'synthetic respondents', which you can ask as many questions as you like. And they will give you answers. And they will probably sound like real people. They will never get bored. They will never try to disguise their "true" thoughts and feelings (as David Ogilvy once said, “People don’t think what they feel, don’t say what they think, and don’t do what they say.”) You can get answers from thousands of them, very quickly and at very little cost.
(Also - they never leave behind a bad smell, and won't eat all of your biscuits.)
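It's worth being clear about what's usually under the hood here. In most implementations, a 'synthetic respondent' is little more than a persona description wrapped around a chat model - something like this minimal sketch (assuming the OpenAI Python client and an API key; the model name and the persona are made up for illustration):

```python
# Minimal sketch of how a 'synthetic respondent' is typically built:
# a persona description in the system prompt, survey questions as user turns.
# Assumes the OpenAI Python client and an API key in the environment;
# the model name and persona are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are 'Sam', a 42-year-old small-business owner in Leeds. "
    "Answer survey questions in the first person, as Sam would."
)


def ask_synthetic_respondent(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


print(ask_synthetic_respondent("What do you think of contactless payments?"))
```

Everything 'Sam' says comes from the same place - the model's training data, steered by a paragraph of persona text.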
But again - so obvious as to be barely worth mentioning - they aren't real people. They are synthetic - "made up." Just like the 'actors', pretending to be the sort of people we actually want to talk to.
They will do it faster. They will do it cheaper. Will they do it better - or at least, 'good enough'? Well... that's the real question.
The danger here is the cognitive bias of Abraham Kaplan's 'Law of the Instrument', which can be neatly expressed as 'Give a boy a hammer and everything he meets has to be pounded.'
It seems that there are a lot of boys running round with hammers right now. If we want to get the most value out of these new research tools, it's essential that we know when to use them - and when not to.
How AI can improve Human-centred Research
There are all sorts of ways that AI can be used to improve "human-centred research". An AI chatbot can do a perfectly reasonable job of asking lots of people some questions, and I think even a cynic would agree that it's at least comparable to a more traditional paper questionnaire or series of web forms (where you tick boxes, move sliders, or maybe type text into a box).
AI tools can be used to process the responses - large language models can do an excellent job of evaluating written text, transcribing recordings of interviews in minutes (where a human would take hours), identifying and summarising patterns in the data, and applying a structure to unstructured, qualitative data - making deeper analysis much more straightforward to carry out.4
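As an illustration of the 'applying structure' point, here's a minimal sketch of LLM-assisted coding of open-ended responses against a fixed theme list (again assuming the OpenAI Python client; the model name and the themes are made up for illustration):

```python
# Minimal sketch: use an LLM to apply a simple code frame to open-text responses.
# Assumes the OpenAI Python client; the model name and theme list are illustrative.
from openai import OpenAI

client = OpenAI()

THEMES = ["price", "ease of use", "customer service", "reliability", "other"]


def code_response(text: str) -> str:
    prompt = (
        "Assign the following survey response to exactly one of these themes: "
        + ", ".join(THEMES)
        + ".\nReply with the theme only.\n\nResponse: "
        + text
    )
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return result.choices[0].message.content.strip()


print([code_response(r) for r in ["Took ages to get a refund", "Really simple to set up"]])
```

You'd still want to spot-check a sample of the coded output against a human coder - automating the coding doesn't remove the need to validate it.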
Once the data has been collected, AI chatbots can help to communicate the findings - instead of being talked through dozens of PowerPoint slides (plus appendices) telling you about all the work that they did, you or your client could have a chat (in your own time, at your own leisure) with an LLM that could answer your specific questions about what they did, what they found, and what you should do about it. (And skip all the boring methodology stuff that you don't care about, because you trust that you're paying an MRS-accredited professional to do their job properly.)
But it seems obvious that if you're doing "human-centred research", the one element that AI can't - or at least, shouldn't - replace is the bit at the centre of it: the "humans" that you're researching.
Challenges of "synthetic data"
Let's suppose that you've got a tricky audience you want to talk to - smaller audiences tend to mean larger incentives to get them to speak to you, and financial incentives can get challenging if you're trying to talk to particularly affluent people. (Or, if you're dealing with business-to-business, talking to people about their profession but not paying them on a professional basis.) That makes B2B, ethnic minorities or "high net worth individuals" particularly tricky audiences to do "human-centred research" into - and therefore the audiences where the cost advantages of using synthetic respondents as an alternative look most tempting.
Yes - synthetic respondents will get you an answer, but thinking about where the data is coming from - the training data in a large language model - are you actually going to be hearing from the people you want/need to hear from? Or is there a risk that you're just going to hear about how the ‘out-group’ thinks about the ‘in-group’, reworded into a compelling-but-inauthentic first-person perspective?
How likely is it that any preconceptions you personally hold will be challenged rather than reinforced - rightly or wrongly? Especially if you don’t happen to hang around with the sort of people you're trying to learn about? If you can keep asking questions until you get the answer you want, what happens if that isn't the answer you need?
The thing is, "rich people" aren't defined by being rich. Some of them are rich because of a combination of hard work and good fortune. Some (lottery winners, for example) from pure good fortune. Some from pure hard work. And some are simply from a privileged background. There is no "high net worth individual persona" that typifies high net worth individuals.
That goes for most demographic groups, because people aren't defined by their demographics. Demographics are a good way of describing a group of people - it makes sense to say a room full of people was mostly young, mostly old, mostly white etc. - but they are a terrible way to describe individuals.
So - we're dealing with an approach that defines 'synthetic respondents' in a way that the people they are supposed to represent would probably not define themselves, built on a system that doesn't have a way to distinguish between how they view themselves and how they are viewed by others. Most likely - assuming a foundation model from OpenAI/Google/Meta - with no way of interrogating the dataset that the model has been trained on.
Balancing technology innovation and Human-centred Research
Like a lot of people recently, I’ve spent a fair amount of time trying to get my head around AI: how neural networks and LLMs actually work; how you can use them as part of an automated process to speed things up in a way that can be validated (where validating doesn't mean redoing the work the AI was supposed to do for you, but isn't just paying lip-service to the idea either); how they might make fieldwork more efficient; how they could help with analysis and reporting in ways that avoid the risk of hallucinations; and so on. In other words, I’m certainly not an “anti-AI” kind of person. I'd go so far as to describe some of the capabilities of LLMs and neural networks as 'magic'...
But even something that can do "magical" things can't do anything and everything. There are things AI can do very, very well - and there are things that it does badly. Giving responses that sound true is a thing AI can do very well. Giving responses that are true - well, we will see what future generations of LLM can deliver (I'm willing to suspend judgement on their future potential), but we're certainly not there yet. Can an AI ever give a "true" representation of a person that doesn't even exist?
In the same way that lots of people confuse “Generative AI” - ChatGPT, DALL-E etc. - with any sort of AI/machine learning, my main concern here is that people using gen-AI in stupid ways to generate stupid data, without understanding why it's stupid, are going to tarnish the whole field with a reputation that will be very difficult to shift. There is a related issue where "big sample sizes" have often led people to disregard the importance of representative samples. (Or to put it another way - how many men do you need to talk to if you want to understand how women really feel? What happens if you design a world based on that sort of data?)
If “we use AI to…” turns into an instant deal breaker the moment we hear it, then the opportunity to make work faster and more efficient, or to unlock new capabilities and insights, is going to be missed.
Potential uses of synthetic respondents
OK - that's obviously a pretty cynical take: synthetic respondents are a pretty poor substitute for speaking to real people about their thoughts and feelings.
That isn't to say that "synthetic respondents" have no value at all. Any sort of evaluation should start with a benchmark, and talking to a synthetic respondent is probably better than nothing; if you can't talk to people in your target audience, then a well-crafted synthetic respondent (or maybe a group of them) is likely to be a genuine improvement on nothing at all. For example:
- There's a cliche about a "sample of one", where you take your own opinions and experiences and project them onto an imaginary person. "If I were fifteen years old, I don't think I'd like TikTok very much" might well be true - but what a Gen-Xer imagines they'd think at 15 isn't really relevant to understanding the culture that a 15-year-old Gen-Z-er actually lives in.
- If you're a researcher preparing to moderate a focus group or conduct an interview that you're expecting to be challenging, then synthetic respondents could be a good way to do a dry run, or to use as a training aid. Given the rise of remote working - and not being able to chat to people in the office about something you're working on - I think this is probably their most interesting use.
- If you're scripting a survey, synthetic respondents could be a good way to test out whether your response options capture things you might not have thought about, or to test out question routing (where the questions that are presented depend on previous responses) - see the sketch after this list.
- Survey results that come from an unrepresentative sample - especially one embedded in the relevant category/industry/brand (eg. an internal advertising agency survey about advertising, or a company survey about the company's own products) - are probably worth comparing against a similar output from synthetic respondents. (At the very least, it would probably be interesting to dig into where and why responses differ from one group to the other.)
- Finally, there are certainly expensive research projects that only tell you things that you already knew; synthetic respondents might be worth considering as an alternative. (Although, if you knew what the findings were going to be, then maybe the problem is that it was a badly designed/briefed piece of research, and you'd get better results from a better project design or from working with a different research partner for a fresh perspective.)
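On the survey-routing point above, here's the sketch I promised: an entirely made-up questionnaire, with randomly sampled answers standing in where the synthetic (or real) respondents would plug in:

```python
# Minimal sketch: simulate respondents through survey routing rules to check
# that every question is reachable. The questionnaire is made up; random
# answers stand in for where a synthetic respondent's answers would go.
import random

# question -> (response options, routing: answer -> next question, None = end)
SURVEY = {
    "Q1": (["yes", "no"], {"yes": "Q2", "no": "Q3"}),
    "Q2": (["a", "b", "c"], {"a": "Q4", "b": "Q4", "c": "Q3"}),
    "Q3": (["low", "high"], {"low": None, "high": "Q4"}),
    "Q4": (["agree", "disagree"], {"agree": None, "disagree": None}),
}


def simulate(n_respondents: int = 1000) -> dict:
    seen = {q: 0 for q in SURVEY}
    for _ in range(n_respondents):
        q = "Q1"
        while q is not None:
            seen[q] += 1
            options, routing = SURVEY[q]
            answer = random.choice(options)  # swap in an LLM-generated answer here
            q = routing[answer]
    return seen


counts = simulate()
print(counts, "never reached:", [q for q, n in counts.items() if n == 0])
```

Running a few hundred simulated respondents through the routing quickly shows up questions that nobody can ever reach, or routes that dead-end - exactly the kind of pre-fieldwork check where a fake respondent is just as useful as a real one.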
There's something that ties all of these use-cases together, and that's the idea that you're not using them to replace "real people" in a process. You're augmenting the process with synthetic personas.
There's a distinction to be made here: synthetic personas explicitly do not represent "real people"; they are fictional representations of a group of people. A kind of 'sketch' of the type of person who represents your target audience segment, which can be used for the kind of work that is not 'human-centred'.
From ThinkWithGoogle:
Personas are fictional profiles that represent groups of similar people in a target audience. They can help you figure out how to reach people on a more personal level, while delivering the right messages, offers, and products at the right time.
Synthetic respondents are a great tool to help refine, improve and 'flesh out' marketing personas.
Going back to the "high net worth" audience; it probably makes sense to talk to a synthetic persona of a millionaire startup founder, a lottery winner, landed gentry etc., especially when you don't realistically have the option of spending time with real people in those groups. It sounds like a great way to help you get more familiar with the personas - but keep you grounded in the idea that they are only personas and not real people.
Why we do "Human-centric research"
There's a famous Steve Jobs quote on 'research' that I think sums up the importance of the distinction:
"Some people say give the customers what they want, but that's not my approach. Our job is to figure out what they're going to want before they do. I think Henry Ford once said, 'If I'd ask customers what they wanted, they would've told me a faster horse.' People don't know what they want until you show it to them. That's why I never rely on market research.
Our task is to read things that are not yet on the page."
It sometimes gets misconstrued as "Apple never did market research" - but that isn't what it's saying. Apple never did product design by market research. That's how they sold a smartphone that didn't have 3G (when everyone else did), didn't have a 'proper' keyboard (when everyone else did), didn't have a carrier subsidy (so was much more expensive than all the competition) and didn't have a way of installing apps (when everyone else did) - and took over the world with it.
One of the issues with this kind of "market research" is that you're effectively asking 'normal people' to help you do your job - something that you're supposed to be the expert in.
Generally, it involves taking people out of their "natural" environment, while asking them to pretend that they aren't. This often leads to putting people into a position of 'co-creation' and getting them to think the way that you think, when what you really want to know is what they think. You're asking "what do you think about this?" - but it can easily come across as "how would you make this better?"
(Or to put it another way; if you want to design a car that Homer Simpson would want to drive, you should talk to a professional designer - not Homer Simpson.)
So - how do you do that without asking people what they want?
There's another Steve Jobs quote - from a one-pager titled "Apple's Marketing Philosophy" - that lays out the first principle as empathy, an intimate connection with the feelings of the customer: "We will truly understand their needs better than any other company."
I'm not sure if there is another single product that has generated more revenue than the iPhone - because Apple didn't just know what people wanted from a smartphone before they knew it themselves; they designed it, built it and shipped it (while the rest of the industry laughed) before anyone else had really figured it out.
I don't think chatting with bots will ever get you anywhere near that sort of goal.
How AI should be used to improve human-centred research
AI tools offer some truly revolutionary possibilities for research, but they cannot - and probably never will - replace the kind of understanding that can be gained from authentic, human-centred research.
The key to success in using AI tools for research will be in thoughtfully integrating these tools into workflows to enhance and augment authentic human empathy and insight.
1. For example, a 'linguist' is someone who is skilled in speaking multiple languages - which isn't the same as studying linguistics. ↩
2. Rewording a sentence is a great example of something that a Large Language Model can do very well; ask ChatGPT to rephrase the question and see if it still describes what you're really trying to answer. ↩
3. To be clear: within the fictional world of The Thick of It, "Mary" is a fictional character, played by an actor (who is also a fictional character in The Thick of It). ↩
4. I strongly suspect that most of the words that get typed into the "Other - please specify" boxes on surveys never get read - which suggests that there is a huge opportunity for AI-related improvements to human-centred research processes. ↩