Episode 74

How to use behavioral science to influence people and understand consumers

Richard Shotton, behavioral scientist, author of The Choice Factory and founder of Astroten, discusses the power that social proof and biases hold, where to apply behavioral science to better understand consumers and drive change, and how to best maximize productivity with AI.

The interview
The transcript

Ryan: Hi everybody and welcome to this episode of Inside Insights, a podcast powered by Zappi. My name is Ryan and I'm your host and I am joined today by Richard Shotton, a behavioral scientist and the author of The Choice Factory. I had a chance to meet Richard properly a couple weeks ago where he gave an amazing keynote at our Connected Insights Conference.

Ryan: So I had to have him on the podcast so we could talk about behavioral sciences and technology together. Richard, thanks for joining. 

Richard: Well, thank you for having me. Good to be here. 

Ryan: It's a pleasure. I was giving Richard sh*t about this earlier. If you've read his books, you know he drinks lager. I also drink lager and here in America, people often judge me for this, you know, like the craft beer movement.

Ryan: Many years ago, I would have told you I was a Miller Lite drinker. That's what I like to drink. And one of our salespeople said to me, you can't tell the difference. And I was like, I sure can't.

Ryan: I had heard from a professor at university: people don't drink beer, they drink advertising. Which I thought was interesting. So we put this to the test. We got four red cups, we poured Bud Light, Miller Lite, Coors Light, and I couldn't tell the difference. But the bet we made was, if he was right, I had to switch to Bud Light in perpetuity. So that was about eight years ago. 

Richard: Yeah. And I'm guessing…

Ryan: Yeah. I drink Bud Light. 

Richard: That point about you drinking the advertising is backed up by evidence. There's an awful lot of experiments that show what we expect to taste affects our actual experience. So one of the famous studies is by Raghunathan, I think at the McCombs business school in Texas. And he gets a group of Americans and serves them a buffet of Indian food and gets them to rate all the different elements, the samosas, the curries, the naans, but doesn't care about any of the ratings.

Richard: They're all just a sideshow, a smokescreen. The only thing he cares about is their ratings of the lassi. So the mango lassi, this is an Indian yogurt drink. 

Ryan: Okay. 

Richard: And half of the people he's told the lassi is an unhealthy drink. Half the people he's told it's a healthy drink. And when he looks at the ratings, he finds that those who think it's healthy rate the product significantly worse than those who think it's unhealthy.

Richard: So the people who think it's unhealthy rate it about 50, 55 percent higher. And his argument is that this applies especially in America. You don't find this effect in France; I would suggest you probably do in the UK, where I'm from. But in America, there is an assumption that being healthy is going to be bad tasting.

Richard: So people have this negative attitude. That becomes self-fulfilling, and it affects the actual taste of the product. So his study was on labeling, but you find the same thing around price, brand, the receptacle people drink from. However you set a positive or negative expectation from the surrounding information, it will affect the actual taste of the product.

Richard: So yeah, I think that lager test is a really interesting one. People think they make a judgment on taste, half the time it's on superfluous factors. 

Ryan: It is. The study reminds me of how even as adults, you can see your inner child come through like, you know, kids, we need to eat healthy tonight.

Ryan: And the first thing they do is complain: I don't want to eat vegetables. And vegetables are delicious. But of course, as a kid, you don't know that. So it's that sort of expectancy effect. It's interesting to me. It probably explains why so many people drink matcha, because for the life of me, I can't explain why anybody would consume that, but it's healthy.

Richard: Yeah, the other bit there, you could say, well, maybe it's the fact it tastes foul. And I don't know, maybe it's an acquired taste, but as far as I'm concerned, it tastes like you're drinking grass. I would imagine the assumption works the other way around: well, if it tastes this bad, surely it's good for me.

Richard: So yeah, these are fascinating findings that brands could use. If you're a brand, be very, very careful about telling people you've taken out the additives or the sugar, because people will think you've made the product taste worse, even if there's no change. You might want to be silent about it.

Richard: Or if you're a health brand, maybe don't add chemicals in to try and mask some of the unpleasant flavors; actually, that bad taste might emphasize your credibility as a health brand. 

Ryan: Exactly your point. Yeah, it's interesting, cause we test new ideas all the time.

Ryan: And I can't tell you how many people upload ideas with every bullet point imaginable to describe their thing. And I think you offer some really great advice here: be really choiceful not only about what you say, cause there's only so much real estate and headspace, but about the impact what you say is going to have on people.

Richard: There's an amazing campaign from Canada. I don't know how well known it is in the U.S.; it's not very well known here in the UK. There's a cough syrup called Buckley's, and they had this amazing journey from being a small player to rocketing in the 1990s. And that was when they changed their slogan.

Richard: And the slogan was: it tastes awful, and it works. They emphasized the foul taste because they knew people assume there are trade-offs. And if you emphasize it's really bad tasting, rather than trying to hide it away like most brands would do, people assume, well, if it's that bad tasting, it's going to be amazingly potent.

Ryan: Yeah, that's, that's really smart. Okay. So I want to have two sides of a conversation with you. The first is inside a company and the second is how companies show up with customers. So every company out there that you and I consult with and do business with is on some digital transformation journey somewhere.

Ryan: And they're having varying degrees of success, despite the fact that they all say they're doing it and they're all still customer centric. And a lot of folks that listen here to the podcast are people who are trying to drive change, to elevate customer centricity in their business and ultimately to elevate the impact they can have.

Ryan: And I think a lot of people listening would rather focus on behavioral science, strategy, and growth all day than on running market research. And I sort of live by the worldview that if you get the stack in place to do market research, then the really smart insights people can do the fun work. But people, process, change sort of get in the way.

Ryan: And so what I wanted to do is have a chat with you about where we can apply behavioral sciences at work to better drive change in our jobs. And I think we can probably have a few different principles we can talk through, but I guess I'll turn it to you. What are some pieces of advice that you'd call to people's attention if they're either being forced to change or trying to drive change themselves?

Richard: Yeah. The first general point I'd make is that the insights into consumers that behavioral science generates are generally just as relevant for professionals. So probably the most famous body of ideas in behavioral science is this idea of social proof: if you want to change behavior, emphasize that it's a popular behavior, that lots of people are doing it.

Richard: And that's been proven again and again and again in the consumer world. But, it's just as powerful amongst professionals. There's a 2018, might be 2019, study by the Behavioural Insights team and the Australian government where they tried to persuade doctors to give out fewer antibiotics because there's these huge long term health risks.

Richard: And some doctors were sent a letter saying, don't give out so many antibiotics, it's really bad, here are all the bullet-point reasons why it's going to cause a problem. And there was a 3.2 percent reduction in prescription rates. And we're talking about a large sample, 6,000 people, so it's not a small-sample fluke.

Richard: Other doctors were sent exactly the same educational message, but with one added line saying: you are giving out more antibiotics than 85 percent of other doctors. And when they did that, there was a 9.3 percent reduction in prescription rates. So even amongst this group, these doctors, even people who define themselves as rational, sensible, logic-based decision makers, even they are deeply, deeply influenced by behavioral biases.

Richard: So I think the first thing, if you want to change the behavior of others: don't think all these studies that have worked on consumers are irrelevant, that you can't use them when you're trying to persuade your colleagues. They're just as relevant to your colleagues. And I would say that social proof point is definitely something that can practically be applied with your colleagues.

Richard: If you think about most attempts to change behavior, they don't do this; they're guilty of using social proof in the wrong way. So what a lot of campaigns do internally is say to people, you know, none of you lot are filling your timesheets out, or no one is undertaking research in the way they should. The danger there is that you have used social proof, but in exactly the wrong direction.

Richard: You have emphasized that lots of people are misbehaving, and all the evidence from Cialdini and others suggests that if you emphasize the scale of the problem and how many people are misbehaving, you remove the sense of transgression and make it even more likely that the behavior will happen. So the first thing to do, I would say: whatever you want to encourage people to do, make sure you emphasize the volume of people who are already positively doing it, not the volume of people who are misbehaving.

Ryan: It makes sense and it resonates with me. I've had the benefit of helping a lot of companies use technologies better. And one of the principles I've always followed is to go where there's momentum first. So if you can get the United States, China, Mexico to adopt first…

Ryan: At some point, when you're sitting with your French colleagues, you can use this, right? Exactly what you're saying: hey, did you know that you're actually the only region that hasn't leaned into our program yet? Yeah, that FOMO is real. 

Richard: Yeah, and you can see people doing that on a wide scale.

Richard: So I always think Zoom's a nice example. If you look at a lot of Zoom advertising, they no longer talk about the magic quadrant or whatever they used to; they talk about millions of users. But when they first launched, they of course had a problem: they didn't have millions of users, by definition.

Richard: So what they did was pick a single sector that they could quickly scale in. They picked education: rather than charging $9.99 a month, it was 99 cents. So they gave this ridiculously good-value deal to educational bodies. Then, once they'd got momentum, they could talk about being the leading educational provider, and that made them even more appealing in that sector.

Richard: And then, once they'd got a decent weight of users, they could go out and use that number to appeal to other sectors. 

Ryan: But there's also a really interesting lesson there in the framing: "you give out more than 85 percent of other doctors" is better than saying, tsk, tsk, you're bad. And that brings me to something I'd love for you to explain a little bit to me.

Ryan: So, I've fallen guilty of confirmation bias, and I see it a lot, right? Like, I end up surrounding myself with heads of insights who also believe the same thing. 

Richard: Yeah. 

Ryan: And then I see them fall victim to: why doesn't everybody get this? Or, I'll give you a real example. We created an Insights Maturity Framework for our book.

Ryan: And it was one of the most powerful things that I've done, because it reminded me: there are three levels of maturity that we articulate, and 70 percent of the people that we do business with are still at level one. And I had this moment of, oh my God, I've had this blind spot for five years while I was literally authoring a book.

Ryan: And ever since then, it's really helped me better meet people where they are. But I guess, if you're running a department and you're driving change, how do you avoid the bias of only seeing the things you naturally agree with, and not miss where the dissonance actually is, so that you can address it?

Richard: Yeah. Because there are almost two different biases I think at play. There's the false consensus effect, first experimented on by Lee Ross back in 1977. And then there's confirmation bias. The false consensus effect is essentially that we assume others are more like us than they are. We don't assume the beliefs, the attitudes, the experiences we have are universal, we're not stupid, but we assume they're more commonplace than they likely are.

Richard: So for Ross's study, it's quite a clever thought experiment. He recruits a group of people and says: look, imagine you've driven through a 30-mile-an-hour zone in your car and you were doing 35 miles an hour. You're stopped by the police and given a ticket, rightfully, but when you get home, you look at the ticket and realize there are all sorts of administrative errors.

Richard: So the make and model's wrong, the date's wrong, all sorts. So you could probably appeal it and you might get off. So the first question is: would you appeal, because you've got this chance to avoid paying a fine, or would you accept it, because you were doing the illegal deed? So people answer. And then the next question, this is the nub of the experiment.

Richard: He says: what do you think other people would do? And what he finds is, if people would appeal the fine, they assume the majority would also appeal. Whereas if people would accept the fine, they assume the majority would accept it. So what he argues is that whatever attributes we have, we overestimate their popularity. So if we vote for Kennedy, it's not that we think every other voter votes Kennedy.

Richard: We will assume that far more Americans do than is actually the case. So for this one, I think the key point is recognizing that we are not the market, and actively going out to challenge our opinions and look for ways to put them to the test. Because what often happens is people assume their behavior is commonplace, and therefore they don't even consider testing it.

Richard: So I think that's your first issue. The second issue, what you talk about, is confirmation bias, which is even more pernicious, and which Leon Festinger did an awful lot of experimentation around. And his argument was that we don't interpret information neutrally. We interpret it through a lens of our feelings for the communicator.

Richard: If someone you dislike tells you an argument, you spend an awful lot of time thinking of mental counterarguments about why that information just isn't true. But if you get exactly the same message from someone you admire, you're far more predisposed to it. Now, Festinger didn't just isolate the problem; he also looked at potential solutions, potential moments when people were more open to change.

Richard: So it's a slightly strange study, but Festinger recruits members of college fraternities. These were slightly controversial educational bodies that people could join. He gets members of those fraternities to come to his lab, where he plays them an audio argument about why those fraternities are morally wrong.

Richard: Some people just listen to the audio argument with no distractions. Other people listen to the audio argument whilst they are forced to watch a silent movie. And later on Festinger questions all the participants about how far they've changed their opinion. It's the group who were distracted who were more likely to change than the group who gave their full attention.

Richard: And Festinger's argument is that the brain is amazing at generating counterarguments to maintain its existing point of view. But if some of that ability is taken up, because you're doing another task at the same time, you become a little bit more persuadable. So Festinger comes up with this very counterintuitive idea: if you want to challenge rejectors and get them to change their opinion, actually it's moments of distraction that you should be focused on.

Richard: So from a brand perspective, if you want to win over rejectors, consumers who reject your brand, don't reach them in the cinema when they're giving you their full attention. Reach them on the radio when they're driving or doing the housework. It leads to a very counterintuitive media choice in that particular respect.

Ryan: That's interesting. So let's talk about that inside of a business. I hear a lot of leadership rhetoric that is like: repeat, repeat, repeat, jam these messages down people's throats, have two or three messages. The CEO's got, like, chief repetition officer in their title, right? 

Richard: Yeah. 

Ryan: Um, so say I'm leading somebody and I want them to stop being a procurement manager.

Ryan: Let the tools do the work so I can focus them on being in more strategic meetings. So bring me into that paradigm. How can we do that? Where's someone distracted in a B2B context? Because I find it really fascinating and I agree with it. I'm somebody whose subconscious does most of the work for me.

Ryan: So my best ideas come when I'm not working, so I'm with you on that; it allows me to think and process. But yeah, let's unpack that a bit. 

Richard: So I think there are two ways of using that insight. The first is, I think the point around distraction is probably better used for consumers.

Richard: The point around confirmation bias, though, I think there's a different set of tactics you could use to get around it. So remember we said, that was the argument: essentially, we interpret a message through a lens based on our opinion of the communicator.

Richard: Well, if you accept that it's not what's said but who's saying it, that can be much more easily applied to B2B. It means if you have a fractious relationship with the head of IT, you could tell him exactly the same logical argument and it would fall on deaf ears. But if you can persuade someone who's much more friendly with the head of IT to be the communicator for that argument, well, then you'd have a very different impact.

Richard: And that principle has been tested repeatedly. So the original ideas were tested by Hovland and Weiss back in 1951. And what they did was find a controversial question of the day. So one of the questions was, do you think America can build a nuclear powered submarine in the next 12 months? And people would either say yes or no.

Richard: They then invited those people back to the lab four days later. And when the participants arrived at the lab, there was a page of A4 paper waiting for them. And on it, there was a very tightly, cogently argued opinion about why their past position was completely wrong. So if I'd said yes, a submarine could be built, it'd be a very powerful argument about why that just wasn't realistic.

Richard: They then asked everyone if they'd changed their opinion. But the twist was, sometimes the argument came from a credible source, Robert Oppenheimer, the physicist. Sometimes it came from a low-credibility source, like Pravda, the Russian newspaper. And what the psychologists found was, if the argument came from a credible source,

Richard: 23 percent of people changed their mind; from the low-credibility source, it was 7 percent. So you've got this three, three-and-a-half-fold swing in the persuasiveness of a message. Crucially, it's exactly the same message; all that's changing is the supposed author. So their argument is: it's not what you say, often it's more important who says it.

Richard: So thinking far more carefully about, well, who would be the right messenger for which particular audience, that I think would be the B2B angle. 

Ryan: I think it's spot on. I see so often a leader bring in somebody from outside to inspire their team, to give the same damn message they gave.

Ryan: And it cuts through, probably in the same three-to-one ratio that the study found. So it's a really good practice. And thinking of my audience, insights and marketing people: it's a community of really like-minded, intellectual people, and so I encourage y'all to lean into that, because bringing in Richard can help your marketers think better about behavioral science.

Ryan: If you've got a CMO who just wants to put all the features in, he'll help that person see it, maybe better than you can even explain it. And so, I like that. 

Richard: It's interesting you mention that example, because later work suggests there are three types of messengers that tend to be effective, and it's hard to get one that embodies all three qualities. But the three are: credibility.

Richard: So Robert Oppenheimer is credible in that he obviously has this huge physics knowledge. Neutrality, so that's your point: if the CEO wants to say something and argues it directly, everyone will think, well, he would say that, wouldn't he, or she would say that, wouldn't she; it's in her financial interest to say it.

Richard: If you get someone external who looks like they haven't got any skin in the game, suddenly they become a bit more believable. And then the third, which is a slightly different angle, is relatability. We're most influenced by people who we see as similar to ourselves. So if you're trying to gauge how effective a messenger will be, think about neutrality, credibility, and relatability.

Ryan: It's gold. The other thing, and I think for everybody listening, you didn't get into insights by accident, necessarily; you're a curious person. But the point you made a while ago, I just wanted to comment on it. I'll say it in my own words, but it's essentially this projection of: I'm in this situation, I feel and act one way, and therefore I expect you to act that way.

Ryan: And I'll just be vulnerable with everybody: I've gotten burnt by that bias before. Like, oh, I can't believe when so-and-so was in that corner, they behaved that way; I would have never done that. Well, they're not you, right? And so I think it behooves you to pay more attention to understanding your colleagues and their biases, because you shouldn't blindly assume that they're going to agree with you.

Ryan: Um, and I think a lot of us do it. I've fallen victim to this more times than I care to admit. 

Richard: Uh, absolutely. Oh, I've certainly been a victim. So we had the Brexit vote, I think it was in 2016, where Britain had a referendum on whether to leave the European Union or not. I was a remain voter. And generally, the key drivers, if you were going to guess how someone would vote, would be educational level and age.

Richard: So most of the people I knew were voting remain, and I kind of blindly thought, well, this is going to be an easy victory; I was sure the polls were wrong. And then, obviously, when the results came in, it squeaked the other way. So yeah, even knowing about these biases, I still fall victim to them myself.

Ryan: Yeah, the Brexit vote was not too dissimilar in nature to what happened in the first Trump election, right? I mean, in that case, and it's happening here in America now, you get a few beers in someone and then they go, I'm embarrassed to say this, but… And, you know, that's why I think, and I know we've talked about this, measuring what people actually do and why they do it

Ryan: needs to be balanced with what they say, because sometimes they're going to lie to you. And this is a good example of that. In the Brexit case, it was maybe less polarizing because of the geographic distribution of where the votes came from. But at least here in the United States at that point, it was a bit of a taboo thing.

Ryan: And it was like, oh sh*t, people did vote for him. You know, it was really interesting. 

Richard: It's a real problem in the UK for the advertising industry, because they're not reflective, or we're not reflective, of the country. A media owner called Reach, who owns some big newspapers, did a survey after Brexit about how people voted. 97 percent of the people they surveyed in ad agencies voted remain. So 97 percent remain, 3 percent leave. But the actual result, I think, was 52 percent leave, 48 percent remain. Absolutely out of kilter. So to me, the false consensus effect is also a big argument for diversity.

Richard: You can't just expect people to be completely empathetic. You need to get a variety of experiences, backgrounds, and world views in a group, because otherwise you have massive blind spots. And we often have blind spots. 

Ryan: We are. I mean, there's a pretty famous internal example: we had a campaign that we were evaluating for one of our customers, and

Ryan: it was not an ad that resonated with the population. And the CEO of the company was pissed. Like, I love this idea; I don't understand why this didn't go well. And their insights person was like, well, why do you think that? And he was like, I tested it at a cocktail party with my friends. And the point was: yeah, but you don't sell your product to those people, unless this is a product exclusively for them.

Ryan: Don't be surprised when Joe Schmo down the street doesn't like it, you know? I talk a lot about the consumer being in the room; it's why I do what I do, because it levels the playing field. Rather than assuming you're talking to a representative set of people, it actually says: oh no, if you're trying to sell to these people and it doesn't resonate with them,

Ryan: you might want to change it up, you know? Yeah, absolutely, absolutely. The other thing, I don't know if you've ever read this article, but Harvard Business Review, and I want to say this is like 10 years ago, wrote this op-ed, and it was essentially saying: use water-cooler gossip to your advantage.

Ryan: And it got me thinking when you were talking: someone might be more susceptible to changing their mind when they're not paying attention. And if you think about communities and groups of people, there are official leaders, and there are what I call locker-room leaders, people who, when everything's a mess, everybody calls and says, what do you think?

Ryan: And I read that article probably eight, nine years ago, and I have really changed my behavior from worrying about office politics and gossip to embracing it. Because if you know who the key people are, and you know that that gossip happens, well, then they can help you. You can say, hey, this is what I'm working on.

Ryan: And when they're talking to people, it comes out. 

Richard: Yeah, I think that's a great point. The other idea it sparked, and this is a different angle, but I think it solves a similar problem on this whole point of confirmation bias and how you get around it: there's a wonderful psychologist, I think at the University of Bath, Robert Heath.

Richard: And he wrote a book called Seducing the Subconscious. He talks about the Festinger experiment and how people are more open to changing these deep-seated beliefs when they're distracted. But he takes a very different angle on harnessing it. And he uses the example of the British Airways ads in the 1990s.

Richard: It's quite an old book. This was a time when British Airways had a poor reputation for quality. And they changed that not by going out and saying, oh, we've now got amazingly luxurious seats and lots of legroom; what they did was communicate far more obliquely.

Richard: So every single ad had the Delibes Flower Duet from the opera Lakmé. You know, it's a hauntingly beautiful bit of music. And Heath's argument is, look, classical music has all these connotations, so it obliquely conveys quality, but they never directly say it. And because you never directly say it, you don't generate these counterarguments.

Richard: Now, that principle of the body language of an advert being very important for rejectors, for getting around confirmation bias, I think you could probably apply in B2B settings, in internal settings. Because it's not just the logical argument you make in your deck; it's how slick the graphics are, it's whether you come across as likable, it's all those seemingly frivolous elements of the presentation that are actually effective, because you're communicating indirectly, and that doesn't suffer from confirmation bias so much.

Ryan: It's so true. And I forget the exact principle that you talked about in The Choice Factory, but this last point you made shows up when you know something through and through. I mean, anybody listening: you ever sit there and deliver a presentation on a topic you know left, right, and center, and you leave and feel misunderstood?

Ryan: And it's like, I forget the name of the bias that you talked about, but it was essentially the curse of knowing too much. 

Richard: Okay. 

Ryan: Yeah, yeah, yeah. Um, And I think if you're really close to something, you have to be really conscious of the fact that just because you know everything doesn't mean they do. And it's almost like tailor back what you say and be mindful of your graphics and the credibility and the way it comes across.

Ryan: It resonates with me because I've been guilty of it. Like, in my case, I understand my customers so well: why doesn't everybody get it? You know? And that's a limiting factor if that's your mindset. 

Richard: Yeah, absolutely. No, I 

Ryan: All right. So I want to transition a little bit in our last few minutes together.

Ryan: We're seeing advertising, music, videos, content all get created quickly using language models and imagery, and this is going to keep continuing. I mean, if you're inside a big business, there's a cost takeout; there's an efficiency associated with producing things. And consumers, I'm one of them,

Ryan: you're one of them, and I've seen this come through in some of our work: their bullsh*t meters are high now. I was listening to something today and I went and triangulated it with three different sources, cause I didn't believe it. So brands are going to want to focus on the productivity gains of creating with AI at a time when human beings are more skeptical than ever about what they let into their home.

Ryan: So what advice do you have for a brand to balance the productivity benefit with the wonderful opportunity to be distinctive, unique, and relevant in a community today? 

Richard: Yeah. So I think you can attack that problem from both ends. Firstly, how do you maximize the productivity gains from AI?

Richard: Because psychologists would say it's not as simple as just increasing the effectiveness of the algorithm. There's a brilliant 2022 study by Fabrizio Dell'Acqua, I think at Harvard Business School. He recruits 180 recruitment consultants and gets each one to look at 44 CVs, and they have to rate the candidates' mathematical ability.

Richard: So it's a quantifiable thing that he can objectively score, and the information they need is slightly hidden. Now, some people get an algorithm that he refers to as bad AI: it's 75 percent accurate. Others get an algorithm that has been designed to be 85 percent accurate, which he refers to as good AI.

Richard: And logically, you would think that the people with the better algorithm would get better results. That is not what happens. The people with the supposedly better algorithm get worse results. And Dell'Acqua's argument is that when AI crosses a certain threshold, when it becomes more and more useful, there is a danger of, in his words, people falling asleep at the wheel.

Richard: There is a danger of thinking, well, I'm not needed here, my expertise isn't needed, I'm just going to cut and paste the answers into the spreadsheet. So he's arguing that when you're judging a model, don't just think about improving the model in isolation. You've got to ask, how do I make the AI-plus-human combination as strong as possible, and do everything you can to remind people that their discretion, their intelligence, their judgment still has a role.

Richard: And they cannot abdicate responsibility. So I think there are psychological principles that can definitely apply in that productivity part of the equation.

Ryan: Yeah, it makes sense. And it also, it's how I think of it. Like I, I use it to enable me a lot, but it's me and the AI. I'm never copying and pasting stuff.

Ryan: But I think it's interesting to transition to brands. I have a high bullsh*t filter, personally. I don't want to be written for; I want to speak for myself. It's a really interesting tension. Okay, so we're using machines and people, agreed. How do we balance the authenticity that consumers, I believe, expect now more than ever?

Richard: Yeah, I think there is a danger that if you just use ChatGPT in quite a formulaic way, you come up with formulaic answers. So I think there are a couple of points you could make there. There's a wonderful book by Ethan Mollick called Co-Intelligence, and in it he talks about how, in your prompts, you shouldn't just ask a question; tell the AI to adopt a persona.

Richard: So maybe telling it: answer as if you were an amazingly creative advertising exec, or as if you were a behavioral scientist. Moving from a generic question to a prompt that closely reflects what you need differentiates the output and makes it individual.

Richard: And then I think the second interesting area would be: if you use a standard model that's been trained on the same information as everyone else, you get standardized answers. If you train it on your own discrete body of knowledge, maybe a corpus of 25 behavioral science books, that's going to lead to a different output.

Richard: So I think you can start making some decisions that will mean you're less generic, less bland through those two tactics. 

Ryan: I agree with you. So there's a system called Jasper.AI, and our marketing organization uses it. And I just told you I don't like to be written for, but I actually do get written for a lot, for two reasons.

Ryan: There's a woman named Katie Sweet who helps produce this show. I trust her intimately; we've worked together forever. That's one thing. The other: she's very busy, she's in high demand. So we've taught the Jasper system my tone: conversational, non-corporate, direct, sometimes swears, etc. And with those six or seven prompts, she can very quickly produce something that I'm not going to have 20 edits for.

Ryan: And I think that's teaching your brand. And then there's agents. I believe the new software and agents can be codified to understand your supply chain, your consumer data, what you know about your retail environment, what you know has previously worked in advertising. And if you can harness that, to me, that's why consumer data businesses and consumer insights departments are adopting AI faster than a lot of other parts of the organization: the data is useful. You can augment what we already know with language models. So there's something really interesting in that. Yes.

Richard: As you were saying that, there's a conundrum of how open to be about something being generated by AI. Because there's an amazing set of studies by Andrea Morales, who's at, I think, one of the Southern California universities, a 2005 study.

Richard: And she coins this phrase, the illusion of effort. It's a really simple study. She recruits a group of people, everyone's in the market for a house, and she shows them 10 houses that meet their requirements. The twist in the experiment: some people are told that the Realtor generated the list quickly, it took an hour and they used a computer.

Richard: Other groups are told the estate agent, sorry, Realtor in American terms, went to loads of effort: nine hours, and did it manually. Now, when the groups rate the Realtor later on, the people who think the Realtor went to low effort rate them at 50 out of 100. The other group rate the Realtor at 68 out of 100.

Richard: So you've got this 36 percent improvement in quality ratings. And again, it's people judging exactly the same output. But according to Morales, what they are faced with is a complex question. Working out whether a Realtor is good or bad quality is a complex question. And when people are faced with complex questions, they tend to replace them, almost without knowing it, with simpler questions that give them an almost-as-good answer.

Richard: And the simple question is not, is this a good Realtor? It's, how much effort did they put in? So we use effort as a proxy for quality. Now, the problem with AI is that it speeds things up. If you're working in an organization and you are churning out work faster and faster, unless you put in counterbalancing measures, people will judge your work as worse, even if there is no objective decline in its quality.

Richard: So if you're going to speed up, make sure that everyone knows there have been lots of hours put into setting up the algorithm correctly and putting the processes in place. You've got to make sure that they know the effort has gone in, even if it isn't visible at the immediate point of response.

Ryan: I love it, because I'm acutely aware of this constant tension: my company sells to academic researchers.

Ryan: Yeah. And I remember when we first started doing Zappi, on one in every two phone calls I'd hear: there's no way this could be good research, you did it too quickly. And we really had to overcompensate. I remember, this is a nine-year-old thing now, but we had to say: this is how we get data, these are all the statistical models we use, we're crunching 10,000 data sets all at once.

Ryan: And it's those subtle things that make people say, oh, got it, all right, this is legitimate. And it was legitimate, but the perception was: this is too good to be true, and therefore it can't be good quality. So, this has been really fun. You're, I think, one of the only podcast guests I've had where I leave with more ideas than I gave.

Ryan: So thank you. I noticed this when you were at our conference; I had like six pages of notes. Because when you talk about studies and behavioral science theories, you can immediately apply them to your world and your business. And I don't think we study the science enough. So I just want to say thank you for the work you're doing, on behalf of everybody.

Ryan: Um, and really citing so many wonderful studies and bringing it to life and in such a digestible way. I couldn't recommend more Richard's book, The Choice Factory. You can get it anywhere books are sold. And Richard, if people want to get in touch with you to talk more, what's the best way to reach you?

Richard: On LinkedIn or Twitter. And if they send me a DM, I'll get back to them.

Ryan: Perfect. Richard, thank you so much for taking the time. This was really fun. 

Richard: Fantastic. Thank you very much. Good to see you. 

Ryan: And thanks for listening, everybody.
