The current state of the insights function headed into the new year
Episode 51
Steve Phillips, Zappi Founder and CEO, and Brendon McLean, Chief Technology Officer at Zappi, join Ryan Barry live to talk all things AI, from the implications of AI for market research to the threats and opportunities it presents for the world of tech, productivity, creativity and more.
Ryan Barry:
Hi everybody and welcome to this episode of Inside Insights, a podcast powered by Zappi. My name is Ryan, and today I have the pleasure of being joined by Steve Phillips, our Founder and CEO, and Brendon McLean, our Founder and CTO.
We are going to be chatting with you today about generative AI and its impact on innovation, productivity, and the world of consumer insights. Guys, thanks for joining. How are you today?
Brendon McLean:
Very good, Ryan. How are you?
Ryan:
Good. It seems as though you've missed the black T-shirt memo, but we'll forgive you. Steve, how are you?
Steve Phillips:
I'm not bad, but I'm also not in a T-shirt. It's the black top memo. But at least we got the color right.
Ryan:
Standard Steve Phillips. Yeah.
Steve:
Exactly.
Ryan:
You're turtleneck, I'm T-shirt.
Steve:
I change in the summer.
Ryan:
Before we talk about AI, Steve and I are planning to go to Cannes. I think I said it in a way where all the Europeans listening aren't going to judge me. And Steve reminded me that I got invited to a white linen party. So if I'm not wearing a black T-shirt, I'm very happy to be in white linen. So thank you for the invite, Steve.
It's a timely conversation guys, and I'm excited to have it. And for those of you listening live, feel free to ask questions. Kelsey, who's our producer, is going to be mining the chat, and we'll get to some questions that you might have. But we've been thinking about this topic for a lot longer than it was vogue, and we thought we would take some time to share our perspective, some of the things we're doing internally, but also to give you some ideas that you can take to your team once you leave this meeting, or once this leaves your AirPods.
So I'm of the view that generative AI is one of the biggest transformations that's happened since the iPhone, since e-commerce became a thing. And I think you two would probably agree, particularly for our industry. But Brendon, from a technical perspective, you've been doing this a long time. Why is generative AI so transformative?
Brendon:
Well, I've been thinking about this quite a bit because there's a lot of fads that come around. And people always ask me, "Oh, is this the next thing, or should we be looking at this?" And this was the first time in a long time where I thought, "Okay, this is something big," and I'm specifically talking about ChatGPT. But I think when that was rapidly followed by GPT-4, the jump there was... For those that were playing around with it, the jump was huge. And so I think I've gone through my whole hype cycle, from the beginning bit to, "Oh my god," to, "Is my job even safe?" to, "Okay, I'm calm now, but excited to see how we can use this."
The thing, I think, for many of us in the technology space is just the incredible reduction in time to value. Traditionally, if a tech company wanted to use something of its own... All of us have our own data, and we're all bringing a unique selling point to the market. And the question is, how do you combine AI with your unique selling point? What is the reason for your business to exist?
Now, there are people that have invested in big data science teams. We have invested in our own team. But to create something with the likes of... The power of something like Google or Facebook or any of the really big companies that spend a ton on AI, particularly machine learning and deep neural nets and those things, the fact that Netflix knows what you want to watch next and all that, that's a big investment. It takes time to train, it takes time to just get the data sets ready. There's a big process there.
And then suddenly, we're in a space where people are throwing stuff into prompts and getting value out the other side. I think what ChatGPT showed us was that there were a bunch of us, business people, tech people, going, "I wonder if I could just throw this document in here and ask it some questions," and then going, "I can't believe it succeeded." So it's the time to value; I think that's been so incredible. And obviously the other space is the creative industries.
So this is not just GPT and the like, but also Stable Diffusion, DALL-E, and that side of things. I think Sam Altman mentioned this as well, that the big surprise was that many of us, myself included, thought creativity would be the last thing to fall. And it turns out that's not the case. There are people saying, "Why does Jay-Z not release this thing? We've got AI releasing the next Jay-Z album and this is what he should be doing." So that's another thing that's just been so incredible, is just watching creativity happen.
Ryan:
I still think creativity and intuition are the last things to go. It just forces actual creativity. And we've tested some stuff on Zappi where the generative AI can beat the norm. And that to me is a symptom of companies not knowing what they know, and as a result not being bold, almost more so than a lack of creativity.
But on the jump from GPT-3 to -4, I was one of the people who was like, "Oh, shit, this has improved a lot. Can you imagine what it's going to be like in a year? It's going to be mental." So Steve, you're somebody who's quite creative. I don't think this has disrupted you. I hope it never does, because I really value your ingenuity. But from a market research and/or consumer insights industry perspective, why is this such a great enabler? And what are some of the things, from your perspective, that are going to go away, leaving room for other things to happen?
Steve:
So I think going back to what you said originally, which is we've been thinking about this for a while, I remember reading The Future of the Professions, which I would highly recommend people read, a book by the Susskinds, father and son, two professors thinking about how the world of AI and computer automation will change the professions. They wrote the book in 2016, and it's very relevant for today. They were just wrong on the timing, because frankly we went through almost another small AI winter between 2016 and 2022.
But their thinking about how it'll impact the professions is the same as our thinking about how it will impact the research industry, which is hugely. I mean, every single element of it is going to change significantly. And going back to the developer side, it always used to be that people would say you have a 10x developer: of 10 developers, one of them could be a 10x developer, worth the other nine put together.
And one of the things AI, because it's so good at coding, can now do is make everyone a 10x developer. Well, I think what will happen now, whether it's desk research, which is frankly just better done with ChatGPT than with a desk researcher, or designing a questionnaire... Julio, our head of customer insights, just showed me ChatGPT designing a conjoint study. I mean, that's pretty staggering.
So our knowledge of methodology, our knowledge of the market, our ability to analyze data, our ability to chart data, our ability to tell a story with the data in a report, all of those things are going to be done significantly better by an AI either next week or in a few months' time. So those elements of our job are gone. On the other hand, I think one of the massive advantages for our industry is that the ability to use data to make decisions will be 10x-ed.
I think there will be 10 times as much consumer insight work in five years' time as there is now, because you always want consumer insight when you're making a business decision. I mean, we're business people. We want to hear from our clients about which direction we should go all the time. And you'd be crazy not to want that input. It doesn't necessarily make the decision, but you'd always want that input. Well, suddenly, if you've got the ability to have brilliant insight at your fingertips at a 10th of the price and a 10th of the timescale, well, there'll be a huge increase in the volume of insight.
So then you have to say, "Okay, what is the human's role in that? What is the researcher's role?" And I think it is much less about the traditional tasks, the desk research and the analysis and the writing of a report, and much more a role around the curation of the data asset: working out which AI to use in which situation, how to maximize the value of your data, how to connect the dots between different data streams, and how then to inspire people to make the right decision, because even if you show them straightforward data, they don't necessarily do the right thing. And so you are going to have data management and inspirational roles, which are two of the roles we have now, but they're two of the smaller roles, and I think they'll become sort of 90%. That would be my guess.
Ryan:
Yeah, it is crazy. So for those listening, you're probably not going to Quirks London, but Julio is going to get on stage, I think later today or maybe first thing tomorrow-
Steve:
Tomorrow.
Ryan:
... Using some plugins we've created where generative AI writes advanced methodologies with two clicks. I have to tell you something, I find that motivating as hell, and I think it's been holding us back as an industry for 20 years, where we put the methodology on a pedestal. We put the report on a pedestal. And the companies we all work for are trying to sell more potato chips. Right?
So if we're spending more of our energy helping them do that and less worrying about... As you know, I always like to make fun of question seven in the survey. I think that that's a better thing. What are some of the risks you see for the industry, Steve?
Steve:
Well, I'm not sure I see risks for the industry. I only see opportunities for the industry. Do I think it will employ the same number of people as it does now in five years' time? Actually, maybe I... Probably yes. Probably yes. They'll be in different roles.
So I think it's not about a risk for the industry, because I think the industry's future has just maybe become much brighter. There's just going to be a lot more consumer insight. I have that slide that I've done, which is artificial intelligence leads to abundant insight, right? I mean, it does democratize, it demonetizes, it distributes. It's just brilliant, I think, for our industry.
I think the concern is what the individual is doing right now. And there's that Sam Altman quote, I think it's a Sam Altman quote, which is, "Your job won't be taken by an AI. It'll be taken by someone who knows what to do with an AI." And I think that's very true. I think if you are in the industry now, not up to speed with these things, not thinking about how it impacts your career and how you can take advantage of it, then I think you are in a difficult situation. If you are thinking and acting on all the things I just mentioned, then I think you'll be in a very good position to become significantly more important in helping major companies make major decisions.
Ryan:
Yeah, it makes sense. And I think it's about upskilling people. And one of the challenges I have for all of the CMOs or heads of insights listening is a simple question: what are you doing to upskill your people? Because what I see, Steve, Brendon, on a day-to-day basis, is the average insights person still being pushed to do, and then think in the evening when they've put their kids to bed. And what this enables is thinking and impact all day, not doing. Which is great, because I'd much rather play more golf, even though I suck at it. And everyone who's listening knows, me playing more golf just means I'm drinking a few more beers.
So Brendon and I are probably more skeptics than hype guys, I would say. I'll tell you why I'm convinced this is the future, particularly for this industry: it's because we play with data. And this can bring sense to data. This can bring empathy to data. This can bring perspective to data. So it's not as disruptive in all industries, but for me, it's like, "Wow, we better lean into this." If you can ask one question and get what a 30-year statistician would spit out, that's profound.
I'm going to ask you one more curveball, Steve and Brendon, that wasn't on our list. The industry's been suffering with data quality. The enabler of generative AI is what it can do with data. Elephant in the room, we have a data quality issue in the panel space because we've driven the price down. Bots have been a thing. And I've even seen some examples of generative AI looking and feeling like a cat lady from Chicago. How do you combat those risks long-term, guys? And Brendon, I'll start with you.
Brendon:
Well, that's obviously something we've thought quite a bit about. So our conclusion... But I mean, I don't know, is this a safe space to say this?
Ryan:
This is a safe space, Brendon.
Brendon:
... Yeah. Live-streaming.
Ryan:
Yeah. We're live-streaming. I'm sure there's seven people listening. A few of them are texting to give me shit right now.
Brendon:
So our conclusion may evolve. But it was that, at least if we're analyzing the data coming in response to open-end questions, we can't actually tell. Now, there has been a bit of evolution past that point when it comes to being able to detect snippets from OpenAI. I haven't gone deep on that, and I'm skeptical about how difficult that would be to subvert. In other words, AI detection. Because it really is good. I think the one way you can tell it's GPT is it won't say anything super definitive without a little caveat at the end. I don't know if you've seen that.
Ryan:
"I really like this cat food, but I'm generated by a machine."
Brendon:
Yeah. I mean, even if you ask it something that most people would just agree is rubbish, it will say, "Most people have... but however, one should be careful to..." All that stuff. And that's obviously OpenAI taking its role in the safety space very seriously. So maybe you could catch it on that.
But I think for us, the angle is actually behavioral analytics: how does that text get in there? What does the browser look like? What is the actual timing between the mouse events, the keystrokes, and all that? Because, let's just say, I'm glad we're not storing bitcoins; the incentive for answering lots of surveys is not huge. We don't have a vault of $7 billion waiting to be hacked. So we're going to be a bit lower down the target list. And obviously now, GPT-4 has made it very easy to construct answers in response to questions, and actually create a coherent response all the way through.
But then the rest of it, making a browser look like it's human, making mistakes, pressing backspace, because nobody ever types in a bunch of things without making a mistake. All those things are still stuff we can check for. So that is the angle, but I think there's going to be lots of innovation in the space from panels, communities, things like that, ways of verifying this stuff. Steve, I don't know if you've got other ideas there.
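As a rough illustration of the behavioral-analytics angle Brendon describes, a heuristic could flag responses whose keystroke timing is implausibly uniform and that contain no corrections. The function name, event format, and thresholds below are all hypothetical, a sketch of the idea rather than Zappi's actual detection logic:

```python
from statistics import mean, pstdev

def looks_scripted(key_events, min_events=10):
    """Flag a response whose typing rhythm looks machine-generated.

    key_events: list of (timestamp_ms, key) tuples captured in the browser.
    Returns True when inter-keystroke gaps are implausibly uniform and the
    respondent never pressed Backspace. (Hypothetical heuristic.)
    """
    if len(key_events) < min_events:
        return False  # too little signal to judge
    # Gaps between consecutive keystrokes, in milliseconds
    gaps = [b[0] - a[0] for a, b in zip(key_events, key_events[1:])]
    backspaces = sum(1 for _, key in key_events if key == "Backspace")
    # Humans type with jittery gaps and make mistakes; a paste or a bot
    # script tends to produce near-constant gaps and zero corrections.
    uniform = pstdev(gaps) < 0.1 * mean(gaps)
    return uniform and backspaces == 0
```

In practice this would be one weak signal among many (mouse events, browser fingerprint, completion time), combined rather than used alone.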
Steve:
Yeah. I think it's a major industry issue, and actually the industry associations are getting together, and we are heavily involved in some of the work the MRS, the Insights Association in the States, and some of the others like ESOMAR are doing. So we are getting very engaged in that discussion and that debate. I think there are two things, one that we can help with specifically, which is the type of work Brendon is talking about, which is AI detection. The more important area, in my view, is panel composition. And frankly, I think we have to pay more. That money doesn't necessarily have to go on additional incentives, though it probably does to a certain extent. But it has to go on things like maybe a telephone call or a Zoom meeting with the panelists once a year. It has to go on quality measures.
And it has to be across multiple panels which are all coming through different exchanges. I think we are facing the same thing that the marketing industry faced with programmatic.
Ryan:
Yeah, true.
Steve:
And suddenly they all looked at it... The CMO of P&G got up and said, "Hey, guys, I'm not spending any more on programmatic digital until you sort this stuff out." And I think we are at that moment. I think this is an existential risk for us. The good thing is I think a lot of smart people are thinking about it now, and we're helping in some of those initiatives. And the panel companies that we work with are getting together, as collaborators who care about this industry, to solve it at the root cause.
Ryan:
Yeah, I'm with you. I think paying more, going back to getting real people opted in and verified... Novel concept, by the way. And then treating people like people would be useful. So let's assume we do that. All of us have to put some advanced AI detection in place to cover the back side.
The one thing I would add to what you're saying is an opportunity. We begin to know more and more about people, whether it's Steve, or the lookalike model of Steve. And so we can actually combat this with an additional thing that I'd add to the discussion which is, only ask people the questions we need to. Why do we continuously take people through the same shit? We already know who they are, where they shop, what they buy. And in many cases, we probably can start to infer what people like them think about the same damn generic cat food concept over and over again.
And so we can go ask the two questions we need. And I think if we can attack it from all three points, we can make sure the foundation is robust, because generative AI is fantastic. But if the data is shit, the insight will be shit. And primary market research, the data comes from people and their behaviors and their attitudes.
Steve:
Yeah. And I would add to that. I mean, if anyone out there is doing surveys that are more than 10 minutes in length, we know the quality goes downhill. We just know. So you shouldn't be doing it.
Ryan:
But if anybody out there is doing that, please stop. What are you doing? You're ruining it for everybody else. Anyways. So guys, I want to talk about some of the things we are doing, and this isn't about boasting Zappi's capability, but I want this to orient people with ways to think about this, and it will back into some tips for people.
So Brendon, you chair our AI Everywhere Council. We're part of that council within the Sumeru Portfolio Company Group as well. What does that mean? How do you consider your role as the chair, and what are some of the things we're looking at from internal, external, product, go-to-market, so on and so forth? Just take us through the AI Everywhere village and what that looks like.
Brendon:
Okay. So this was Steve's idea.
Ryan:
Good idea, Steve.
Steve:
I get the blame.
Brendon:
No. What we were trying to do there is... Steve was concerned that we wouldn't necessarily think about internal transformation with the same rigor that we would in terms of developing our product. And I think he's right there. We definitely weren't thinking about how we can assist our colleagues in customer success, operations, account management, et cetera.
So we immediately started having a lot of product ideas, and I can talk about those. I think of them slightly differently. The AI Everywhere Council is focused a little bit more on AI inside, in my head, just because there are so many people on the product side looking at it from the what-we-present-to-our-customers side. But I think we'll get there, right?
Ryan:
Absolutely.
Brendon:
Okay. So maybe I'll start with that, because that is very interesting to people in this industry. What we are doing there, there's quite a lot of low-hanging fruit, but there's also some big ideas. So on the low-hanging fruit side, the ability to summarize data and actually give people something more than a chart: we've always been caught between what a consultant can give somebody and how far you can go with automation.
And it really starts with a simple thing of just looking at what our users do with our charts. We may think our charts give you a decent enough guide on what business decision to take, but we don't have in the headline, "Gummy fruits: raspberry is a clear leader in the pack, and we therefore recommend you proceed with this idea." We just say, "This is the overview chart of how they compare."
We've actually got that into production, so now we can bring in that context and actually provide some of this... It doesn't look special; I think this is the thing. For a human to do this chart, you'd be like, "Okay, that's good." But when the engineers look at it, they go, "Okay, well, there are so many different varieties of how you would construct this sentence."
But then the big idea is, how far can you actually go? Can you get a huge report? Not huge, the right-size report. And that actually brings in another aspect, which is GPT-4's remarkable ability to summarize the important stuff. So you can give it 15 pages and ask it for one, and it'll generally pull out something where you would say, "Well, that is the main thrust of this, and what it threw away was the right amount to throw away."
So this is an area where we are experimenting where we are really just trying to see, how close can we get to the report that you would pay thousands of dollars to a consultant to create, that includes the charts and includes the introduction, headline, summary, paragraphs, the conclusions, everything? So that's a very, very exciting part. And it's something we're actively working on now.
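The report experiment Brendon describes can be pictured as a prompting step over the chart summaries a study already produces. The sketch below only assembles the prompt; the model call is left as a comment, since client setup is environment-specific. The function name, study text, and prompt wording are all invented for illustration, not Zappi's implementation:

```python
def build_report_prompt(study_name, chart_summaries, max_pages=1):
    """Assemble a prompt asking an LLM to condense raw chart summaries
    into a short, decision-oriented report.

    chart_summaries: list of plain-text descriptions, one per chart.
    """
    # Number the charts so the model can reference them in its answer
    body = "\n\n".join(f"Chart {i + 1}: {text}"
                       for i, text in enumerate(chart_summaries))
    return (
        f"You are writing a consumer-insights report for the study "
        f"'{study_name}'.\n"
        f"Condense the findings below into at most {max_pages} page(s).\n"
        "Lead with a one-sentence headline recommendation, then the two or "
        "three findings that support it. Discard everything else.\n\n"
        f"{body}"
    )

# The actual model call depends on your client setup; roughly:
# response = client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user",
#                "content": build_report_prompt("Gummy Fruits", summaries)}],
# )
```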
Ryan:
Brendon's... He's naturally humble. So I'll pump his tires a little bit. What he just said, he took an afternoon off and went to a coffee shop and replicated something that is more robust than I think a lot of the outputs and deliverables I've seen go to CMOs, which is insane, in an afternoon. So you can imagine what six months of work can do in terms of capability.
Another example: earnings reports. You can upload an earnings report from any company and say, "Spit out the top two things that matter," and it can do that really succinctly. Just think of how much information is amassed in that deep technical report, and you can codify it down.
Brendon:
I think a very key thing is... Maybe to extend what you and Steve were saying earlier around having hundreds of insights, it's actually the ability to drop it down to "What am I actually looking for?" that makes everything so accessible. Because being overloaded with information is almost as bad as having no information. I think this is a big thing.
Steve:
It's the old classic researcher client quote of the researcher delivering a 50-page report to a client and the client saying, "Oh, can you just make it a three-page report?" And the person saying, "No, I didn't have the time."
Ryan:
Isn't that a Mark Twain quote? "I would've written less, but I didn't have the time." I love that.
Steve:
Yeah. Yeah. Yeah.
Brendon:
It is.
Ryan:
It is. I was just going to say, Steve, people, all of us, naturally drift to complexity, and so this helps keep us more succinct. I mean, we naturally want to make things... I was thinking about this this weekend. I had a really productive internal workshop two weeks ago. We left with a beautifully succinct one-page canvas.
And then it turned into a 20-page Notion guide, and I was like, "What the fuck just happened here?" But it's a natural human instinct to be like, "Well, let me explain all the permutations of this." And it's almost like, "Can you take that Notion page and... Oh, by the way, Notion has a ChatGPT plugin. Can you just summarize this for people who don't have time to read all 70,000 pages? That'd be great."
And I've seen this happen. I'll share a quick story. So we had our offsite... Our leadership team met in Cape Town a few weeks ago. And Brendon and I had this wonderful idea: we'll fly everybody to Cape Town during the summer and we'll sit in the sun. Turns out, we sat in the rain, but what are you going to do?
But we went on with this exercise. We were supposed to come back with ideas for an evolved value proposition, and we spent two hours on, I think, the important work. What are the problems? What are the frictions that our customers have? Basically doing deep discovery of customer jobs to be done. And Scott, who runs our rev ops team, said, "Oh, shit, we're out of time. We have to go back into the main group." And he goes, "I know what I'll do. I'll just upload all of this and say, 'Can you write a value proposition?'"
And so he did that. And then he said, "Can you make it a little bit less British? Can you make it a little more concise? Can you make it a little bit more punchy?" As a company, we speak US English and we're pretty direct. And so it got to a degree of tone that I was blown away. And it was cool, because we spent our time in the meeting talking about the deep consumer understanding, and then had a basis point to jump off of from where we came through, which was pretty cool.
So what else within research software are you looking at? So we talked about reports. We've talked about some internal productivity. What are some other areas, Brendon, that we're looking at?
Brendon:
Well, there are also areas we've thought of that we're dabbling in. We were looking to see, is there a degree of simulated respondent that you could play with? I think the jury's out on that. At least, there's certain stuff which I think you might want to offer at a very low price point, or free, as a sort of early-stage panel, just to almost get the ideas flowing. So one of the things our data science team did was they created a virtual panel.
And they created multiple personalities out of real aggregates in our database: who this person is, what their job is, where they live, those types of things. And then pitched concepts to them in quite a focus-group way. So though we don't do any focus-group stuff, maybe it could just be a side thing that we could add in with some extra value.
And when they first looked at it, they were blown away by it. But I think when we start digging a bit deeper, you think, "Okay, it is having to make stuff up here, because the product doesn't exist. The respondents don't exist. What are we doing?" But what it does do is, it gets you thinking because it's almost like having... It's like going to therapy.
So you've got a couple of ideas and you've got this AI asking you various questions and telling you what they like and how they can work together to create a sort of composite product that meets all these needs. And this is not something you would want to make a final decision on, but I think something to use in private and then claim all credit yourself, potentially.
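A virtual-panel experiment like the one Brendon describes might start from role-playing prompts along these lines. The persona fields, function name, and wording are invented for illustration, not the data science team's actual approach:

```python
def persona_prompt(persona, concept):
    """Turn an aggregate respondent profile into a system prompt for a
    simulated focus-group participant, plus a user message pitching a concept.

    persona: dict of profile fields, e.g. built from panel aggregates.
    Returns (system_message, user_message) to send to an LLM.
    """
    profile = ", ".join(f"{k}: {v}" for k, v in persona.items())
    system = (
        "You are role-playing a single survey respondent with this "
        f"profile ({profile}). Answer in the first person, in character, "
        "and stay consistent with the profile."
    )
    user = (
        f"We are considering this product idea: {concept}\n"
        "What is your honest first reaction, and what would make you buy it?"
    )
    return system, user
```

As Brendon notes, output like this is for getting ideas flowing, not for making a final decision: the "respondent" is making things up by construction.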
Ryan:
There is a really weird credibility thing. I gave a speech in Chicago a couple weeks ago. And I was like, "Let me see if I can get the audience to lean in." So I just said, "What do CMOs want from insights people?" And it wrote a list of bullets. And I've never seen more people lean in. Take pictures and shit. I could have been the industry's czar and they wouldn't have leaned in anywhere near as close to like, "Well, the robots said it's true, and this is legit." And I can't remember how many people took a photo of that fucking slide. But it was great.
But I think some of them, whether it's pre-respondent vetting of ideas to inspire thinking... Or I mean, we did a hackathon where we're doing a lot more with conversational surveys. That helps the data quality problem. I think knowing what people do and think, how they respond, can actually have a much more meaningful impact on a dialogue we can have, which excites me.
So Brendon, think for a sec about internal tooling and productivity. So we've talked a bit about some of the stuff that we're innovating on. But what are some of the areas I think, across the business, that we're doing in terms of giving our employees an advantage of getting access to tools, but also helping them do their jobs better?
Brendon:
Yeah. Well, I would be remiss as a CTO not to mention GitHub Copilot.
Ryan:
For the people who aren't technical, quick explanation.
Brendon:
Yeah. I'll go there.
Ryan:
Thank you.
Brendon:
A quick explanation. So this was a big surprise to me as well, like the creativity thing: GPT-4, GPT-3, all of them, they're all language models. And in retrospect, it isn't surprising. What we do as engineers, telling computers what to do, is a language. It's not the same as English or French or whatever. It's much more structured and less forgiving. But it is a language.
And as a result, there are various things. I think Codex was the first OpenAI coding model, and it could generate some snippets. ChatGPT can do pretty well. But for those of us that have been playing with GPT-4, it can write whole programs pretty well, generally under 500 to 1,000 lines. So we've got this across the business now, and what Copilot does is it plugs into the tool we use to write code, and it just suggests the rest. And it can use code comments, which are really just for other programmers, what we would write so that our future selves know, "What was I thinking here?"
So usually you write, "Well, I'm going to connect to this panel and pull down some stuff in order to do this." And you write that so that you know why this next bit exists. And now it just sort of pops up in gray and says, "Maybe you'd like to write this function and have this code." And it's right, quite often. There are quite a few skeptics out there. But some of us have gone to GPT-4, and the jump between Copilot and GPT-4 is incredible.
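To make the comment-first workflow concrete: the engineer writes only the comment, and a Copilot-style assistant proposes the function underneath it. The suggestion below is reproduced by hand as a plausible example of what such a tool typically offers, not actual Copilot output:

```python
# The engineer types only this comment; the assistant suggests the rest.

# Compute the top-two-box score for a list of 1-5 scale ratings:
# the share of respondents answering 4 or 5.
def top_two_box(ratings):
    if not ratings:
        return 0.0  # avoid dividing by zero on an empty sample
    favourable = sum(1 for r in ratings if r >= 4)
    return favourable / len(ratings)
```

This is also where Brendon's "you know what good looks like" point bites: the suggestion is usually close, but the engineer still has to spot the small mistakes (an off-by-one on the scale cutoff, a missing empty-list guard) before accepting it.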
And then of course, for anyone that hasn't watched Microsoft's demo of where they're taking this whole... How far they're going to take Chat in their products, the whole Office suite, the entire Microsoft offering really, you get an idea of where you can go with internal productivity.
Ryan:
Yeah. You have to watch the video, everybody. I'll put it in the link below.
Brendon:
Watch that video.
Ryan:
If you haven't watched it yet, welcome to the world today. Please watch that video. It's a good investment in your future. 36 minutes.
Brendon:
Yeah, I would say that that video... There are three tech demos of all time: the first one was in 1968, the second one was Steve Jobs and the iPhone, and this one is of that caliber. I think the one criticism you can have is, okay, well, nobody's actually got it yet.
So how much of that was scripted, and how many takes did they have to do? But for those of us internally, to be able to pull stuff from Salesforce, pull stuff from the platform, create presentations, get tables that support the narrative, go from PowerPoint to email: just being able to transform the context of what you're doing. And this is what we're doing in marketing now as well, multi-channel messaging. You can go from a LinkedIn post to various other things: you can turn it into a tweet, you can turn it into an email. You could potentially do account-based tweaks to those things. And previously, we'd have to type all that in, and now a lot of that can be automated.
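The multi-channel transformation Brendon mentions boils down to a rewrite prompt per channel. A minimal sketch, with the channel rules, function name, and account tweak all invented for illustration:

```python
# Per-channel style rules the model is asked to follow (illustrative)
CHANNEL_RULES = {
    "tweet": "under 280 characters, punchy, no salutation",
    "email": "short subject line, greeting, two brief paragraphs, sign-off",
}

def rechannel_prompt(source_text, channel, account=None):
    """Build a prompt asking an LLM to rewrite one piece of content for
    another channel, optionally tailored to a named account."""
    rules = CHANNEL_RULES[channel]
    tweak = f" Tailor the angle for the account '{account}'." if account else ""
    return (
        f"Rewrite the following for {channel} ({rules}).{tweak}\n\n"
        f"{source_text}"
    )
```

One source post then fans out to as many channel variants as there are entries in the rules table, which is the "type it all in once" saving Brendon is pointing at.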
Ryan:
Brendon, I have a question for you. So you said you've gone through your hype continuum, from hype to nerves to excitement. I ask you this because as an engineer, you have a technical set of skills, and many of our insights colleagues listening also came up with a technical set of skills. Knowing what Copilot can do in terms of writing code, QA-ing code, and giving you alternative paths, why do you feel comfortable, as somebody who makes his living engineering things?
And I'm asking you the question because I think it will relate to why our insights colleagues should see this as an opportunity. And I don't ask you to bullshit, but what are some of the reasons you're like, "Oh, this is great?"
Brendon:
So I have a slight nuance to Steve's comment that everyone can become a 10x engineer. Because what I've noticed is that it makes small little mistakes, and you can wield AI more effectively the more you know. So what I'm seeing early signs of is that you may turn a semi-engaged careerist engineer from a 1x to a 10x. But the 10x becomes 100x.
Because when it fails, or when the pieces it's generating don't fit nicely together, you know what to do, and you know what good looks like. So you can keep interrogating it and mold it. It's like being a sculptor who's lost his hands and is directing someone else: knowing what good looks like is still important. Having taste. And I think that's still a thing across all industries; having taste, or the ability to appreciate what good looks like, is important.
So that's one thing. But then I think for other companies as well: Elliot, who's one of our PhDs, he and I both took Easter off and sat by the keyboard running our own experiments, trying to prove one or two things. We both came back saying, "Okay, we're on the other side of the hype curve. It's still amazing. But you do need the data, so we still have a future."
So that was the other thing. I mentioned that this technology dramatically reduces time-to-market, and we've seen some incredible things on the web where people have created amazing things in extremely short periods of time, but there's no way for them to defend themselves against someone else who does exactly the same thing. Everybody just got a lot more competent, with the ability to create stuff in a fraction of the time. So you're still stuck with that age-old business problem of, "Well, why you?" And I think people who have unique data can sleep well at night.
Ryan:
It's an interesting point, right, guys? Because we're talking about this as a great revolution in technology, but in many ways it's soon to become a commodity, which means you have to be able to innovate on top of it. It's transformative technology, but everyone has access to it, for 20 bucks a month. And your point about the context you have resonates with me.
So I chair our diversity committee, and I've been doing discovery for four months, and I said, "Let me try to write..." And for those of you who know me personally, I suck at writing. So I was like, "Let me put all my discovery notes in and see what it spits out." And it got me 80% of the way there. Now, that 80% would have been an entire day sitting in my office, stumbling over grammatical errors. But that extra 20% was me bringing in our context, our nuance, our people. And I think that's important.
So Steve, one of the things we want to do is help the corporate community that doesn't sell software but sells cheeseburgers or toilet paper or whiskey or whatever the hell they sell, get out in front of this. I've heard from a few progressive chief insights officers that they're, similar to Brendon, chairing an internal committee. What are some pieces of advice you have for the CMO or the head of insights listening on what they can do to start getting their organization ready for this? And if they're not ready, obviously we already know they're behind. But what's some advice you have, or some specific actions people can take?
Steve:
So I'd say two core ones. The first one is to start thinking about your data, and I'm just going back to what Brendon said: at the end of this, to have any sort of competitive advantage, you have to manage your data carefully, and you have to think about your broad data strategy. Frankly, CMOs have been thinking about this for years, but it has become way more intense and way more important right now than it's ever been before. So I would say the first thing is to have a very clear, aligned data strategy.
And that includes data integration and data collection. None of your data can sit in PowerPoint; it has to sit in a database, and you have to manage it very carefully. So that's the first thing.
Then I would say, certainly at this stage, I don't think you necessarily have to make a massive change in your direction yet, as a CMO of a large CPG company for instance. But you do need to make your people understand what's going on and get intrigued and engaged. One of the things we've done with AI Everywhere is get everyone involved. We had that exercise with the leadership team, where everyone had to come and say, "This is how my job's going to change. I'm going to do some homework. I'm going to look at AI in my space and start thinking about what's happening."
So just make everyone go: "This is revolutionary. There's no question about it. Now everyone in my organization, I want you to think about how it'll change your job." Simply doing that will get people engaged, looking at it and reading around the subject. And over the next three, six, 12, 24 months, you will be able to develop your strategies, and people will be pushing for them because they're engaged and they're thinking about it.
But if you try to ignore it or tell them what to do, I think it's too early. I think you just have to get them using it. So if you've got people in your organization not using ChatGPT and not reading about how it's being used and other similar AIs within their space, then you've got a problem. So if you get the people engaged and you start working on your data strategy, I think those are the two key things right now.
Ryan:
And it's a simple thing. I mean, just for some inside baseball, we are doing this, and all Brendon did was send an all-company email saying, "Buy ChatGPT and expense it, and here are some InfoSec requirements you have to be mindful of so that we don't replicate Samsung." Sorry, Samsung, if you are listening. Love you. But a little bit of safety is really all you need. And I think we've seen pretty serious adoption since, as evidenced by the expense bills coming in.
So Steve did this exercise. He asked everybody, whether you run marketing, engineering, or product, what it means. And it was fascinating to see how it impacts sales as much as it impacts all of the stuff you've heard Brendon talk about.
There's another thing, Steve. I mean, if I look at a company like ours, we spend a lot of money on external technology, and you could probably 50x that if you look at somebody like PepsiCo. So I think another thing to do is put it to your suppliers: what are they bringing to the table that will help you make more sense of your data, your community, your customer base? A lot of tools are launching things now. We use Notion for internal knowledge management, and there are ChatGPT plugins. You can literally say, "Make this smarter, link it to this page, summarize the two." There's a lot you can do. So I would look at your stack holistically.
And I think Steve's point is just get it in the air. Get people playing with it. Steve, one of the things I think we should talk about a little bit is this: getting it in the air helps them understand AI, but what are we doing to help them develop toward the job we need them to do? You and I talk to a lot of heads of insights, and there's a big from-to that needs to happen between the job somebody's doing and the job their boss wants them to be doing.
Do you have any thoughts about that? On some of the upskilling and the skills we need to be developing in people beyond the familiarity with the tooling?
Steve:
Not a lot of detail yet. I mean, let's be fair, it's really early, right? ChatGPT launched at the end of November, so we're only a few months into this. What we know is that, without question, it's revolutionary. Without question it's going to change things. What we don't know is exactly how. I mean, we've got some hypotheses, and I've talked about how I think it'll change client organizations. But at the moment, I think the most important thing is just getting used to it and starting to imagine how it would change your role.
I mean, if you're a large organization, you will have a lot of people doing basic desk analysis, right? Desk research. Well, that should now go. So how are you going to repurpose those people? The best way of finding out is to get them doing their desk research on GPT, learning its strengths and weaknesses, and then saying, "Okay, what can we do?" Maybe there's just more demand for desk research: suddenly, instead of taking two weeks, it takes two hours, and so demand spikes. So you need to be running those experiments with the tooling that's available and see how things pan out over the next six to 12 months. We believe in experiments. I just don't think anyone has the answer. It's a series of trial-and-error experiments to massively improve what we're doing now and see how that changes the future.
Ryan:
And Moore's Law is never going to be more true than it is here. What we saw in the last six weeks in terms of development is nothing short of exponential. So buckle up is probably a piece of advice as well. I'll tell you all a funny story, and this does not mean, for anybody listening, that I'm considering a career change. It's well documented that when my time in software's over, I'm going to start a deli and make sandwiches for people. So I was screwing around the other night and I said, "I live in the metro west of Massachusetts. I want to start a deli that serves delicious sandwiches and craft beer. Can you write me a business plan, recommend vendors, and tell me how much capital I'm going to need to outlay?"
Can you imagine how much time that would've taken me to do by myself? How many phone calls, how many meetings? I literally ended up with something I could have sold my wife on. Quite literally, Jill would've been like, "Run with it. Let's go." And that's just the enabler, right? That's how much time was saved.
We have one question, I assume this is a Brendon question from our chat. Will AI be developed to be sentient?
Brendon:
So the first thing is that nobody really knows what that means. There's a really interesting talk by a Microsoft researcher, an ex-academic named Sébastien Bubeck, who wrote a research paper on their findings from GPT-4. Part of the problem is we don't quite have a super science-y way of defining this stuff yet, so he actually turned to psychology to try and ascertain the limits of its thinking. He came up with a couple of definitions of intelligence in general.
Can it reason? Yes, it can reason very well. Can it solve problems? Yes, it can solve problems step by step. Can it think abstractly? Yes, it can. Can it comprehend complex ideas? Yes. Can it learn? Sort of; we can talk about that in a bit. Can it plan, can it come up with its own plans on how to achieve things? Not really. It's also missing other things: it doesn't really have its own set of emotions, which in us are there to guide behavior. They almost create goals.
So there's been a lot of emergent thinking around using language models to achieve goals: give them tools, give them a goal, and they can work out how to do it, through a series of asking Google, looking at this thing, placing an order on Uber, whatever. So there's a lot of talk around that, but it's not going to do it by itself. It's not going to go, "I'm feeling lonely today. I'm going to send a message to Ryan Barry."
But I think there are a couple of open questions, like the learning. One of the things about ChatGPT is that it can learn over the lifecycle of a chat, but once the chat's gone, it's gone. So there's an open question of how we take it further, how we can teach it more things. And the other thing that will come out of the debate around AI safety is: should we try to give it emotions? How much tooling should we connect it up to?
So I think the debate is going to be around semantics, and it's not going to be necessarily an exact moment. It's probably something we'll look back on and say, "It was around then. That was when it happened." I do believe it will happen. I don't see why it won't. I don't know how long. It could be two years. It could be 20 years. There could be another AI winter before we get there. So a big area of debate.
Ryan:
All I know is I'm excited to see what happens. This next 18 months, we should see some wonderful innovations, a step change in the velocity of companies, and we need it. It's tough out there, everybody. I can't thank you enough for tuning in live. For those of you tuning in after, I hope you enjoyed the episode. Steve, Brendon, thank you guys for taking the time to join the podcast today. I really appreciate it. It was fun to hang with you guys. We actually riff like this often. The truth is, we're just more well-behaved when we have you all listening because we talk over each other more when we're chatting about this naturally.
So some parting advice to frame this discussion, get your teams experimenting and thinking about what this means for them and their roles. Evaluate your stack and your data, your offer, and the talent of your people, and then watch what happens. It's a time to stay nimble and it's a time to stay well-read, because things are changing rapidly.
For those of you who are based in London and you found this topic interesting, Steve and I are going to be hosting workshops in London to help insights departments think through AI in their world the week of June 5th. If you're interested, please send me an email, ryan@zappistore.com. Space will be limited, but it will be fun.
Our next episode, we have two episodes left in season six, I can't believe it, is with Oksana Sobol, who's the Insights Lead from the Clorox company. It's a really, really good episode. And then our grand finale is with my dear friend, Matt Cahill, who runs primary insights for McDonald's US. So we got some heat left in season six, and we're already ready for season seven. So thank you all for listening.
Kelsey, thanks as always for putting us in new places. This is our second time doing a live podcast, and it was a lot smoother than the first time. Another case for experimenting. Have a good day.
Be well, everybody. Thanks, folks.