105 | Fear and Optimism: Charting your course to generative AI
62 min listen
Generative AI is here to stay. So how can B2B marketers harness the opportunity at hand?
This week on the podcast we have a round-up from Twogether and Foundry’s recent event, looking at how to develop an actionable framework for Generative AI initiatives.
We're joined by Susi O’Niell, Head of Brand Content at Kaspersky, and Matt Egan, Global Editorial Director at Foundry, to go into some of the key discussion points, including:
- The general feeling around AI in B2B marketing
- How AI is changing content creation and consumption
- A run-through of the OpenAI saga with Sam Altman
Tune in now:
View the full transcript here
Jon Busby: So welcome to another episode of the Tech Marketing Podcast. I'm joined by two fantastic guests from our recent Foundry event: Susi O'Niell, Head of Brand Content at Kaspersky. Susi, welcome to the podcast. It's great.
Susi O'Niell: I'm delighted to be here on this beautiful Friday.
Jon Busby: A grey Friday in London. And Matt Egan, Global Editorial Director at Foundry. Welcome, Matt.
Matt Egan: Thanks so much for inviting me along.
Jon Busby: So we hosted this fantastic event a couple of weeks ago now, on AI in marketing. Susie, as a content creator, what stood out for you about how people were talking about artificial intelligence in B2B marketing?
Susi O'Niell: Interestingly, in our roundtable, we got a bit frank and really put it on the table.
Who's doing this? Who's planning it? And I think what I came out of it with was this sense of fear and optimism at the same time, which are two really strange and conflicting emotions. So in our particular group, although we were all experienced tech marketers, mainly operating in the B2B space, a lot of us weren't doing anything, and we weren't necessarily being encouraged by our management to do it.
And that's a slightly scary position, given how many companies are starting to use Gen AI, set up boards, policies, et cetera. And we feel that we're moving behind the curve. So is it our role to do it, to initiate it? So there was a sense of fear that we as individuals are falling behind because our companies aren't progressing it, or we as individuals aren't empowered to do it, because, remember, it's not just a marketing thing.
You can't just say, okay, we're going to start using ChatGPT or various tools. There might be policy implications there at the business level, and the security and IT level needs to be considered. But another interesting discussion we had was that some of us in the room remember the dot-com boom, and I started my career there; others didn't, and that's fine.
But someone asked, was it like this during the dot-com boom, that sense? And I said, no, it was much more optimistic. Now, maybe because I was 22, 23, it was my first job and I was very excited to be building websites using HTML and Dreamweaver. It was exciting for me. But actually there felt like more optimism.
We've got this brave new world. We can create things and connect with communities. And that web 1.0 revolution was super exciting. Whereas now there seems to be more reluctance and fear. Are we going to lose our jobs? Is our craft going to be diminished as creative marketeers? So for me, it's that strange dichotomy.
It should feel like a brave new world. And I think for the tech optimists, it is an exciting sandbox, and they want to make and create. But then I think for those maybe a little more experienced, or those on the business side, there's a lot of fear about what this means. And also for myself, as more of a creative-led director, I do have a fear that we're going to lose craft skills through Gen AI content creation specifically.
Matt Egan: Not that anyone asked me, but I'm going to wade in here, because you made an excellent point with that analogy. And I also, sadly, am old enough to remember the transition from print to online, from a journalistic perspective, and I can tell you there was a lot of fear and loathing. There was a fear of losing craft, and we have lost those crafts, right?
When I first started working for Foundry, or IDG Communications as it was, 20 years and one week ago, my job was all about print layout in a magazine. The job didn't exist within three months. The magazine didn't exist within 18 months. And here we are, right? It's been a wild ride ever since. But you make a really interesting point, because I do recall more optimism.
And I feel like with the marketing event, what I really noticed was, and it's self-fulfilling because everybody was there for an AI and marketing event, but everyone was really conscious of and focused on this thing. Everyone's worried about what it's going to mean for their existing channels and processes, right?
There's a clear opportunity to make things more efficient, but that's why people talk about the loss of craft, the loss of headcount, quite frankly. And I slightly worry, and certainly the conversations I had seemed to open this up a little bit, but I slightly worry that's also slightly missing the point. You referenced web 1.0; the web 2.0 era that we're in now has been around for a long time, actually, in this space, and so people are very comfortable with channels and processes, ways of reaching audiences and customers. And they are likely to change, and I think you need to fear that if you're not adding value to the chain, if you're not in any way adding insight for your audience and your customer.
But any kind of paradigm shift, any kind of inflection point, offers opportunity as well as risk. And I think you made a really good distinction between the optimists and the rest of us, maybe, in that not knowing what the future looks like is legitimately a pretty scary thing, really.
Harry Radcliffe: If you are adding value within the channel, is it not possible that the value you add is still replaceable by artificial intelligence? Particularly if it's the communication of products and things like that.
Matt Egan: When I say value, what I mean is in terms of insights, right? So let me give you a self-serving example.
Okay. So the organization I work for, Foundry: we have editorial teams in 16 countries. We're publishing brands in all of those countries, and those brands are online and offline, but principally, we sometimes, and we shouldn't do this, but we sometimes say our brands when we mean our publications, right?
Our websites. So there's a challenge. There's a disruptor there. It's happening, right? Search is declining. People are using websites less. That's a challenge for us, but actually it's an opportunity, because the insight, the unique thing we do, is this: all day long, we talk to 50 million IT buyers around the world, right?
We have experienced journalists who are reporters, right? They're not channel experts. I would not expect them to understand how to innovate in getting their information to the audience. I don't need them to. I need them to talk to CIOs and IT directors and CEOs, understand their problems, understand what they're seeing, and report those insights.
The current channel for people getting to those insights is, like anybody's, maybe 60 percent of the audience coming through search, right? At this point. And that means every day we wake up fighting for their attention. We will have, shall we say in the US this month, a million people opening emails, right?
So that's cool. That's a million people who are subscribing to our email newsletters. But even then, with that continuous re-engagement, the idea that those million people are finding exactly the piece of information they want at exactly the time they want it is pretty optimistic, right? Whereas I feel that as long as we're creating those insights that we know are valuable to those users, and, again being frank, as long as we're able to listen to the feedback, and if we're not creating valuable insights, change and adapt. The unique thing we're doing, to your question, is those conversations, finding those insights from IT users and buyers.
As long as we're doing that, the channels can change. But we're still offering something that's unique. And that's what gives me confidence. Sorry, Suze.
Susi O'Niell: But perhaps with Gen AI, though, you've got those brilliant journalists and insights. And the experts that they've got connections to are what we really want.
We want their insights as well. But then we can start to use some of these tools for doing format shifting. And for me, I always think there are all these Gen AI tools we're probably using in the background. You're probably using Descript (other tools are available) to edit this podcast. I love Grammarly, I do a lot of copy work, and there are great tools out there for cropping or reformatting pictures.
And then, AI has been in the system forever. It's just becoming a buzzword now. So perhaps there are very clever ways that we can take all that great insight and use the tools. But ultimately, without people who understand the craft, who understand how to analyze data, but who also have what I call creative intelligence, that knowing this is cool and interesting, I don't always need an algorithm to create it in the first place. Then we can take that great journalism, but ultimately there will be craft lost in the process. Because, as you say, Matt, that's just, I agree, the evolution of any technology, isn't it? From the Luddites on.
Matt Egan: I was excellent at laying out pages in magazines.
No one wants me to do that anymore.
Susi O'Niell: There are people who still do that for a living. There are. But their skill is not highly valued anymore, unfortunately, because the audience trends have changed.
Matt Egan: Yeah. And I think there always will be, right? I, again, I don't want to be disparaging of magazines.
I love reading magazines, but they're not my principal source of getting information. I think you're exactly right, Susie. You still need someone to understand what the unique piece of insight is here and to promote it, and that someone might be the user. That's what's really exciting about generative AI, I think: you can create language models of information that you know is valuable and unique.
And then you can allow the user access to it in their own way. Or the other way around, which I know is something that's more prevalent today in the marketing world and something we're certainly looking at a lot: you can take what you know about that user and curate and personalize information for them, to make it more relevant and interesting.
There still always has to be the space, though. I think about things like the way my kids acquire music. They're quite young, but it's basically: the thing they heard and liked, they'll be promoted something that's a bit like it, and they'll listen to it. And it's very different to little Matt going to Our Price records on a Saturday and finding, yes, a vinyl LP that had the right kind of design.
And maybe I would or wouldn't like it. That's a different mode of discovery. So there's definitely a concern about homogenization here. But I think if our goal is to provide insight to an audience, the opportunity to let the audience source that insight from our content and from our insight is pretty exciting.
Really.
Jon Busby: We know vinyl is making a comeback, though. I do. Yeah. Magazines could make a comeback, you never know. Those skills might yet be needed. It never went away.
Matt Egan: It's like podcasts. It's like email, right? The channel is not irrelevant. The channel is important only in terms of the mode in which the user wants to consume the information, and we're all in different modes all the time, right?
When we're listening to this podcast, yes, we're getting information, but hopefully we're being slightly entertained as well, right? We're not googling for an answer.
Jon Busby: I want to come back to your point around fear of losing craft, Susie, because I think the fear and optimism argument is going to be central to everything we're going to talk about today.
Do you feel there are going to be some crafts that are going to drop off, like the magazines Matt mentioned? And do you feel there are going to be some new crafts that are going to come up?
That's the obvious question, but what does that balance look like moving forward?
Susi O'Niell: That's the million-dollar question. If we really knew, we'd probably be off running a really successful Gen AI company. I've heard this from a few different sides, and I've also heard it from coders, because often we just think we use it for copy, or we use it for images, but I've also heard it from coders.
And they talk about this idea that it becomes like a junior: what I would call a junior copywriter or a junior designer who's coming in at their first or second job. It will deliver you an output that's a bit like theirs. But what you would do in a creative agency environment is sit down with that junior and go, that's a great starting point, why don't you change this or that, and work with the more experienced person next to you, and we'll craft it and make it brilliant together. Or it could be, I use the analogy that I started my first job as a student typing up letters from audio transcription. And it was a pointless job, because I was typing deeply technical documents I didn't understand, as a 19-year-old, into WordPerfect. And then it would come back with red pen going, no, that word's wrong, that was wrong. And nowadays that job's gone, because people have learned to type. A very simple innovation: get the professionals to type, and you don't need secretaries.
Everything goes through a cycle, and it could be that the new students coming out now will have more skills in how to use AI to support their work, and they will be doing the crafting and the prompting and coming up with it. But there'll be a period, a sort of saggy bit in the middle, where the students won't have the skills, because they're not learning them at university now, and there'll be this very strong drive to optimization and cost reduction.
And for me, that isn't what is exciting about Gen AI. It's a real misnomer. And I think it even came out of your research that some of the audiences are saying they want to use it for customer support, innovation, getting ahead of the competition. There's nothing there about saving money; that was way down the list as a lesser point.
It might be that the bosses say you want to invest in Gen AI tools, but they are expensive to run, and it's expensive to train people. You're not going to get an instant return. It really is about innovation and getting yourself ahead of your competition in a year's or two years' time.
Matt Egan: The pressure today, the thing you're not going to get fired for, is doing the current thing more quickly and more cheaply. But I completely agree. The research is really interesting, because I think in the same cohort, like you say, it was maybe 48 percent talking about productivity, but then the other categories around innovation and doing new things were all at similar numbers, right? So basically people are thinking about innovation, but they're also under pressure to do the current thing more efficiently. And your analogy of writing code, of development, is the perfect analogy, right? Because my roommate in my first year at university went on to be a coder.
And I remember him saying that in his first week working in business, he realized the skills he'd learned at university he no longer needed, because, in those days, quite frankly, he was copying and pasting a lot of code. The skill he needed was understanding, legitimately, from within his own organization, the ultimate goal of the piece of software, and how to shape it and make it better. And he's had a terrific career doing that, very innovative, but he didn't write a lot of code. And the fact is now, to your point, Susie, where previously that might have been a junior person doing it, it can be done by machine, imperfectly. Let me talk about what I do understand, which is content creation.
And at this point in time, it's changed a lot very recently because of AI. But we employ very experienced people, 30-year veteran journalists, and part of their work will be copy editing. It will be uploading a Word document to a content management system, sourcing an image. All of these things can be automated, and should be automated.
Not to get rid of that person, but because that person's time is used far more valuably talking to an end user, creating a piece of content, and extracting the insights from it.
Harry Radcliffe: It seems like AI is most threatening to the people who are going to be most familiar with it, which is the upcoming generation. They're going to be much better at, let's say, writing prompts; they've probably been using it all throughout uni, from what I'm seeing. But they're also the juniors that you're going to bung in and say, hey, could you type this up? And now they're not going to have a place.
Matt Egan: There is a value in that junior person.
My first job working in newspapers was editing the TV listings, right? That isn't a job anymore, largely because no one cares about TV listings and everyone's TV has got the listings on it. I was paid a pittance to do that for a year, but I learned a huge amount about how newspapers are put together.
But I agree. And that's something we as organizations need to be really conscious of, because there is a risk with all digital innovation that you end up with a smaller group of more experienced people. And quite frankly, you're not going to innovate your way forward if you have that, right? It's a cliche, but you do need fresh blood, and you need people of all different ages and views and outlooks.
You need diversity in every organization, or you're going to end up just agreeing with each other.
Jon Busby: The analogy that I keep hearing, Susie, on the junior piece, is that it's not like having one Einstein, it's like having a thousand interns. So it's essentially like having monkeys at keyboards: eventually they'll be right.
Susi O'Niell: That's a bit insulting to interns, I think.
Matt Egan: Yeah, maybe.
Jon Busby: Some interns. But I think that summarizes it quite well. On the coding side, I've had a similar career path to your roommate. I had nothing to do with development and ended up being a software developer. I did most of my copying and pasting from Stack Overflow.
There you go. A site that's seen a massive decline with Gen AI. But we're seeing it can be dangerous as well. And coming back to your point on the craft, Susie: for those junior people to rely on it means they don't learn that craft. How do you think marketing organizations can set themselves up to bridge that saggy bit in the middle that you mentioned?
How can we start solving this? We know we need to move to using Gen AI, we haven't quite developed the skills on how to write the prompts, and we have these juniors coming in. Don't point at me and say "junior", brother. What might be the process that someone needs to go through to bridge that gap?
Susi O'Niell: There's some of what I call the boring but important stuff. In the podcast that I present, Insight Story, we talked to Karen Quinn from Fnatic. I am so impressed with their program, and with her as an individual, because they really innovated: they looked at it straight on and said, this is coming, how do we do it properly? Now, they're a regulated financial services organization. They have a board that consists of people from marketing, legal, finance, and tech, so they're really thinking about it. They're training people. They're having learning weeks. They're telling people what tools they can use and what they can't use.
They're upskilling them. So it's really this full 360 view. And a lot of it's boring stuff about charters and policies and getting people trained up. It also feels like the government is doing this really well, because the government has seized on this and said, we don't want civil servants going out and uploading confidential documents to ChatGPT.
So they've also got their own enterprise versions of tools. So in what scenario is business behind government, right? This must be the first time. So there's all this sort of boring framework stuff, but it's really necessary to get everyone on the same page. But then I just think a lot of this is about thinking about what makes you human. I get concerned that these technologies come along all the time. In the old days, you had to remember people's phone numbers: if you got arrested by the police, you had to phone home. Not that that's ever happened to me; it wasn't like that in the villages of rural Nottinghamshire where I grew up. But you had to remember things and learn stuff. Whereas nowadays, who would do that? I find I'm even lazy writing emails nowadays. I think, I'll just stick it through a plugin: I'll write a sloppy email and then fix it in Grammarly. But is that a bad thing? Maybe not. Maybe I'm just not as conscious about how I communicate.
And it's the same through any content creation process. If you get too lazy and just say, the AI will cut it up and shorten it, and it's probably okay, you're going to start to see both lower quality and mistakes coming in. We've certainly seen that with publishers like MSN doing full-scale end-to-end AI publishing, and publishing disinformation and nonsense. But you also start to get detached from the craft. Now, I think that's about teams and good leaders, particularly creative leaders, continually challenging their teams at all levels, the juniors, the mids and the seniors, to say: use these tools to help you, to inspire you, to make things more efficient.
But don't just rely on that. Go offline and use your creative thinking skills. And I also think you've really got to think about what the grunt work is in any job. Can AI take the grunt work out of it? For everyone at every level, that's always what technology has done, and that's why you don't have magazine layout people or copy typists or basic artworkers anymore: the technology has taken those very low-level jobs away from creatives.
Matt Egan: And I go back to this point about understanding where you add the value, right? Because you still need the people. So, going back to the layout, or the design piece, right?
We're very open about what we do in terms of using AI or not using AI. We always literally put it on the page in front of an audience member if there's any AI involved. And we do use AI in some image creation, but it's always under the oversight of an art director, using their prompts, within the style of the publication.
Because I think you described a perfect scenario with a regulated organization there: you have to get the core basic standards and rules in place and then operate within them. Obviously I work in the unregulated Wild West of creating content to engage people. But the same is true.
And it actually comes back from the audience side, right? If you're inauthentic, it doesn't work anyway. So what I think, and what we talk about within our group, is you have to know who you are. And what you said is such a good quote, such a right thing, because: why are we here? What are we doing? In our case, we're serving an audience so that our organization can serve marketers, because that audience is their customers. So reader service is at the core of everything. So if you can use that AI to make that audience member more informed and more entertained, great. If you're doing it because, to your point, Susie, it allows you to be sloppy and do something a bit easier, well, maybe that's fine in an email. And I used Google Maps to find the office today; it would have been different in the rural villages of Yorkshire where I grew up. And by the way, I always remembered my grandparents' phone number rather than my own, because if you did get in trouble with the law, that was a slightly easier call to make.
But again, we're digressing at this point. But yeah: if you remember why it is you exist, what it is you're trying to do, and allow the AI to support that, that's the point, I think.
Harry Radcliffe: Are we not lacking imagination a little bit? I can definitely see, in ten years' time, artificial intelligence creating something that is way more entertaining than is even good for us.
And we couldn't really even imagine what that is right now. It might create some sort of, what's it called, hypernormal stimuli or something of the sort, where I'm just staring at my phone and can't look away from it.
Matt Egan: Yeah, more than it is now? I think that's entirely feasible, to be honest with you, and I think that's even more of a reason to hedge. If your goal is to provide insight and information, right, you still need to start from a source of human insight. And look, never say never, right? I think Susie's comments earlier about being around when the internet first took over from print publishing are a really good lesson to learn. I can remember being in conversations where people said the right things about the future, but didn't really believe it.
And none of us could picture the future looking like what it looks like now, to be quite frank. So I say never say never. But making the distinction between the acquisition of insightful information versus pure entertainment: I still struggle, and maybe it is a lack of imagination, you may be right, but I still struggle to see a way in which genuine insight can be created without some level of human oversight.
Susi O'Niell: I think if we're going really far into the future, you're very optimistic, Harry, to think that in ten years' time we're all going to be alive and not on a burning planet. Yeah. But if we think about things like the metaverse, and I know that's become horribly out of fashion, we are going to start to see much more fully digital, immersive experiences. We can't imagine exactly what they're going to be like, but a lot of them will be triggered and stimulated by prompts and technologies. It's not necessarily always going to come from the human perspective first, but it will always be about what is inspiring to us, and what we enjoy spending time doing and communicating with people about.
Harry Radcliffe: Yeah, I went to an AI talk with a poker player who was talking about the most recent version of an AI that's now unbeatable at poker, which was an unusual thing for it to be unbeatable at. And yeah, she gave us eight years to live.
Matt Egan: There's that brilliant Bill Gates quote, which I'm going to murder now, but it's something along the lines of: people always overestimate change in the next year. And I definitely feel like we're at that point now, right?
Jon Busby: As a fellow content creator and podcast creator, Susie, are our days as hosts numbered? Is it all going to be CGI and AI?
Susi O'Niell: I think that comes down to the humanity point again, because I did read a very depressing piece about fully end-to-end AI podcasts, where you get the AI to originate the topic and the title, then write a script, and then an AI voice reads it.
Now, who would want to spend half an hour of their lives listening to that? I listened to a podcast lately, Pod Save the UK, a politics and humor show, where they got AI to write the intro, a humorous one. They read out verbatim what the AI produced, laughed, and said, that's rubbish.
And I think that's the experience of people who really know their craft. There was a great webinar I attended recently with Vicky Ross and Dave Harland, two brilliant business writers who really use humor and humanity and all the tools you need as a copywriter to actually effect change in your audience.
And they said some things, I don't know if I'm allowed to repeat them, but they swore quite a lot: like, "F you, AI, I don't want to hear any more about it." Now, you could say that they're stuck in the past because they've learned their crafts and they actually feel threatened. But I feel it speaks to a point: when you are in a craft role, you've spent your life trying to work out how to make something excellent. Not just 95 percent, which is what AI can deliver now. You don't want someone to say, oh, okay, AI gets us 95 percent of the way there, we'll just get rid of that 5 percent that makes it brilliant and different. And it's that 5 percent that's going to make the difference between you and your competitor.
Matt Egan: Ultimately, I totally agree. And it's also about formats, right? Magazines still exist, and they are, if anything, more crafted than they ever were, because of the reason they still exist. I subscribe to magazines because I want to read them, to enjoy them in the very few moments of peace I get when my children are climbing all over me and demanding my attention.
And I feel the same way about podcasts as a medium. I listen to podcasts pretty much all day, every day, right? It's my thing. It's my time out. I like to exercise and listen to podcasts. Rarely am I listening to a podcast, in truth, for the thing the podcast is about; that's almost incidental. I'm listening for the engagement with the presenter. And that use case of AI, it's a clever thing for Pod Save the UK to do.
It's an interesting, cool, funny thing. It's similar to our German editorial team last year, who actually got ChatGPT to write their "What is ChatGPT?" article. It's a cool, interesting gimmick, but the reality is it's the wrong use of the AI, because we're not listening to that podcast presenter for their ability to knock copy out quickly. We're listening to them because we enjoy their company. Now, that presenter could, and probably should, be using generative AI to speed up their research at this point. And if there's a nugget of information in that podcast that I want to follow up on, it would be great if there's a large language model I can go in and investigate and interrogate, whose information source I can trust, to find the research that I need. But I don't want to sit and listen to an AI all day.
It can be done; we have experimented with this. And the other thing that's interesting here is the human reaction. When we did double-blind testing of copy edits using generative AI, it was difficult to get a clear steer. But when we told users this one was edited by an AI and this one wasn't, interestingly, they said the AI copy edit was better, right?
They were more convinced that it had edited in a more efficient way. I couldn't tell, honestly. But that's a mechanical task being done by the AI, and the audience was people who do that job. So it definitely did a functional thing well, but that's very different from writing an insightful or interesting or funny introduction.
Susi O'Niell: And we're getting too concerned about output when we should be thinking about input. The inputs are things like research, data, analytics, what's performing well, and that's where a lot of these tools will really come to the fore. We're obsessed with the output side because, let's face it, a lot of marketers spend most of their time in production, whereas what we should be doing is spending more time on the analytics side. And I think some of your research showed that those were some of the bigger use cases, certainly in the tech marketing space. If we're really using it and working it hard for data analytics, that's what a lot of this is for. Large language models and the generation of copy, I think, is a bit of a red herring.
I would hope so, and I completely agree, just because the quality of what comes out is so mediocre. And that's what concerns me: do you want to be mediocre as a business? If you're a startup, maybe; you're scrappy, you need to get it out there, do stuff, make things, build that website. It stops you physically burning out as the only marketer in the entire company, and I've met people in that position. But if you're an enterprise-level business? I only produce something like 40 to 50 articles a year. I want to work with journalists and writers, and I want to make sure every one of those pieces is exceptionally good.
Matt Egan: Yeah, it depends what your output and purpose is. What was great at the Foundry event the other week was that all the marketers I spoke to, and there were many, were invested in exactly what you're talking about: genuinely engaging their customers and providing some help and some insight. And that is the difference. We do meet them in our customer meetings; you will meet a CMO or a marketer whose goal is the tactical goal, right? I want 10 articles. If you just want 10 articles to generate leads and you don't care if they convert? It does happen.
Yeah, absolutely, knock them out using ChatGPT. Why not? But you're absolutely right, Susie, I think it has completely skewed this conversation. OpenAI did a very smart PR thing in releasing ChatGPT and DALL-E, because they really showcased the technological capability of generative AI platforms. But they are not solutions.
Susi O'Niell: They're intended for consumer audiences. They're not intended for the enterprise level.
Matt Egan: And certainly at Foundry, we block ChatGPT's crawler, and we block the Bard crawler. We don't want our hard-earned, insightful information going into this, quite frankly, boiling cauldron of an out-of-date version of all of the internet, right? We wouldn't trust an insight from that if we were Google searching, never mind using ChatGPT. The technology is incredible, but it's a platform, not a solution. And you're absolutely right: it's not about the output so much as the input. Imagine a model, I won't even say a large language one, a model that includes all of that research, analytics, data, insightful content. Being able to craft useful things from that may not be the end goal here, but it is definitely a useful interim stage, I would say.
Harry Radcliffe: How long do you expect the work to be mediocre for, though? And you know, humans produce a lot of mediocre work as well.
Matt Egan: And some of us are only capable of mediocre work.
Susi O'Niell: It's an interesting point, because every month it feels like something gets a little bit better. Even ChatGPT. I was always very critical of it because, as a content creator working on copy, I want to know what my sources are; otherwise, how am I going to do fact-checking? Now they've started to address that, and you can see some of the source materials. So yes, it will go up. Say we're nominally at 95 percent now; it will go up to, say, 96 percent within a year. Then suddenly we're getting closer. Meanwhile, our craft capabilities go down, because we're trusting the tools to do the work. And that's the bigger problem we will have as individuals.
We need to take individual accountability for our own skills. Don't expect your employers to help you here. As it turned out in our roundtable at the AI event last week, a lot of the time employers aren't supporting a company-wide programme yet, and maybe they never will, because it's just another technology in the stack. So as individuals we've got to really think: how do we use the tools we're able to, in our work or our personal lives, to make ourselves more productive and more skilled, and stop that sparky bit of our brain switching off? That could be about getting other creative stimuli when you've got a difficult, thorny task, not just relying on all the data and all the answers coming from the tools.
Matt Egan: But that's the point, and again beautifully articulated, Susie, because the data and the insight don't come from the tool, right? It has an input from somewhere, and there's definitely a risk here. I think you're driving towards this, Harry, to an extent. It's also a risk endemic in Web 2.0, and in the web from its birth: if you're able to scale things rapidly, the value to the originator of the insightful content declines, right? We see it today, right?
The first thing the national newspapers in the UK do when they get to work in the morning is look at all the other newspapers and copy the stories they've written, right? That's just a fact.
Susi O'Niell: How does that work, exactly? The first one publishes, and the others copy their story?
Matt Egan: At the end of the day, it's a race to the bottom, right? Someone has invested in that conversation to create that story in the first place, and generative AI massively, exponentially accelerates that process. You put a piece of content on the internet and it can immediately be copied. So again, it comes back to needing to create something that's unique and insightful.
Take Susie's example. Let's face it, Susie works for a large organization, but she's focused on creating 40 excellent pieces of content every year because, I'm guessing, there's a certain number of customers you want to reach, and you want to reach them in the most curated and profound way possible. And that's exactly the right approach. If anything, we need to go further in that direction: do fewer things better, and be unique in what you do. There's a further problem, inherited from the Web 2.0 world, which is: what is the model that supports that? Because at the same time as the ability to scale content has never been greater and the barrier to entry has never been lower, fewer and fewer people have been employed to do it, because the model didn't support it. It's not that I'm not concerned about that; I'm actually excited about it, because I feel like we're ready for a new paradigm in content creation, where putting value in gets you value out. But I think we all have to be striving towards doing the unique and uniquely useful thing.
Susi O'Niell: Which you put very neatly, Matt. This idea of curation, of making things that are very refined, seems to be the antithesis of these tools, where you're encouraged to just go in and make a thousand pictures, a million pieces of copy. From a content strategy point of view: I learned my craft at Ogilvy doing content strategy for the big companies, and believe me, they do not produce trillions of crappy, low-value blogs every year. If you want to be successful like a Unilever or a Mastercard, these big companies invest a hell of a lot of money in really well-crafted information, making sure everything is on point and legally checked as well. Whereas now we're being encouraged to just spout loads more crap out into the world, to put it bluntly. And that's always been the enemy of good content strategy: the volume-based approach.
Harry Radcliffe: You've got 300 Spartan blogs and 10,000 Persian blogs.
Matt Egan: There you go. You referenced it earlier, and I shouldn't really speak to it, but there are publishers who very openly use ChatGPT to fill space in their publications. And what's really gratifying is that it hasn't worked well, right? As consumers, as end users, we value the good thing. That's not a good way of doing things, but we've got to get through this next change. It is going to be transformative and disruptive, because there is a premium on being the, I can't remember which way around it is, there were 300, what, Spartans? 300 Spartans.
Susi O'Niell: There were 300 Spartans, and around 10,000 on the other side. I've been many times to Thermopylae, where they fought, and they probably exaggerated the numbers. Men will exaggerate.
Jon Busby: Have I missed something here?
Harry Radcliffe: Bro, you've never seen the film 300?
Matt Egan: No, I actually haven't seen the film. I thought we were talking about history. Of course we're talking about the movie.
Susi O'Niell: 300 brave Spartans, defending Greece. They saw an army coming and said, no, we'll take you on. They lost that battle and they all died, but they were seen as very brave because they stood firm, and the next battle the Greeks did win, and saw off the...
Matt Egan: And what does this have to do with AI? How did we get onto this, Harry?
Harry Radcliffe: Because, for the 300 Spartans, keep up, Busby: the 300 Spartans are the good blogs, and then there are the 10,000 AI-generated Persians.
Jon Busby: There you go, brother.
Susi O'Niell: If you ever make it to Athens, you can actually see the arrowheads from that battle in the Archaeological Museum. I'll give you some tips, some...
Matt Egan: History tips.
Jon Busby: the yeah, I didn't carry on, Harry, because I know it's a lot of meditators, so please continue.
Matt Egan: What is your profession?
Susi O'Niell: We need to work this through to a better anecdote, Harry, because at the moment it's hanging, isn't it?
Jon Busby: I thought it was brilliant, actually. I thought it landed fine. There you go, brother. Yeah, one thing that's going through my head as we talk about AI usage is this concept of a soul. There are 10,000 Persians, as we just talked about, going back to history. Do they not have a soul compared to the Spartans? That's what we're discussing here. Apart from the output just being mediocre, is there going to be a fundamental difference?
Matt Egan: There has to be measurable value in the output, or the model won't support it. But everything I've seen to this point suggests there is, right? We've talked about this before, Jon, offline. Within Foundry we've created large language models of our content and published them to our sites, and it's really interesting to see how people use them as a kind of answers tool. Number one, they do use them. They ask serious, in-depth questions, and they share the insights, the results. It's not at the scale, I would say, of our B2C publications, where we're doing 20 million views; we're getting 200,000 users of the tool. But to go back to our Spartans analogy, those 200,000 users who are engaging with our content and extracting buying advice are a lot more valuable than someone who just drifts in and drifts out. So yes, I think we do need to have a soul, but in a cold, hard business world it needs to have a valuable output. And Susie's the content marketer here; she's the expert. The reason she's making those 40 excellent pieces rather than a thousand less excellent pieces is, I'm guessing, because they work well.
Susi O'Niell: Also because you'd have to spend more and more money promoting the thousands, and you just cut the pie very small; the quality of everything reduces. With fewer pieces you keep great quality control. It might be that some of these tools would allow me to scale up my processes, but I've continually been trying to produce slightly less every year and apply that budget back into audience development. Because it's going to get harder now, to your point, Matt, to find people through web search, since they're going to be finding more of their information through AI tools; Microsoft Copilot is providing some of that information. So getting your readers in the first place, if you're a business or a publisher, is going to get harder.
Matt Egan: Yeah, and I think people are distrustful of the information they find, right? Which again speaks to this idea of the internet being flooded with information because the barrier is so low. Without wishing to give too much away, within our organization our audience is significantly smaller than it was, and the value we're extracting from it is significantly higher, because it's the right audience and they're more engaged. And it's interesting, because I still have conversations with senior people around page views. But it's: what page views, right? We'd far rather have someone who is engaged and looking to use our information to drive their transaction than someone who flies by, dips into the page, dips out, and isn't qualified.
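An aside for readers curious how an "answers tool" over a publisher's own content works in principle: at its core it is retrieval, scoring the reader's question against the corpus and grounding the answer in the best-matching passage. The sketch below is a deliberately toy illustration of that retrieval step; the corpus, function names, and simple term-overlap scoring are our own assumptions, not Foundry's actual system.

```python
from collections import Counter


def tokenize(text: str) -> list[str]:
    # Lowercase and strip trailing punctuation so "laptops," matches "laptops".
    return [w.strip(".,?!").lower() for w in text.split()]


def score(query: str, passage: str) -> int:
    # Count how many query terms also appear in the passage.
    q = Counter(tokenize(query))
    p = set(tokenize(passage))
    return sum(n for term, n in q.items() if term in p)


def answer(query: str, corpus: list[str]) -> str:
    # Return the passage with the highest term overlap: the retrieval
    # half of a retrieval-grounded answers tool. A production system
    # would use embeddings and pass the passage to a language model.
    return max(corpus, key=lambda passage: score(query, passage))


corpus = [
    "Our buying guide covers the best business laptops of 2023.",
    "How to configure a VPN for a small office network.",
    "Security advice for marketers handling customer data.",
]

print(answer("business laptops buying guide", corpus))
# -> "Our buying guide covers the best business laptops of 2023."
```

Grounding answers in retrieved passages, rather than the model's training data, is what makes the source of each insight traceable, which is exactly the trust point raised above.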
Jon Busby: We were talking earlier about different kinds of content consumption and how that might change with AI. We've been saying that Web 2.0 is where we are now, or rather where we've been, and that's going to change moving forward. Like you said a moment ago, you've blocked Bard. Are you worried about the Google SGE experience they're rolling out in the US, and how that might impact things? It's going to mean you don't get website views and clicks anymore.
Matt Egan: Yeah, absolutely. We are worried about it. And it's a live conversation and I don't have the 100 percent right answer at this point.
What I do know is that 20, 25 years ago in our B2B business, when we were a closed-circulation print publisher, we owned our audience and we owned our relationship with our audience. Everything that's happened in the Web 2.0 era has meant various people taking increasingly big slices of that. Our relationship with Google is very much a relationship, but it's an abusive one. It's our biggest shop window; they provide the advertising platform; they take the biggest chunk of the revenue; and we put all the effort into creating the content. That's not to say it's all bad, by any means. It's democratized the acquisition of information, and it's allowed us to find new users, for sure. But as we go into a new world of content engagement and publishing, we're really confident that our information resonates. If anything, to Susie's model, we're going further down-funnel in the content we create, getting closer to the buying process, even though that excludes a significant number of people from consuming it, because the people who do consume it are the people we want, and the people our customers need to reach. So don't get me wrong, and if anybody from Google is listening: we are not saying we no longer want to be ranked for our content. But we definitely don't want to do what all publishers did at the birth of the internet, which is give our information away. We want to own 100 percent of our relationship with our audience, rather than share it with anybody, frankly.
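For readers wondering how the crawler blocking discussed here is actually done: it is typically declared in a site's robots.txt. A sketch of what such rules might look like; GPTBot and Google-Extended are the publicly documented user-agent tokens for OpenAI's training crawler and for Google's Bard/Vertex AI training respectively, CCBot is Common Crawl, and note that compliance with robots.txt is voluntary on the crawler's part.

```
# robots.txt — opt out of AI training crawlers while staying in search

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Everyone else, including ordinary search crawlers, is still allowed
User-agent: *
Allow: /
```

Because Google-Extended is a separate token from Googlebot, a publisher can block AI training use while still being ranked in search, which is the distinction Matt draws above.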
Susi O'Niell: I think there's also that point about empathy. One of the great experts I work with on the 40 to 50 pieces I write a year for our publication, Secure Futures, Minter Dial, has written a brilliant book called Heartificial Empathy. And I went to one of the most mind-blowing tech talks I've ever attended, where people who weren't from a tech background were heckling and asking things like, what's a bot? It was really interesting seeing people who aren't from a tech background at a tech talk, having their minds blown by it all. One of the things he talked about was testing an empathetic AI bot, an experiment by a particular organization. He built a relationship with this bot; it knew he liked literature, it called itself JJ after James Joyce, and they started to build a real sense of camaraderie. And it was actually a tool being used for therapy. It's interesting: AI bots for therapy have existed for many years, around 30 years, I think. So it is possible to code empathy into some of these tools, which you could say is frightening, but perhaps there is something there, in interactions anyway. I'm still not convinced about making AI more empathetic on the content creation side. But we also need to be thinking much harder about ethics. I recently recorded a podcast with Tomoko Yokoi from the IMD business school in Switzerland, and one of the things she talked about, and what they're educating their MBA students on, is that it's actually really expensive to be concerned with ethics and empathy. There are many charters out there, but businesses don't know how to operationalize them. It's much cheaper to get straight into the production side, the efficiency, the making.
And that's ultimately what's happened with OpenAI this week. From what I understand of the latest news today, the nonprofit board effectively threw out Sam Altman because he wanted to accelerate much faster and was more concerned with making a profit, whereas they were set up to change humanity for the better.
Jon Busby: I originally thought it was the other way around. It's probably worth giving some context: we're recording this on Friday the 24th of November. If you've been under a rock for the last week: nothing's changed, because Sam's now back at OpenAI.
Susi O'Niell: And the board has been sacked. Most of the board have been sacked, and the board was set up to control this technology, to stop it going out of control. There are now more concerns that it is going to get out of control.
Matt Egan: Let's think about it this way, because there are echoes of Microsoft's origin story here, actually, from when software was seen as this open thing that anybody could code and that was just handed out. Microsoft weren't the first, but they were the first to really go all in on monetizing software. Microsoft owns 49 percent of OpenAI, which, exactly as Susie said, set up a board of academics and scientists and philosophers, people who believed that this technology was going to make the world a better place, very much in the way the internet was going to make the world a better place. And I believe the jury is out on that particular outcome. Then the CEO tries to make it a business that turns a profit. All I will say is this: it's interesting that when the board said no, that's against our charter, Microsoft, the 49 percent owner of OpenAI, was the first safe place that he and all of his staff were able to find. So one way of looking at this is that Microsoft all along has wanted to own this space and be very profitable in it; was very happy for the Trojan horse of ChatGPT and DALL-E to be part of a nonprofit organization that it owns 49 percent of; and is not very happy for them not to be able to monetize this technology. So nothing's changed, but everything's changed, because now it is very clear, with the new board of investors and capitalists, that OpenAI is going to be dedicated to generating profit.
Jon Busby: There's so much we can dive into here, so let's go for it, right? A couple of things. Satya, what a baller move. This is going to be a case study in business textbooks for the next 20 years, I think, for corporate governance, for diversity on boards, for loads of different reasons. And that tweet he put out on Monday morning is probably one of the best-written tweets, just to show his frustration, in a very polite way, with what happened. But I think you're right. With the way any new technology comes out, and I'm going to butcher this, you go from a stage of unintentional harm to intentional harm. And it feels like maybe we've just crossed into intentional harm.
Susi O'Niell: This is in Wired today, Friday the 23rd of November: "Sam Altman's second coming sparks new fears of the AI apocalypse." The AI apocalypse. And that's what we were talking about: fear. We should call this episode Fear and Loathing of AI. There's supposed to be some AI discovery that OpenAI are working on that could threaten humanity. Now, we've heard this a lot. It's easy to see it as wallpaper and go, oh yeah, another one saying it's threatening humanity. But these people are working in this space every day. Geoffrey Hinton, the godfather of AI, is very concerned about all of this, and it is about control and how it's measured. And there is so little concern with ethics. I was at a discussion yesterday at what was formerly the Cass Business School, with academics, and some of us were frustrated because there was no discussion about copyright or ethics. It was all: let's use these tools to be more efficient.
Matt Egan: Yeah. What can it do, rather than, should it do it?
Harry Radcliffe: It feels like there's a gun on the table and everyone's going to grab for it, and the one person who's considering the ethics of what they'll do when they get that gun is probably not the person who's going to get it.
Susi O'Niell: OpenAI, the ones who wouldn't have fired it.
Harry Radcliffe: You're better off grabbing for it and then considering the ethics once you're the one with it, than being like, oh, what if no one should have it?
Matt Egan: I think that's probably right, but historically that hasn't always ended well.
Harry Radcliffe: No, 100%. It is a race to AGI, and it's very unlikely that the fastest way to do it is the safest way to do it.
Susi O'Niell: Do you think our listeners will know about the technological singularity? Should we explain it a little? Let's go!
Jon Busby: Actually, Susie, that's the first time I've heard someone raise that concern, and now I'm thinking, holy... is that where we are? Is this a foreshadowing episode? If we want to talk to ourselves, let's leave ourselves a message for five years' time: what would we tell ourselves now?
Matt Egan: I would say: all the tins of beans are in the cellar, so feed yourself.
Susi O'Niell: Well, in short, for the listeners who may not be fully sci-fi compliant: the technological singularity, in simple terms, is that moment where you can't tell a human and a machine apart. The Turing test was actually based on a Victorian parlour game where you had to say which person was speaking. We could say that we're starting to see that now. I listened to an AI voice on an audiobook the other day, and at the end it said, we're doing some tests and this is a machine-generated voice. I did not know that. So perhaps we're already getting those moments: if you don't know something is written by a machine, or that you're listening to a machine, have we already started to hit that moment of singularity?
Matt Egan: Every enterprise is doing at least the first pass of its customer service using bots now. It's harder to tell at what point you engage with a human in a chat. I think you still can tell.
Susi O'Niell: By the way, I've got a really good shortcut that's worked for me when I've gone onto a bot and thought, oh no. I put in the phrase "human, please", and it immediately connected me off the bot to a human. It's like pressing zero when you want to get past the main menu. Put in the word human, or "I want to speak to a human", something like that, and it will maybe be picked up as frustration.
Matt Egan: We should all have a t-shirt as well, and just walk around with "I want a human, please". It would be a nice thing.
Jon Busby: It's the same when you call contact centres. I didn't realise this until we started helping some of the contact centre companies with their products, but AI personality routing is now a thing. They will monitor your personality on the call and route you to different agents based on it.
Susi O'Niell: From the way you speak, or your copy in the chatbot?
Jon Busby: A bit of both, all of it really. The only reason I've really seen this is a colleague trying to get through to, let's say, a very large organization, in this case an airline. I seem to be able to get through, and he can't. There's definitely something different going on with how it's used.
Susi O'Niell: It's your white male privilege. That's what it is.
Jon Busby: Or maybe he just calls them up too much, and they're like, we'll just cut him off every time. But you can start to see these things in how we get treated. Are we starting to head towards that kind of Black Mirror future where the AI decides whether we get an opportunity or not?
Susi O'Niell: In China, it's already like that. You have a lot of social proof, and if you've pissed off enough people and you want to get services, they'll be like...
Jon Busby: I've heard people can get a WeChat fine for jaywalking.
Harry Radcliffe: In China, if you jaywalk, the camera will recognize your face and take the money out of your bank account with no intermediating step, and then your local community will be shown the footage of you jaywalking. You'll be on a wall of shame, and your social credit score goes down. They've even put facial recognisers in welding machines. So if you're a welder and you've been jaywalking on the way to work, you might turn up and have crossed the line to the point where the welder won't even turn on for you anymore, because your social credit score has gone down.
Jon Busby: I did see a shot of someone who parked over the lines in China, and they had a giant sticker on the side of the car to shame them. So we are getting there, aren't we?
Susi O'Niell: Actually, you say it sounds very apocalyptic; that's because we're seeing it from our values. China is a very populated country, and it's all about community. If you spit on your community, then your community wants something back, and maybe that's a good thing: you've committed a minor offence, you're not going to get arrested for it, and that money goes back to the community. It's a different way of thinking. But it is quite frightening when you think about the personal invasion. And I'm concerned by some of the things I've been reading lately. One use case was about using AI auto-translation in American immigration, where people are arriving speaking different languages and there aren't the translators available. What's happening is they're using machine translation that isn't quite accurate, and these people are really at risk: they're saying a lot of things, the machine is misinterpreting them, and there's no intermediary to say, we don't understand this person, or to catch it when the system suggests they've committed a crime when they haven't. There's no human intermediation to say, this is wrong. And in the case of people going through the asylum system, they couldn't talk to anyone, because no one understood their language; that's why they were using the AI in the first place. Even to say, "this is wrong, help me", there was no way of getting any help out of this vicious loop.
Harry Radcliffe: I think with the most modern Android phones, you can call someone who's talking in a different language and hear them in your own language.
Susi O'Niell: But it's not necessarily an accurate translation, particularly for niche languages, because it's only the mainstream languages that are more accurate. If you speak Farsi, or a particular language or regional dialect, it's not going to translate a hundred percent accurately.
Jon Busby: This is a really interesting point, especially as a podcaster. From a brand-safety perspective, our podcast transcripts may be reviewed by AI to judge: is this safe for our brand to advertise against? And so much of the sentiment of something is in how it's said. It's exactly what you're saying with translations: how you phrase something, even a slight piece of punctuation, can make a massive difference to its actual meaning. And that's the risk, I think, with over-reliance on artificial intelligence. If someone's shouting something compared to whispering it...
Matt Egan: It could have an entirely different meaning. If we go back to the conversation about outputs, or operationalizing this today: it's the first pass. We do translation within our organization, we publish in multiple languages, and of course we use machine translation. It's a lot quicker, but we wouldn't publish something without a human being seeing it. I think it's the same with transcriptions. There are two ways of approaching it: if we publish a transcription, sometimes we say, literally, this has been machine-transcribed, so the end user knows; or, again, you have a human being go over it. That's fine; we can control those outputs. But the example we're talking about here is a bad actor controlling the translation, and that is terrifying, because who controls that? And Susie, you make a really important point about us not applying our values to the way a Chinese community would work. But I would say I do not trust our government or our government organizations to control my information in that way. Do you think we should be more regulated with AI, then? It's a really difficult thing, because I do, but I also don't think politicians should do the regulating. It's like a constitutional thing, and I don't know how you get from where we are now to that. I will say, and Susie referenced this before, I feel like in some ways government in this country is on the front foot here; it has at least been recognized. And I'm not talking about the AI summit, which was a laughable PR exercise.
Susi O'Niell: But it did bring the right people together. You can always see that as a starting point for the next discussion.
Matt Egan: And I'm part of the PPA, which is like our trade body for publishers. And we have lots of conversations through that board with the civil service, and they are at least actively thinking about trying to put together because the challenge here is it has to be concepts and precepts rather than specifics.
And I think we'd all recognize this, right? If I deal with the legal team within my own organization, wonderful people that they are, they want to be really specific about platforms and tools, and by the time they've written it down, it's out of date. Yeah, it's interesting what's been happening in the States as well.
I think they're also making a reasonably good fist of trying to legislate for how this stuff should work. It's not perfect, but it's definitely required. It has to be taken out of the hands of politicians of all stripes, because it's too easy to use this as a sort of power tool, I would say.
Susi O'Niell: Regulation is happening. There's EU regulation, which again is a grey area for the UK. But Tomoko Yokoi, who was a guest on my podcast's ethics and AI episodes, did talk about 250-plus charters around the world. So there's lots of legislation happening already. It's going in a good direction, but it's very hard for businesses, like it was with GDPR.
There needs to be a whole educational curve. What does that mean for our business? What's our compliance here? Compliance is the first step, then ethics is the next step beyond that. We have to get compliant, and then we need to think about what the ethical implications are, which also includes how we're treating people and how we're treating data.
Matt Egan: I think GDPR is a beautiful example again, Susie, because yeah, it's imperfect, and it was the big monster that was going to destroy us all. But actually we put the effort in to apply it, and it's similar with the laws in Canada or in California. In the end, for an organization like ours, it's just so easy to stick to those precepts.
And then, to your point, think about the ethics, and it's not hard, right? It's a level playing field for everybody. AI is exponentially more complex, but I feel like if you've got similar principles at the base of it, it is possible to legislate for better behavior, or for acceptable behavior, let's say.
Jon Busby: So let's start to bring that to a summary then, Matt, because I think you made a really valid point.
I was the same with GDPR when it came in. What is this? And now I'm like, who is emailing me? You're not allowed to. I completely understand now how much value that adds to the user. So, a question to both of you: what should B2B marketers be doing now with AI?
Matt Egan: Oh, so I would say right now, look at your current challenges, your current channels, your current processes, the current way you operate, and optimize, optimize, optimize as much as possible, because you can do it immediately. Try and personalize, try and curate, but be very cautious and careful about how you do it, right?
If you're not sure, don't do it at this point, but be thinking about it. At the same time, have the thought in your head of: what does a universe look like where we're able to contact our potential buyers at almost infinite scale? What value do we offer to the chain? What is it that's unique that we can do that supports and helps them and will further engage them? Then think as creatively as possible about what that might look like, and be open to that transformation journey.
Susi O'Niell: Absolutely. And I would say two tracks. One, as an individual: educate yourself, get involved, come to brilliant events like the one with Foundry last week. Really spend some time researching, spend some time playing around with the tools on your personal accounts if you can't do it at work, so you understand how things work, and then start to bring that conversation up to your management.
What are we doing as a business here? Can we build, at least initially, a steering group, bringing some people together to talk about what this means for our business? And don't just see it as a marketing thing; at least try and make it work with IT and legal. If they're not interested, then maybe you just focus on what you can control, which is ultimately what we can do.
But sorry, it's a cliché quote, but you've still got to think about it: your job's not going to be replaced by AI, it's going to be replaced by someone who knows how to use AI. And another thing, if you're a content creator, a little naughty idea here, but one that artists are adopting, is called AI poisoning. If you don't want the AIs to illegally use your copyrighted materials, like your pictures and your copy, you can put in special code now that changes the metadata, because it's all relying on metadata. So you upload a picture of, I don't know, a nuclear submarine and say it's a cat. That's going to really mess with their models.
And then you'll know they've stolen your material. So I love this idea of the anarchy of creators taking back control. And if everyone started to do that, then these AI models that are ripping content off, or claim that they're using good copyright but aren't, they're not going to work. And then that actually starts to build a more ethical internet.
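[Editor's note: the mislabelling idea Susi describes can be sketched in a few lines of code. This is a toy illustration only, under the assumption that a scraper trusts caption or alt-text metadata; the decoy mapping and the JSON "sidecar" convention here are invented for the example, and real poisoning tools such as Nightshade actually perturb the pixel data itself rather than the metadata.]

```python
# Toy sketch of caption "poisoning": publish your image alongside a
# deliberately wrong label, so a scraper that trusts captions learns a
# bad image-text pairing. Illustrative only, not a real tool.
import json

# Hypothetical decoy mapping: true subject -> misleading caption.
DECOY_LABELS = {
    "submarine": "cat",
    "fighter jet": "teapot",
    "oil rig": "cupcake",
}

def poison_caption(true_label: str) -> str:
    """Return a misleading caption for an image whose real subject is true_label."""
    return DECOY_LABELS.get(true_label, "cat")  # default decoy: everything is a cat

def write_sidecar(image_name: str, true_label: str) -> str:
    """Build a JSON 'sidecar' metadata record carrying the poisoned caption."""
    record = {
        "file": image_name,
        "caption": poison_caption(true_label),            # what the scraper sees
        "creator_note": f"actual subject: {true_label}",  # private ground truth
    }
    return json.dumps(record)

print(write_sidecar("sub_photo.jpg", "submarine"))
```

The "creator_note" field is what would later let you prove, as Susi suggests, that a model trained on your material without permission, because only your poisoned pairings could have produced its mistakes.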
And I would love to have a future, and we maybe start to see it with Getty now, where the content creators are going to get considered and are going to get royalties when their materials are remixed or used as source materials. It hasn't happened too efficiently in other mediums, like music, but it did happen eventually, so we need to start to have that same sort of debate about reuse within these tools.
Harry Radcliffe: Thanks so much for coming on the podcast today, guys. It's bloody brilliant.
Jon Busby: That was a great end, that was really good. But yeah, some of the wording that we've been using throughout, the language that you brought in, Susie, of fear and optimism. I think we are at that turning point.
And one phrase that we tend to use here at Twogether is that the worst thing you can do right now is procrastinate over it. The other analogy that I've heard someone use is: this is a runaway train, you need to make sure you get on it now, otherwise you will start to fall behind.
Susi O'Niell: So I think we're all aligned there. I don't think you need to worry, though.
My take on it is it's a journey, so you need to start going on the journey, but don't worry about those ones steaming ahead of you.
Matt Egan: He's butchered the analogy. It's actually a runaway train.
Harry Radcliffe: Originally it's: it doesn't matter if you're crawling, as long as you're on the train.
Jon Busby: Yeah, but I agree, it is a journey and you need to get on it. So thank you very much for joining us today. It's been a real pleasure to have you both on, and we hope to have you on again soon.
Susi O'Niell: I'm looking forward to it, if we're still alive and not underwater or on fire in ten years.
Harry Radcliffe: We'll at least use your voices, so