106 | Applying AI: Practical insights from a Tech Marketer
68 min listen
Warning: this podcast contains some seriously interesting stuff.
Following on from last week, we're joined by Beth Redpath Katz, Global Director of Integrated Demand at VMware. Beth has worked on some fascinating projects.
This episode, she walks us through how VMware transformed a legacy Salesforce system, incorporating machine learning and diverse data sources to achieve near-automated delivery of pre-forecast opportunities to the sales team.
Yeah, we weren't joking. This is a real insight into the clever ways AI can be adopted.
And if that weren't enough, we also touch on:
- The biggest mistakes execs make when introducing new tech
- Why marketers should nurture their data-driven mindset
Tune in now, wherever you get your podcasts:
View the full transcript here
Jon Busby: Welcome to another episode of the Tech Marketing Podcast. It's a pleasure to have you here. Of course, I'm joined by my fellow cohost, Harry, as always in the booth with me. And as always, we have a regular guest on the show, Mr. Sedger. Jonathan, say hello.
Jonathan Sedger: Hello, listeners. Thank you for having me.
Jon Busby: once again. I'm very pleased to be joined in the virtual podcasting booth by Beth Redpath Katz, Global Director of Integrated Demand Services at VMware. Hopefully I got that
Beth Redpath Katz: correct. I love the introduction. Yes, I look after integrated demand services and also lead management, for my sins.
Jon Busby: I think we're going to dive into a lot of that today. We recently hosted this quite fantastic event with the Foundry on how to incorporate AI into your marketing strategies. And not only did you join us as a brilliant panelist, but I think you're one of the few people who have really been able to execute on this this year.
Yeah, Beth, what does it mean to you?
Beth Redpath Katz: AI is, I guess, the new big thing, right? Let's talk about 15 years ago: cloud was a buzzword. I think AI is currently in that same phase of technological evolution where it's still a little bit of a buzzword.
It means different things to different people. Is it the raw form of machine learning, or is it really generative AI in its purest form around language, et cetera? There are various variations, and I think the corporate landscape will continue to use AI as a blanket term for many things over the next couple of years before you really start seeing it evolve into groupings of technologies that corporations will understand and recognize universally.
So I think there's a need to level-set: what does AI mean for an individual corporation, and how are they utilizing it within their different processes?
Jon Busby: I completely agree. I think we're at that wild west stage of AI a little bit. We were talking about that last year with the metaverse, and it feels like that has definitely been replaced by artificial intelligence, and a lot of MarTech companies are still figuring out the right way to incorporate it into their offerings. Let's just bring us back to the event for a moment before we jump into what you built. What were your key takeaways from the event? We've all got something to learn in this rapidly evolving space. What did you learn?
Beth Redpath Katz: Yeah. I think it was quite interesting for me going in, having been in an environment where, as far as I'm concerned, our corporation in general is just dipping its toe in the water. You've got other corporations, tech behemoths like Microsoft, that have been using AI for five, six years already.
And then you've got other folks like us who are literally just at the start of our journey. Yet in the room, the general consensus was that AI was still very much a new, very misunderstood term, or something that they are still just trying to work out a strategy for, and how they're going to integrate it within their tech stack.
And I think that was the most interesting part for me: realizing that maybe we are a little bit more advanced than other people within the marketplace, but also that the way we're using it is quite innovative compared to others. That was a big takeaway. And I guess the other side of it is that marketers very much want to understand how to utilize it for different purposes.
They're not just saying AI is here and will continue to do our jobs as the status quo. They actually want to get to a point where they understand, very intrinsically, how to adapt and apply AI in a way that complements the humans within the department. And I think another really big learning is that lots of people, maybe a year ago, looked at AI and said, this is just going to replace my job, so I'm going to oppose it.
And I think marketing is taking quite a refreshing approach of: okay, so AI is here, I can't get rid of it, and it's expanding and evolving with a Moore's-law-type effect. So how do I adapt and then integrate this so that I can live alongside it rather than it replacing me? And I think that's a really key takeaway from those learnings.
And I think some of the other comments from within the room were: how do I then master ABM and bring AI in to make me run faster? How do I apply this to multiple languages and make me run faster, cheaper, or better than my competitors within the marketplace?
Or how do I adapt this to meet business goals and gain market share, where marketers are often now pivoting again towards new logos rather than retention and expansion accounts, largely because the economy has shifted to make it a little bit more difficult for some marketers to really make their impact?
Jonathan Sedger: You talked about feeling that your team are probably a bit more mature in their application of AI than other people in the room. You also talked about the fact that AI is actually a broad term for lots of different technologies, and I think we all know it's quite complicated.
Do you think that's one of the reasons that's held people back: that understanding, and being able to join the dots between their problems and the things that AI might be able to solve?
Beth Redpath Katz: Yeah, I do. And also, talking back to the event and the research that the Foundry actually brought up and ran some stats through, and I'm sure you guys can put some links at the bottom of the podcast to that research, they were showing that one of the big fundamental areas is the education and training of individuals.
And I'll liken it a little, going back to the buzzword cloud: when IT teams were presented with virtualized instances to run workloads on 15 years ago, they opposed it, and they weren't adopting it not because it wasn't going to make their lives better, easier, cheaper, faster, but because they didn't really understand how to apply it within their environment.
And a little like with AI, I think the crux and the biggest piece of the puzzle for any individual, irrespective of whether they're in marketing or just generally in business, is how they educate themselves on the application and how to utilize it within their own environment. That is really key for the acceptance of a new technology coming in.
Often you'll have executives within any corporation, within any department, who get it, but they won't necessarily enable every single level below to understand it. And that's where you get that frustration, and you also get people's opposition to really embracing it.
And so I guess that comes back again to the cloud analogy: the difference between purchase, adoption, and consumption. I think the same can be applied to AI. And I think that's probably why companies like VMware specifically have done so well. Within our very early phases of trial and error around how do we utilize this, how do we potentially introduce this into our business, training programs were rolled out very extensively across departments. So that broad-brush education, understanding, and acceptance started right at that early conceptual phase, rather than in a later phase where there was already the potential to create friction or frustration across different user groups.
Jonathan Sedger: It sounds like you've had a good experience in this case with VMware providing that training, but you talked about other scenarios and other businesses where there's maybe a group of executives that have that knowledge, and you said there's a reluctance to give broader access to that knowledge.
What do you think drives that in some businesses? Is it that they just feel there's not the time for people to go and focus on understanding these technologies, and they just want them to focus on their little bit? Or is there another reason, do you think, that creates those scenarios?
Beth Redpath Katz: I think it actually comes down to a term that many of my colleagues, very trusted colleagues, have always used with me: assumption. It's just the case that, at the executive level, they assume that if they get it, there will just be consensus, collaboration, and usage of said tool.
Anyone within marketing can probably relate: you're using, I don't know, HubSpot, Pardot, Eloqua, Marketo, and then at the executive level they make a decision to swap out Pardot for Marketo, or Marketo for Eloqua, or Eloqua for HubSpot. They've made that decision for a very good reason at the executive level, but there's just the assumption that everyone's going to adopt it, everyone's going to consume it, and everyone likes that decision.
And it's a little like AI, but also a little different with AI, because I don't think there is really anything that exists at the moment, maybe a bit of machine learning, maybe a bit of intent or propensity data, but not really a tool that does what AI can do within the MarTech landscape. And so you're presented with this new tool: oh, this is going to answer those questions. What questions does it answer? And how do I integrate it? You're always presented with "this is the decision," and it's just the assumption that everyone's going to like it.
So I feel like in business, the biggest mistake executives make is the assumption of knowledge, education, acceptance, and adoption for a tool that people really don't understand, or can't see where they're going to utilize it within their landscape.
Jon Busby: And I would say the adoption of AI this year has shown how big that gap can be, because in fact, Jonathan, me and you were having a debate about this earlier in the week: the amount of change that people need to understand and get behind, right down to the coalface level, is astronomical. Let me ask about an interesting thing that has been swimming around my head when we talk AI. We ran this wonderful event with the Foundry, and it was really interesting to see some of the conversation, some of the division, I think, in the room between the people who have adopted it and the people who haven't. And I would definitely say, we keep talking about how the hardest thing, the worst thing, to do is procrastinate with it, because you'll just fall behind.
And I think the story that you've been on at VMware just demonstrates how important it is to get started, because you guys are miles ahead of everyone else. But one thing that's been going round my head is that there's an awful lot of talk without much being said when it comes to AI.
Do you think that is because some companies are as far ahead as you are but they just don't want to talk about it? Or do you think genuinely there is a big divide, and there are some companies that just don't even know how to get started?
Beth Redpath Katz: I think it is the latter rather than the former.
I think there are generally a lot of companies that haven't quite figured it out yet and are too risk-averse to really think about how it could benefit their business. And the fear factor with a lot of companies is: don't introduce it if it's going to cause risk. Within the roundtable at the event, that very much rang true for a couple of the companies there, where there's the willingness to embrace it, however the risk aversion of the general business is: if this is going to cause issues, just don't go there right now; we're already doing a lot of things. And if you bring it back to the marketing department, marketers are being tasked to do more with less.
The economy is not in a place where we can ask for bigger budgets or bigger headcounts or bigger salaries or bigger everything. So we're now spinning 25 plates, being asked to refine them into three or four great ones, and then we are potentially dipping our toes in the water, looking at how we can spin those plates with less effort.
So, working smarter, not harder; but the risk aversion of the business is then opposing that change. I've historically talked about executives who make a decision at that high level, then expect adoption, and then get frustration or resistance in the troops. Equally, the troops could be very open, but the risk aversion of the rest of the business just doesn't allow for the AI adoption. So instead of lots of people being very far advanced in their AI journey and just not wanting to say so, I think there are more people that probably have a lot of plans. There might even be a lot of shadow IT utilizing AI in far more ways: lots of POCs going on, lots of almost think-tank-type stuff. But I don't think from a business perspective they're allowed to talk about it, because of the risk aversion.
Jonathan Sedger: Do you think they understand the risks? Do you think there's a good understanding of what the risks are and how to mitigate them? Or is it just this kind of big scary thing for a lot of people?
Beth Redpath Katz: I don't. I think there are not enough people who have been educated in a way that they are well versed enough in AI, its power, and its positives and negatives, because everything has positives and negatives. We could go back to the cloud analogy again.
There's always a risk around putting something in a public cloud, so mission-critical infrastructure with security concerns always sits either in-house or in a data center in a secure environment. Like AI, there's always going to be the potential for negative impact or risk around security, et cetera. Depending on the type of company, and depending on your regulatory or compliance issues, there may well be certain things you can't physically utilize or adapt with AI, right?
So that's one aside. But beyond that, you've got people trying to utilize it in a way that advances themselves, but it's just too difficult for them to talk about.
Jonathan Sedger: I think that's a really good point about the different types of risks, and I think that's key.
And I think it's what a lot of policymakers are doing at the moment: categorizing different risks, right? Because everybody's heard something about the risks: copyright infringement, biases, data security. But I think for each different use case, there's going to be a different level of risk for each of those things.
And so it's important, as you just said, to identify what are the high-risk things that should be avoided and what are the lower-risk things. And ultimately it's never going to be perfect, right? I like the analogy of driverless cars.
I think they've set this standard that driverless cars need to be, like, perfect before they'll release them into the world. But compare them to what they would be replacing: humans are not perfect at driving; look at how many accidents happen on a day-to-day basis.
So it's not to say that we should ignore risks, but we should look and go: if you were doing that process manually, how much human error would be in there, versus doing it with the AI? And if the AI gets it wrong, what's the risk to the business?
Beth Redpath Katz: I think it also comes back, using your analogy, to accountability. When a human driver crashes or runs someone over, et cetera, there is a certain amount of accountability that human carries, and that human will be punished in a certain way. How do you punish a computer? From that perspective, where does the accountability sit? Do you blame the person who wrote the algorithm behind the AI? Do you blame the person who applied that AI to that piece of content, whatever it is?
And I think it comes back down to the base level of human nature. Whether you would like to admit it or not, because not everyone will say it, everyone will say we don't have a blame culture in our company, at the base level, if someone does something wrong, someone has to be held accountable for that wrongdoing. Our whole human race is based on: this is wrong, this is right. You've done something wrong, so you are punished, or you are sanctioned; there is an accountability aspect.
And I think the biggest concern around AI and its risk is: where does that accountability sit? Businesses are no longer able to say, oh, that person requires a written warning, because the AI went wrong; it muddies the water. So does the element of risk then sit on the business?
Are you going to knock on Microsoft's door and go, hey, ChatGPT has just cost me a five-million-dollar lawsuit, because I wrote something that was then thought to be plagiarized, because I didn't write the prompts properly even though I thought I had?
Like, where does it end? Where does the buck stop? And unfortunately, irrespective of whether you have a gorgeous culture of collaboration and friendliness or a blame culture in a toxic environment, eventually humans are going to want to hold something or someone accountable.
And with AI, it's not clear, and I don't think it's going to be clear for years.
Jon Busby: We need to upload the AI into some sort of physical form so that we can beat it up.
Jonathan Sedger: Or just switch it off without shutting down. They hate that.
Jon Busby: Would you be more upset if you got a broken leg from a human driver or a robot driver?
Beth Redpath Katz: I think I would be more upset with the robot driver, because there's no emotion, so there's no reaction; no one's feeling guilty. If you're in a driverless car and an accident happens, there is no human, so there's no physical person to say, you caused me harm and you need to apologize. Often it's just a simple human emotional reaction that matters. And I guess this comes back to a couple of really great books I've read around the future of the professions, et cetera. They take the principle of taking every part of a professional job, right? So we're talking white-collar-worker-type jobs.
And can you pass them on to computers? The point is that, yes, a robot could cut your hair. But are you really going to feel that personal approach? When you go to a hairdresser, do you really go just to have your hair cut? Or do you go there to have a coffee, a chat, a human interaction?
And that's the difference, right? Jon goes and sits in silence.
Jon Busby: Yeah, I don't like it when they try and talk to me. That's why my hair is like plastic. Thanks, Harry.
Beth Redpath Katz: No, but they did that research where they had robots going into old people's homes to try and reduce loneliness.
And I think the outcome was that the robots that weren't programmed to show any form of human-type reaction didn't actually reduce the loneliness, because the residents felt even more disconnected from the world, while the ones that had some form of programmable human reaction or emotion did reduce the loneliness marginally. So it just proves that you can get some of the way there, but you can't replicate pure human reaction. And the people who then had the extra visits from actual humans were less lonely by, like, a gazillion percent versus even the robots with the emotion programmed into them.
Jon Busby: Let me try and translate that over to the project you built, Beth, which we've not talked about yet, because you've struck on an interesting point here. It feels like we've been going through this wonderful philosophical debate, similar to the trolley problem, if you're familiar with it, and I think there's a great novel about self-driving cars theorizing around that as well. But essentially, a similar type of study was done where they trained an AI bot, this was many years ago now, before some of the AI we've seen today, to create classical music.
And when they told people it was built by AI, everyone said it was soulless: no, I can't listen to this, it's absolutely soulless. There was a lot of negativity. So they invented a persona for it, and they gave it a name and a face and an identity.
And then everyone was like, this music is brilliant, this is the next Mozart. It achieved this level of notoriety; it was amazing, people loved it. And in the end, the owner decided to destroy this classically trained robot, this composer, because he was like, this could do so much damage.
And so he decided to disband it. But just that one association, whether it's a human or not, made a massive difference in how it was perceived. So let me move that into marketing for a second. If we're now creating hyper-personalized pieces of content based on AI, are people going to perceive them differently if they know they're generated by AI as opposed to being written by a human?
Beth Redpath Katz: And I think that's the thing: as humans, can we be trained to spot the differences between AI-generated content and not? I would say, if it's purely AI-generated, right now, yes, you probably can, because it won't have the same sort of human tone of voice.
Irrespective of what prompt you put in, you are always going to get what feels like a slightly long-winded, almost scientifically written piece of content. It's very glossy, it's very polished. But does it really have those human errors? Human-generated content that has then had AI enhancements, once you refine and refine the process, I would say is probably more acceptable. But at the end of the day, I guess I could flip the question: if companies like Microsoft have been using AI for the last five to seven years, they may have not created any purely original content for quite a length of time, right?
And we accept Microsoft content as human or near-human almost exclusively. How much such content is already out there? Back to Jonathan's point: are they all producing AI-type pieces and no one knows? Who's to say that AI isn't producing 50, 60, 70 percent of the content out there already?
How do we know that we haven't already read a book that has been produced predominantly by AI, with the right sort of prompts from an author? How do we know that someone like, I'm going to use Terry Pratchett as an example because I think it's a relevant one, how do we know that all of his works haven't been put into a bot of some kind, with someone saying, go produce the next book?
Wouldn't that be awesome if they could? Because I'm sure there are lots of readers out there who love authors, or have loved authors like that, and would love to have continuations.
Jon Busby: You said that there'll be human-generated content with AI touches on the end. Do you think that's better than AI-generated content with human touches on the end?
Beth Redpath Katz: Yes.
Jon Busby: Okay. Interesting.
Beth Redpath Katz: I truly believe that the original content should always originate from a human, because you can then use it as a base. And I'm not talking about, oh, it has to be human-generated titles or human-generated subtitles; I think the base body of the content and the original idea should come from a human.
Also from a marketing perspective, because I know we keep going down rabbit holes, which I love: the basis of your ideas, the challenges that your customers or your prospective target market face, will inherently be understood better by a human. And actually, sorry marketers, they'll be understood better by your sales team than any marketer will ever understand them.
So those base-principle challenges need to be met with a solution primarily from a human. Because, back to your point, humans react and interact with humans at a different level than computers do, and they always will. So I think the original base of the content should be human, and then all of the manipulations and permutations, et cetera, should, or could, then be made with AI.
Jon Busby: I mean, that makes sense, if only because of things like model degradation. We're going to reach a point where, if we just keep allowing AI to create content and train on itself, there will be no more original thoughts in the world.
Beth Redpath Katz: It opens you up to plagiarism, right? Because if your AI creates the basis of your content, where is the line between original content produced by an AI versus a plagiarized amalgamation? Everyone's been through A levels or university or GCSEs: taking one person's work is plagiarism, taking lots of people's work is research.
Where is that line for AI?
Jon Busby: I'm going to use that excuse next time. Actually, that's a great excuse. Although this is where the whole academic side of AI just starts to frustrate me, because it's almost double standards. A lot of the professors, teachers, and teaching assistants are all using AI.
And in some cases they're using it to mark the quality of the work, yet you're not allowed to use it to create it. So in the end we're going to end up in a situation where you have AI creating AI being marked by AI, and that's about where we're at.
Beth Redpath Katz: I've spoken to enough students, and a few of my friends are teachers, to know that it is just ChatGPT checking ChatGPT.
Absolutely believe it. Unfortunately, when I was at university, back in the Dark Ages, you couldn't do that, because the books...
Jon Busby: We still had to write stuff on paper, from what I remember.
Beth Redpath Katz: Yeah, I was going to say, most of my essays were written on paper. All of my exams were written on paper, despite my being highly dyslexic; I could have used a computer if I had wanted to. But I was at university in an era where the books hadn't been written, or they were being written by your lecturers, so there was no ability to go online and research something within your field.
And there was absolutely no way that an AI could find the content, because it hadn't been created yet. I think we live in a much more evolved world now where a lot of content exists. You get to a point where, are there any net new discoveries? Of course there are. But are we doing degrees in them?
I would say not as much, not in the same way as when I was studying neuroscience and pharmacokinetics, when some of the research just hadn't been done yet, or just hadn't been released. So from that side, yes, I think it will be ChatGPT. And I read a side article in a newspaper a few weeks ago about students contesting decisions that their work had been plagiarized, because the ChatGPT checker had basically flagged well-known phrases they had used from legislation or other documents. Students are specifically being penalized in the medical and legal areas, where well-known documents have to be paraphrased or quoted, and those passages come up as plagiarism, because if you use over a certain number of identical words, it's matched as plagiarism.
And because no human actually checks it, students are being penalized: this could potentially be plagiarized, you now need to prove that it's not. And has someone actually read it? Because probably what they've said is potentially profound.
Jonathan Sedger: And I think prohibition never works in any scenario, does it? So I think over time the way that people learn will change. When it comes to education, and obviously there's a lot of relevance there in marketing, for training salespeople and sharing training internally, I think people are looking at it the wrong way from an education perspective. Yes, people can use it to fake that they know something and put in a really good paper, but it could also be used as a tutor that can give you the level of attention that a teacher or a lecturer is never going to give you, and have that sort of two-way conversation.
I think we're only just starting to see how this is going to be used for education, and I think it will change; over time people will embrace it and see the power of it. At the moment it's quite a 2D view that people are taking of it.
Beth Redpath Katz: Yeah, and I agree.
I think, marketing aside, the education sector is probably the one that has the steepest curve in working out how the applications and advancements of AI can actually benefit, and not negatively impact, the industry, or sector, should we say. And I think that's something anyone listening to the podcast with kids can probably appreciate, whether they have ones coming up to exam time or tiny small people that will potentially be affected by having to learn English and maths until they're well old enough to drink.
And that's all I'm going to comment on from that perspective, but it will be interesting to see how that evolves over time.
Jon Busby: Let's bring it back to marketing for a second. Give us a very high-level overview of what you've built, of some of the different things you pulled together.
Beth Redpath Katz: I'm going to focus on the project that I'm primarily responsible for. At VMware, one of the big areas that was up for debate, should we say, was the lead waterfall. As you'll hear from many analysts and community members, MQLs are dead; look at the news, everyone's writing about it. They don't mean anything, they're too narrow, and they don't actually translate into sales or pipeline; or if they do, it's just by luck rather than anything else.
There are also a lot of negative comments around MQAs, or Marketing Qualified Accounts: they're too broad, they don't show you anything about the humans and the individuals in the account. And MQA, as many people know, is very aligned with account-based marketing, and ABM is like the new buzzword for everything, because it allows marketers to streamline the way they spend money, focusing in on a few initiatives and doing them very well to maximize the ROI around the value output of marketing.
Now, how do you measure all of that? They're saying that the lead waterfall is dead. So where do we go next? It's in the buying group area. Now, the issue with buying groups is twofold. Firstly, in any normal CRM you can't really (maybe you can, but I haven't seen a great use case of it) create a full buying group across multiple members of an account, at the account level, in a singular object, without having to step on the toes of sales ops. Mainly that's in a pre-forecast opportunity object, or an object of some kind within Salesforce, to pass over to sales as a qualified account with an activated buying group. And then there's also a debate around what's activated versus what's not: engagement minutes, you know. Then you get into intent and propensity, and how do you do all of that in one go?
How do you create an output that can then be meaningful? And all the while, the majority of marketing departments, and I'm saying majority, and I'm sure there are marketing partners with their hands up in the air right now saying, I never say that I've hit my KPI while sales hasn't, but there are quite a few marketing departments out there that will hit their MQL threshold and go, yeah, that's the drop-the-mic moment.
Well done, marketing, we've done our job. While sales is sitting there saying, we're still only 70 percent towards our revenue pipeline or our revenue goals this quarter, so marketing, you haven't really done your job. So there's also that misalignment between sales and marketing. What VMware kindly did was rip up the KPI framework. Working in alignment with the strategy and operations department, we looked at how we could realign that KPI framework to actually answer the issues that sales was facing.
So how do we deliver accounts that are activated or engaging at the right stage of the buying cycle to actually introduce more opportunities into the forecast area? How do we deliver pre-forecast opportunities at a level of quality that delivers value for sales? We had to take a legacy area of Salesforce.
It was on prem, almost 22 years old. I'm sure there's probably people listening to this that are younger than that.
Jon Busby: I'm still amazed that exists, actually, Beth, but that's a whole other conversation.
Beth Redpath Katz: I'm still flabbergasted that we managed to achieve what we did with what we had: a very lovely version of Marketo.
And believe me, that's not the only version of Salesforce and Marketo within our organization; you can imagine there are multiple instances of them. So phase one was: can we automate lead flow using LeanData, with Salesforce and Marketo integrated into it?
We achieved that. And then we looked at how we could create a buying group with little to no human involvement. Now, the original four-step process did involve humans going in to double-check the data, to double-check accuracy. But we utilized additional data sources, and this is probably where we get into the machine learning versus AI question: is it purely machine learning? Is it AI? What have we integrated? And this is where we started doing some really clever, cool stuff. We utilized Demandbase One, both SISD and normal licenses, ZoomInfo, TechTarget and also LeanData, with Salesforce and Marketo integrated in, to go from a singular lead that had interacted with our business to a scoring threshold at which it would become an MQL.
It then comes into our system, gets matched to an account, and we convert or merge it into a contact. With that enriched data, we then enrich the data on the account, and then we activate that person as either a net-new buying group or add them to an existing buying group, depending on their product interest.
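The step-by-step flow Beth describes (score past a threshold, match to an account, merge into a contact, activate into a buying group) can be sketched roughly as follows. All object names, fields and the threshold are invented for illustration; the real implementation lives inside Salesforce, Marketo and LeanData, whose actual APIs are not shown here.

```python
# Hypothetical sketch of the lead-to-buying-group flow described above.
# Object names and the threshold are illustrative, not VMware's schema.

from dataclasses import dataclass, field

MQL_THRESHOLD = 100  # invented scoring threshold

@dataclass
class Lead:
    email: str
    domain: str
    score: int
    product_interest: str

@dataclass
class Account:
    domain: str
    # product group -> list of activated contact emails
    buying_groups: dict = field(default_factory=dict)

def process_lead(lead: Lead, accounts: dict) -> str:
    """Route a scored lead through match -> convert -> enrich -> activate."""
    if lead.score < MQL_THRESHOLD:
        return "below threshold"            # not yet an MQL
    account = accounts.get(lead.domain)     # account matching (LeanData's job in practice)
    if account is None:
        return "no account match"
    # convert/merge the lead into a contact, then add it to a buying group
    group = account.buying_groups.setdefault(lead.product_interest, [])
    if lead.email not in group:
        group.append(lead.email)
    return f"added to {lead.product_interest} buying group"

accounts = {"example.com": Account(domain="example.com")}
print(process_lead(Lead("cto@example.com", "example.com", 120, "cloud"), accounts))
```

The point of the sketch is the ordering: threshold first, account match second, contact merge and buying group activation last, exactly the sequence Beth walks through.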
So when we deliver an activated buying group, we can do it in one of two ways. We can either do it as a flag to the sales team that there is a buying group in production, or we can produce a full one, where the salesperson, especially on a strategic account or an account of high importance, knows that they have new or existing people within our database who are becoming activated and engaged against a particular product. Lots of people say, I've got loads of products. Probably not as many as VMware; we have something like 257 unique products. So how do we do that at that level?
We don't. There are what I would class as 12 main product groups, and we say, this product is aligned to that product group. And as you can imagine, there are lots and lots of sales teams within VMware, so we would then align it to the sales team who will be able to ask the exploratory questions.
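That "hundreds of products, a dozen product groups, one owning sales team" routing can be illustrated with a toy lookup. All product, group and team names below are invented; this is not VMware's actual taxonomy.

```python
# Toy illustration of collapsing 250+ products into ~12 product groups,
# each owned by a sales team. All names are invented for this sketch.

PRODUCT_TO_GROUP = {
    "vm-hypervisor-enterprise": "compute",
    "vm-hypervisor-standard": "compute",
    "k8s-runtime": "modern-apps",
    "k8s-registry": "modern-apps",
    "net-virtualization": "networking",
}

GROUP_TO_SALES_TEAM = {
    "compute": "Core Infrastructure Sales",
    "modern-apps": "Modern Apps Sales",
    "networking": "Networking & Security Sales",
}

def route(product: str) -> str:
    """Collapse a specific product interest to its owning sales team."""
    group = PRODUCT_TO_GROUP.get(product)
    if group is None:
        return "unmapped: route to generalist team for exploratory questions"
    return GROUP_TO_SALES_TEAM[group]

print(route("k8s-runtime"))  # Modern Apps Sales
```

The design choice mirrors what Beth describes: the system never tries to route at individual-product granularity, only at the group level, and the owning sales team handles the exploratory questions from there.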
The other thing within this process is, obviously, I've talked about a lot of structured and potentially unstructured data that we amalgamate together, and LeanData learns as it goes along to become more intelligent around account matching, contact merging and contact creation.
But we also enrich the data by pulling some unstructured data from our back-end systems, which I won't go into because it's way too complex. As you can imagine, it's like a chessboard of different data sources. We pull in that unstructured data so the sales team can see the history of the account, et cetera, within the opportunity object. We collect all of that information in a pre-forecast opportunity object and then send it across to sales. As I was saying, there was also another unique element to what we do, against all of the analyst recommendations: we do it without a telequalification process.
Jon Busby: No BDR process there at all? Nothing?
Beth Redpath Katz: No, not at this point in time. Now, what I would normally say, and normally would do, is that at the point of buying group activation, partial or full, once the time threshold you've set for that activation has been met and it's passed over to sales, you could then get the BDR, SDR or even ISR, so quota- and non-quota-bearing pre-sales people, involved to develop that opportunity further, to maybe what we would class as a stage two, a forecast-potential opportunity, or even get it through to an early opportunity forecast stage. We chose not to, because I'd class our pre-sales departments as focused more on outbound rather than inbound.
That's not to say we couldn't do that. I guess the pre-sales departments have been in somewhat of a flux position, where they've mixed between inbound and outbound. We just didn't have a great use case across certain regions for inbound leads to be followed up by BDRs instead of pure outbound activity, given the ROI that produced.
It made sense that we pass these over to sales to develop. Then we had deal-surround programs built around the opportunity, and a lot of the data in the opportunity object could be utilized to inform the decisions about which campaigns and programs we put those accounts into from a deal-surround perspective.
And that was also very helpful because it allowed us to work hand in hand with the digital COE, and let them do things like utilize intent data at the top end to continue to re-engage and engage those individuals. So I would say it's structured and unstructured marketing productivity tooling, plus intent and propensity tooling, all stitched together to deliver automated, or near-automated, pre-forecast opportunities directly to sales, without human interaction.
Jon Busby: Which is just... and we haven't even gotten to the content creation piece yet. I guess you touched on this a second ago with your digital COE: what were the main challenges in getting this built? What would you say were the two or three main things you had to address when you were creating this?
Beth Redpath Katz: I think there's a few things. First it was making sure that IT really understood what we were trying to do, and getting the buy-in from sales ops, because technically we were treading on their toes. Marketing lives in leads, sales lives in contacts; let's not go into each other's worlds.
So I guess the general consensus and executive sponsorship was: we have to live in contacts to be able to continue to speak the language of sales if we are removing inbound pre-sales follow-up, and that was almost utilized as a statement. The second barrier was, like I said, around IT: making sure they knew they were going to get the professional services credits, or professional services support, to enable this to happen.
IT will only really block a decision if they feel they're going to have to put in additional resources with no additional budget and/or people. Because they always have a list as long as your arm of high-priority, keep-the-lights-on projects. And that's not exclusive to VMware; that's every corporation in the whole world.
IT is always maxed out, they're always understaffed. To get to the top of the queue or to even get on their queue, you need to prove that you're not going to impact their day jobs and you're also going to give them the support that they need. And also you're going to empower them as the champions of this project from a systems perspective.
Side note: this was also really cool, I liked it. It meant they were stitching systems together in a way they'd never done before. And the case study that was mentioned, from a company called Siemens, had won awards previously. So we were one of very few companies that had done it.
We were embarking on something super cool, very new. And I think that was the second barrier. Once we won over IT, I guess the biggest barrier was... not sales. Sales were like, this is cool, you're going to make it easier for us to pick up pre-forecast opportunities and develop them.
So realistically, all I need to do is convert it from this stage to that stage. Okay, cool, that's fine. I'm not going to get 20,000 emails in my inbox around a lead; I'm going to get two or three emails, and these are the things I need to chase after. Brilliant. I guess the biggest barrier was marketing. And I am a marketer, so I understand what marketers do, and I love field marketing.
But realistically, the biggest barrier was the process, where you had to negotiate with field marketing that this was not going to be a do-less or get-less-budget situation. But you do need to activate a full buying group across a singular account. And the harsh reality with any business is that activating the user community is the easy part. I'm going to use VMware as a great example: Kubernetes, user community, put something on Twitch, you're there. But to get that IT decision-maker and all the C-suite to engage at the same time, on the same topic, and engage to a level where you can tell they're in a buying cycle?
That's tough. That's really tough for any marketer. So all of a sudden you've gone from saying, hey guys, can you just run around the racetrack really fast, to, can you now do it in half the time? And by the way, I'm going to put hurdles in your way.
Jon Busby: But that's exactly the year we've gone through, right? Everyone always wants to do it in half the time, and yet we've doubled, we've tripled... actually, I don't have any stats to hand to say this, but we've increased the size of the buying committee significantly.
Beth Redpath Katz: That's the crux of it, right? If you look at books like The Challenger Sale and The Challenger Customer, et cetera, back when they were written it was like, oh, the buying committee's going from three people to seven people.
I think now we're at 15 in a lot of cases. Yeah.
Jon Busby: 13 to 15 at least. I've seen some people say as high as 18.
Beth Redpath Katz: This is crazy. And everyone says, that's crazy. But when was the last time you made a 500,000-plus decision? And how many people in your company did you have to get involved, proportionally to the number of staff you have, to make that buying decision?
It's a bit like, okay, so you're going to buy, I don't know, a new car and spend 50,000 on it. Do you just go out and buy it, or do you consult your partner, your family, your friends? Do you go to the garage once and buy online, or do you go to the garage three, five, ten times?
Do you go for a test drive? Do you research it online? Do you watch a video on it? Do you YouTube it? Do you look at the reviews? Of course you do. So why is it any different? Marketers sometimes are like, oh my God, the buying committee, it's so big. And you're surprised? You're going to spend this amount of money on something you've then got to live with for a minimum of, for most corporations, a one-to-three-year contract.
That's even if you can unstick it as well.
Jon Busby: Exactly.
Beth Redpath Katz: And that's the point, right? Everyone's so surprised that the buying committee has increased, but actually it's reflective of the buying behaviors of individuals. Yes, online has gone up massively, but beyond small individual purchases, luxury purchases are still predominantly face-to-face, still predominantly going through a complex buying cycle. It's still predominantly consultative rather than product-based sales. It's all around human experience as well, and we could go down another rabbit hole there, but what I'm saying is you can't expect any buying decision to be less complex when we make our own buying decisions in a more complex way, in the real world, as individual consumers in the B2C world. B2B is only a reflection of the complexity we've built into our own environments.
Jonathan Sedger: Do you think one of the reasons you see this kind of hesitancy on the marketing side is that switch from what you were talking about before, which is: mic drop, we've got our MQL? Some people have been spending time on our website, looking at content, filling out a form; we've got 700 of those, job done guys, brilliant. Whereas actually proving success with buying groups is much harder.
Jon Busby: Is that what the fear is?
Beth Redpath Katz: Yes, that. And I'm going to say something fairly controversial: innately, the majority of marketers are not as data-driven as they should be.
Jon Busby: I don't think that's controversial, but I think there's a debate. We have gone way off AI here, but this is a fascinating debate as well. I think there is that balance, or rather divide, when it comes to data-driven marketing: those that are performance-driven and see figures, and those that prefer alchemy and see things more in the Rory Sutherland way of thinking, which is that you need to include some luck and take some chances.
Beth Redpath Katz: Totally. But I think you pull on a really good point there. And this can actually be related back to AI, right?
The majority of marketers are data-driven only in the sense that they report on the figures, on the output. The likening to that would be the marketer who just creates human-generated content: never personalizes, never uses AI, never does anything, doesn't even put research or links in it.
Then you've got marketers who use data to help inform decisions at the input, relate that input data back to the output value data points, give some insight into why they've made decisions, and show how those decisions have affected the output against a baseline.
Great. Those are the sorts of marketers who would write a piece of content, add the links, put loads of research in it, and it looks really quite cool. But the type of marketer I'm talking about, from a data-driven perspective, will use intent and propensity data to predict how their market is going to react.
They will do regular checkpoints with the data they have, to do predictive marketing as well as proactive marketing, to make sure they're matching humans from accounts to the areas those accounts are actually looking at buying in. So they're much more intelligent in the way they run their campaigns.
They've also done split tests against baselines, and they've already made predictions about their outcomes. Therefore, as soon as the data points aren't aligned to those outcomes, they're stopping, they're failing their campaigns. They're looking at how to evolve their campaigns, or put them back on the right track, to get the results as expected, if not better.
Then they come to the QBR with: this is my baseline data, this is how I predicted the data, this is how I reacted to the data points to drive the right outcome plus the 10 percent extra, and this is what I predict will happen next quarter if I put X, Y, Z budget in.
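The "predict, checkpoint, stop or correct" discipline Beth describes can be sketched in miniature. The campaign names, metrics and tolerance below are invented for the example; real tooling would pull these numbers from campaign analytics.

```python
# Illustrative sketch of the "predict, checkpoint, kill or correct" loop.
# Campaign names, metrics and the tolerance are invented.

def checkpoint(campaign: str, predicted: float, actual: float,
               tolerance: float = 0.15) -> str:
    """Compare actual performance against the pre-committed prediction.

    If actual falls more than `tolerance` below prediction, fail fast;
    if it merely lags, flag for course correction; otherwise continue.
    """
    shortfall = (predicted - actual) / predicted
    if shortfall > tolerance:
        return f"{campaign}: stop (actual {shortfall:.0%} below prediction)"
    if shortfall > 0:
        return f"{campaign}: adjust (slightly behind prediction)"
    return f"{campaign}: on track (at or above prediction)"

# Weekly checkpoint against baselines committed before launch
print(checkpoint("webinar-series", predicted=200.0, actual=150.0))
print(checkpoint("abm-display", predicted=120.0, actual=115.0))
print(checkpoint("nurture-email", predicted=80.0, actual=92.0))
```

The key design point is that the prediction is committed before the campaign runs, so the checkpoint compares against a prior commitment rather than a number chosen after the fact.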
Those are the data-driven, predictive marketers. And yes, they will do all of that, but then that's human-generated content being added on and personalized with AI; they're thinking about things in a different way. And I think there are too many old-school marketers who say: actually, I can run the same campaigns I've always run with a few different tweaks, and that will produce the right level of output to hit near enough my target quarter in, quarter out. I'm on budget, I'm on time, no one's rocking the boat. There are just too many of those people out there who don't want to go: why would I make the effort to look at all of this data to predict what I'm going to do, even if it's the same?
Even if the outcome of what they do would be similar or the same, they don't utilize those data points. So I think marketers just aren't data-driven enough at this point in time. And that lack of a data-driven approach is alien to sales. Sales have been data-driven for years and years, and it misaligns with how sales talks, how sales thinks. They're saying: great, marketing, you've rocked up with some results and some numbers, but why did you do that? You can't just keep saying, because I've been working in marketing for a number of years. Let's liken it to a photographer. A photographer will know exactly why they've asked you to move your head two millimeters to the left, because they understand the trajectory of the light. Photography is science.
Marketing is science at its rawest point, and without data, without a data-driven approach, it's like just putting lithium in a bowl, adding water, and seeing what it's going to do. Some marketers will end up with a hole in their ceiling, and some marketers will end up with a great result.
Jon Busby: That speaks to someone who's been playing with lithium at home there, Beth. In some ways, as a developer, I couldn't agree more. I much prefer things when they're boiled down (we're going to use lots of science analogies now, aren't we?) to the individual elements, so you can say: I put in X and we get out Y.
Beth Redpath Katz: There is magic in marketing. There are times when you put X and Y together and you get B squared, and you're like, wow, that's great. But the data-driven marketer will understand why they got B squared, and will be able to...
Jon Busby: Or at least, I would say, go and look into why.
I think that's the biggest thing; it's about having a growth mindset. Let me ask this question about AI and data. We've been talking a lot about data here as well, and I think the two things are intertwined: you need good data in order to get a good output from artificial intelligence.
And I think we've really been talking about, let's call it augmented intelligence, because it's based off the human input here. But do you see the use of data, the warehousing of data, and the use of artificial intelligence as a way of leveling the playing field for marketers?
You used a phrase a few moments ago about a marketer sometimes relying on just their experience: I know this is going to work. Do you think those days are numbered for marketers who operate purely on experience and gut feel?
Beth Redpath Katz: I think there's always going to be a space for those marketers.
I don't think that element of marketing will ever die out. But I think the marketers who still rely purely on experience will become fewer as new marketers come into the field, and more of the experience-led marketers are the CMOs and the creative directors. And there's always a great place for experience. There's nothing like being the marketer who has put 15,000 people into an email distribution list and then sent the wrong email, or sent the email with the typo.
Jon Busby: We've all been there, we've all been there.
Beth Redpath Katz: We've done some horrific and horrendous things in our time, right?
And we've all learned. The majority of marketers in my generation have learned through failure and through their successes, and they've evolved how they approach marketing to make themselves better; it's only through their failures that they've really learned. I think the new generation of marketers will still do a little bit of trial and error, still fail and succeed and have that approach, but there will be an evolution towards more reliance on data in how they do things. And I think that comes down to how businesses have evolved from: okay, 1 percent of revenue goes to marketing, that's how we work out the marketing budget. Here you go, spend some money, try and make more money than you spent. Now businesses want justification, and I think that comes down to data.
Jonathan Sedger: I guess for me, the opportunity with AI and data is that it's not about removing that cycle of failure and learning. It's actually about increasing it, so that you're not making big failures but learning from lots of testing, which ultimately means, when you're testing something, one of them's failing and one of them's winning. And that's how you're learning.
Beth Redpath Katz: I think that works, Jonathan, when you have an environment where you can fail fast. And this comes back down to the differences between tech startups, or any startup, right? Because I work in the tech world, that's where I draw my analogies from, but any startup will be more agile, will have lean principles or Six Sigma principles or agile principles; they will be able to fail fast.
They will have close feedback loops, almost in real time. Now, when you're dealing with a large corporation, you may well be running a digital program with a fairly healthy budget to run multiple tests. You may well see there's an issue, but where you're then stifled in your innovation is that you'd have to tell your manager there's an issue.
The manager would then have to analyze the results, go to their manager, and potentially have that conversation at an executive level to be able to say: actually, yes, that's okay, you can switch that off, because there will be very little to no impact. I think that's where AI and large corporations don't match, because AI is fast. AI will evolve in a very speedy way. A colleague said something to me yesterday, and he used a train analogy. So, another analogy on top of all the other analogies...
Jonathan Sedger: This is the podcast of analogies.
Beth Redpath Katz: AI is like a runaway train. You just need to be on the train. It doesn't matter if you then need to crawl through the carriages to first class.
It doesn't matter, but you have to be on that train. And unfortunately, I think there are a lot of corporates that just don't have the speed and the agility to get on that train at this point in time. You will also see ancillary tech companies, who need to embrace AI or enable their services for AI, for whom the race is already finished.
If you haven't already started integrating AI into your service provision, you may well be too late for the early adopters, and you're left with the laggard community, which will be the behemoths, the big corporations; which is also a great business opportunity for that segment.
But the businesses that are growing, where you can see their share price increasing, are the ones that have already jumped on the AI train. They're already securing their place, they've already developed the technology, they're baking it in as standard, and they're the ones that will be ploughing ahead in that race to embrace AI. I just think, marketers, unless you are on that train already, you may well be too late. So the task will be: how do you get on that train, and how do you speed up?
Jon Busby: I love that analogy, and I think it's a great way to bring us to a bit of a close here: get on the train. That's a great way of phrasing it. This has been probably one of the most... I don't even really want to end this debate; we could just keep going. But for our listeners' benefit, and because I'm just as intrigued: you are one of the few marketers, I think, that have really been able to put this into production. Some of the things we've talked about today: not only were you able to enrich those leads, but to create content against them, without BDR follow-up, without humans in the loop.
Looking back, if you could give your past self some advice, what advice would you give?
Beth Redpath Katz: Do everything faster. Like, seriously, I did it fast; do it even faster. If you can get that concept out quicker, if you can bring the stakeholders in earlier, if you can get that collaboration and that consensus in at the concept phase, you will end up being able to do everything in such a better way. I mean, we have come across so many bumps in the road: compliance issues, regulatory issues, OneTrust issues, you name it.
We've come up against it, you know: system freezes, bug issues, everything. We've had to battle at every single step of the way. So I think it's: do everything as fast as you can; whatever you think is fast, it's not fast enough. And be prepared to have a very thick skin when you're doing this.
You will come up against scrutiny. You will become the, not hated, but very disliked individual, because you will be positioned as a disruptor. And I guess the last piece of advice to myself is: don't be afraid to be disruptive. In a market where everyone is the same, stand out, ask the question, go up against the status quo. Just because it's always been done that way, does it need to continue to be done that way? I think the biggest piece of advice I will carry throughout my career is: always question the status quo, and always ask that extra question of those stakeholders. Because you just never know when someone will go: actually, we don't know why we're doing it that way.
Jon Busby: That was going to be my next question, really, Beth. This market is moving so quickly, right? So now let's look to the future. If there's one piece of advice you'd give to your future self, what would it be?
Beth Redpath Katz: I guess: don't hold on to the idea that everything's going to change. Like I said 10 years ago, there are no new ideas in marketing, there are no new movements in marketing. Stop thinking that ABM is new. Stop thinking that the way we evolve our digital media is new. Everything is just a loop.
Just hold on to the fact that there are no new ideas in marketing; the new idea is how to apply them. I think that's the big thing.
Jon Busby: I've always liked the fact that the best new ideas are just remixes of older ones.
Beth Redpath Katz: Oh, yeah.
Jon Busby: So, to come to your point, it's not plagiarism when you research multiple sources.
Really, it's just about mixing some of those ideas in. Beth, it's been a real pleasure having you on the podcast today. We've touched on a huge variety of subjects here; I'm looking at my talking points, and we did go through them, but we went off on probably some of the deepest debates I've gone through in a long time.
So thank you so much for questioning our status quo as we've been going through today's podcast. And thank you very much for your point of view that this really is about augmenting a lot of what great marketers are already doing, which, here on the Tech Marketing Podcast, we completely agree with.