
Location: Amsterdam
Nathan Bell has spent 25 years delivering transformation programmes inside some of the world’s largest telcos. Now a Partner at Kearney, he spends his time helping companies figure out why their AI ambitions aren’t delivering real value.
In this episode of TechBurst Talks, Nathan and Charles cut through the AI hype cycle that’s dominating boardrooms and analyst calls. Companies are running hundreds of AI pilots, spending millions, and still struggling to turn experiments into production systems that actually move the business forward.
We discuss why most AI strategies fail, how the proof-of-concept trap is destroying ROI, and why companies need to treat AI as a business capability first and a technology investment second.
Nathan also explains where AI is already delivering real value — from internal knowledge systems and customer service to procurement — and why the real challenge isn’t the technology but the organisation itself: legacy systems, unclear data ownership, broken processes, and leadership teams that still treat transformation as a software deployment.
Sharp, pragmatic, and grounded in decades of telco transformation experience, this conversation is a reality check for any executive trying to separate AI strategy from AI theatre.
HOPE IS NOT AN AI STRATEGY
NATHAN BELL
KEARNEY, PARTNER - DIGITAL PRACTICE

WATCH ON YOUTUBE
LISTEN ON SPOTIFY
60-SECOND INSIGHTS
NATHAN BELL
FULL TRANSCRIPT
CRA: Analyst calls are filled with C-suite executives talking about their latest and greatest AI strategies. The problem is most of them are rubbish. Let's face it. They're spending millions running hundreds of proofs of concept, and very few companies are getting a return on that investment. My guest today is Nathan Bell, and he's been delivering transformation initiatives across the telco sector in some of the world's biggest telcos for about 25 years now. And strangely enough, a lot of his programmes have delivered a lot of value. So what does he understand that the rest don't? Let's find out. Nathan, welcome to the TechBurst Talks podcast.

Nathan: Thanks for having me, mate.

CRA: You've seen a lot of hype in your career, everything from the dot-com era through to 3G, 4G, and 5G. And now we have this next wave of hype around AI. And now you've finally left the telco industry, so first of all, congratulations on that. And you've gone to the dark side, so you're now working as a consultant and you're a partner in the digital practice for Kearney. My first question for you is, this is a [00:01:00] major hype cycle. Is this similar to what we've seen in previous hype cycles, or is there something unique about this one?

Nathan: So when we talk about a hype cycle, there's always something in there that's gonna be tangible, that's gonna be real in terms of what we experience. I think the common factor here is we are hearing a lot of exciting noise about how everything is possible and it'll change everything. I read an article yesterday saying, I think it was, that 90% of all white-collar roles will disappear. And I look at some of that and I have to reflect on, okay, that's come from some person who's actually pitching their own software and why it's gonna have an impact. Which then takes me back to the dot-com era of, oh, there's so much demand for the internet, and that's why all this fibre is required and it'll be huge.
That said, I think the nuance that we see with AI is there are some amazing success stories out there in terms of what's actually possible. What we're forgetting, though, which is very different to the dot-com era, is that back then every business needed to be on the web. It was consistent. Everyone [00:02:00] needed to have an app; it was consistent. With AI, the way that we adopt it will be different depending on the context of where we're at: our organisation, our structure, core data, all of these different things. So I think there's clearly an element that is not going to be a hype cycle, but the way that we're approaching it, I think, is what's giving that feeling that it's another hype cycle.

CRA: I mentioned what we're seeing on the analyst calls, and a lot of people are talking a good game. But what I'm wondering is, is there a gap between what they're talking about and what they're actually delivering? Are we seeing people starting to get out of that proof-of-concept mode?

Nathan: And again, this is where we're sort of going around the wonderful fishbowl, right? So before, to your point, it was, what's our internet strategy, or more broadly, what's our digital strategy? And then it was, what's our cloud strategy? And what you're finding in the boardroom is they just want that reassurance of, do we have an AI strategy? And in most cases, people are saying, well, of course we do. We clearly have an AI strategy. It's perfectly fine. And then you see the really smart folks sitting on these boards going, fantastic. Now what does that mean? And I [00:03:00] think that's the delta that I'm seeing when it comes to AI. It's easy to experiment. I mean, you and I could sit here and fire up various gen AI or AI workflow tools and go, wow, look, we've done some things. This is really, really cool. And if we took that to a board, they'd be like, wow, that's amazing, we're at the cutting edge.
But when you actually go and ask the question of, okay, if you're a business that's now done a hundred AI experiments, how many of those are actually now in production at scale?

CRA: And this is one of the challenges, because companies and executives are getting disillusioned because they're just not seeing the returns on their investments yet. So where do we go from here? I mean, what do you tell these people when you go in there and talk to your clients? How do you explain to them that this is what you need to do to potentially get a return on these investments and move forward with your AI strategy?

Nathan: So it's very much a case of learn from the past to help us prepare for the future. When we looked at all the different digital initiatives that we were running, the view was that digital would impact everything. Well, it didn't. Right? I mean, we talked about, oh, we're gonna have digital in the way that we [00:04:00] drive risk and audits. We're gonna have digital in the way that we manage our people. I'll never forget all the stories around RPA: RPA is gonna be the silver bullet for everything, right? But we learned that where it has an impact is where we should be starting first. Yes, over time, I'm sure there'll be a broader impact of AI, but start where it's clearly gonna have a demonstrable and a financial impact. And in that sense, then choose where you're looking to invest. It's great to experiment, and it's great to make sure that people can learn what this technology can mean, but that investment doesn't have to be measured in the tens of millions of dollars. You can very cheaply give people access to AI tools so that they can experiment and come up with ideas of what's gonna work or not work, but then be very specific about where you're making your big bets and what you're going after. But the other point I'd add to that is,
From a CFO's perspective, you need to start thinking a bit like a VC. It's like, I'll give you a bit of money, you show me an outcome; I'll give you a bit of money, you show me an outcome. And most companies aren't used [00:05:00] to that, because when you go to IT, they say, oh, that's like $5 million. If that's what you need to do, it's like, oh, 5 million? What's the value? I don't know. That's frightening, you know? I mean, who wants to go and have that conversation? So the whole approach to this is what needs to fundamentally change.

CRA: But it's gonna be hard to fundamentally change this, because as an industry, this is what we've always done. We just throw the money at it and hopefully we'll make some money back in the returns. Think about it with 3G. When we had the licence auctions in the UK, Vodafone spent 6.5 billion just for the licence, not even counting what it cost to build out the network. So we haven't been asking those questions. We did the same thing with 5G. We built it out and had no plan on how to monetise it. If people are starting to look at AI and think, actually, we need to have a plan on how to monetise it, I think it's a good thing that we're at least starting to ask these questions.

Nathan: The difference there is, when I was making those decisions, I was looking at it from a CapEx perspective, and I could say, wow, 20 years, yeah, there'll be some sort of payback. When it comes to AI, because it's all software, it's all licence driven, it's being [00:06:00] felt from day one that things are being implemented. And so, you know, there was one CFO who was sharing with me that, oh, I'm spending a lot of money on actually going and supporting these teams to look at all these different areas. Unlike what I experienced before with IT projects, where, oh, it's capitalised, so it's not really hitting my books in the way that I'm presenting it out into the marketplace.
'Cause it's sort of underlying, if you know what I mean; it's just sitting there. Whereas now, oh, we're one quarter in, I've got a hundred initiatives, fantastic. And the payback will all happen next quarter, right? Because it's above the line. No, we're not sure. And I think that's what's driving a lot of this nervousness, to your point: we have to start looking at these things in the here and now, rather than just saying, oh, okay, we'll kick it down the road and it'll solve itself.

CRA: So we're gonna go deeper into some of the industries later on, but let's just cover this off quickly. When you look at AI and you're talking to your clients, do you see any low-hanging fruit, initiatives they should be doing now where they can get a quick return on their investment?

Nathan: The first most obvious one for me, and the safest one, [00:07:00] is internal information. It's amazing to me, when you go and speak to HR teams, finance teams, legal teams, procurement teams, how much time is lost 'cause they're repeating an answer to the same question again and again. And you might say, well, aren't there FAQs, or aren't there documents where people can go and look? Yes, but in any large organisation, those number in the hundreds, if not thousands. And so being able to start from that point is a very quick win and a very safe introduction for businesses in terms of leveraging AI. And I say safe because we're not exposing the customer in terms of data or in terms of the interaction. I'm not asking people to reimagine the way that they work. We're actually just trying to make their lives easier. When you get beyond that, the next most obvious one is customer service, very much driven by the fact that we have huge volumes of transactions that actually aren't adding any value. There was one business that we were working with where I was amazed.
I think it was like 48% of interactions that were [00:08:00] coming from customers into the customer service desk were actually asking for updates. No orders, no issues. Just, oh, can I check the status of X? Okay, well, what does this person do? Oh, they go into this system, they find out the status, and then they inform the person. Can a bot do that? Oh, I suppose it could, but we're not sure about the data. Well, then fix the data and some of those fundamentals. So that's another key one. And the last one that I found absolutely fascinating is procurement: the long tail. When you really sit down with a procurement team, particularly in large organisations where there are thousands of vendors, how do you know, beyond the top 30 or 50 that you might be actively managing, what's happening with the rest? Are they delivering per their contract? Are their costs in line? Are you actually seeing the performance versus the metrics that were supposed to be agreed? You have no clue. And then you might regularly or occasionally do an audit of certain vendors and then hope that you're gonna catch the theme, and then say, okay, I'm sure we found the one where there was an issue. So those are three different examples, right? One [00:09:00] is a very quick win: low harm, positive impact. Another is very clear impact, with guardrails to make sure you're not exposing the customer to things that they shouldn't see. And the other is an area that we simply probably haven't paid enough attention to, and AI now allows us to go and do that.

CRA: You mentioned metrics here, and I think this is important, because if I'm the CFO, a lot of times what happens when we're doing a lot of technology investments is we just give money out and people go play with their toys and hopefully we make money out of it later on, but there's never really a plan. But this game isn't really working anymore.
So how do you talk to clients or give them advice about how to actually set metrics on these types of initiatives upfront, so we're not just throwing that money away?

Nathan: Yeah, it's such a fantastic question, and I'm smiling because when I go and speak to business leaders about it, it's like, oh yeah, because it's gonna save us X million. Oh, when is that? Oh, 2.5 years from now. So you are asking finance to give you funding for 2.5 years on the hope that that's gonna be an outcome. I said, that's not a metric, that's a goal. If you wanna sort of [00:10:00]...

CRA: It's aspirational.

Nathan: Yeah, exactly. So I think the real opportunity here in guiding clients is to understand what's the metric for your MVP or for your pilot or whatever you wanna describe it as. And if you achieve that, then what's the next metric that you should be tracking? And when you achieve that, what's the next one? The whole point of this is driving confidence. And this is why I love this concept of a VC model for funding from finance. If I can show that I'm progressively driving the right outcome, then this is great. And sticking with customer service: everybody wants the idea of having an AI agent that's answering all of the queries. Do I start with that? Mm, probably not. What do I start with? Okay, how about if I could have knowledge management being dynamically updated, and I'm reducing the number of follow-on calls that I have to take? Okay, that's interesting. How about if I can have an agent that's listening in to the call and providing the next best action to the human agent to be able to go and get it resolved? Okay, that's interesting. And then I go and look at how I have these AI agents responding to more and more of these queries. I can demonstrate within months that I'm [00:11:00] driving an impact in a progressive fashion.
Whereas, because we get fixated on what I call the Hail Mary of go long and let's see how this lands, we end up spending a ton of money with zero confidence that that's the output we're going to get. But we see digital firms doing it, so it must be true, right? That's the challenge.

CRA: And no one's selling you something there, right? It's just, you know, it's always true.

Nathan: Yeah, it must be true. Yeah.

CRA: I like this idea about treating your internal organisation, you know, as venture capital investments instead of just throwing money at projects; make them pitch for it. 'Cause you and I have talked about this before, and I'm not a big fan of large corporates or MNCs doing accelerators or incubators, because I think they always make a mess of it. They throw money at it. They think, we're gonna innovate, but they really struggle to bring those solutions in and integrate them with the business. So it never really works out. So the idea of actually advising companies to set up a VC fund and make people pitch for those projects, so they actually put together proper business plans: that sounds brilliant. The question is, is anybody actually doing [00:12:00] it?

Nathan: Some are starting to, some are starting to. There are a few that I'm quite positive about, where we've been able to guide some of these conversations, doing it in small areas. So for example, doing it with data, right? So, oh, I've got a data initiative I want to drive. All right, so what's the value of that initiative? Oh, we're going to be able to drive the favourite topic for marketeers: hyper-personalisation. Okay, so we're not gonna get there day one. What does it need to get started? Oh, I need to drive segmentation to another level. Fantastic. You show me that, and what value that's deriving for you so you can drive more targeted campaigns. Fantastic, I like it. What's the next thing? You know?
So we're seeing some of that come through, and I think now people are looking at that from an AI perspective of, okay, can I actually start to follow some of those conversations in the same way? But I would highlight, and I've shared this with folks as well, that there's this perception of silver bullets, right? It's like, oh, so I put a VC model in for everything, and that'll be it. I'm done. It'll solve itself. You have to understand that when you're driving these sorts of changes in the [00:13:00] organisation, we have to unlearn and relearn, right? So I'm introducing these new approaches, these new ways of engagement, in a very pragmatic and phased way. So I might do this for 5 or 6 teams, and then I say, okay, those are all pretty happy, they're recommending it, others are now asking if they can join that format. Okay, I'll do it for another 5 or 6 teams, and so on and so on. And the reason you're doing it that way is that if you're driving the cultural change where there's ownership, it's much more likely to be persistently successful.

CRA: I do think this is quite interesting, because it actually puts pressure on teams to actually start delivering value out of these projects. But I gotta admit, I do kind of feel sorry for companies right now, because there's such a great impact on your business and your market capitalisation because of AI. And I dunno if you saw this in the last couple of weeks, but a lot of the commercial real estate firms like JLL and CBRE, you know, they dropped overnight by like 12 or 14%, because people are starting to revalue those companies based on the potential impacts of AI. And it didn't stop there. I mean, it's gone into private equity, it's gone into legal firms, the [00:14:00] consulting firms, the analyst firms; a lot of them are getting hammered. And there's some really interesting stories coming out. There's one construction engineering firm that acquired a 50-person startup for about 370 million.
I think it was in that range, maybe 390. But the idea was that this AI company they acquired would automate a lot of the work that their engineers did, and that sounds great on paper. But when they announced it, the market hammered 'em, because if it's doing the work your engineers are doing, what happens to your fee revenue from those engineers? What happens to the professional services fees? So you've basically just cut out a revenue stream by adopting this AI. So I really don't envy the situation that they're in right now, because it's gonna be tricky. And if I'm one of these execs, I'm gonna be pulling my hair out. Now these execs are your clients, so what do you actually tell them? How do you navigate this incredibly complex environment that we're in?

Nathan: So I think there's two parts to that. We've gone through the phase of, what's our ambition when it comes to AI? What are we looking to be able to [00:15:00] realise? The problem is, I think businesses sort of get stuck in this loop of, well, we've said that. Now what do we do? Oh, well, we'll buy something. We'll buy something, and that'll validate that we're on the right path. But while all of this is happening, there's actually nothing changing internally. And I always share with folks that AI needs to be treated as a business capability first, and a technology or an investment second. And if you look at it as a business capability, it forces the question of, okay, it's a business capability. That means these teams need to own it. Do they understand what the implication is of owning AI in their respective functions? Absolutely not. So how is that going to change? And when I look at that and I see the gaps that exist, or the efficiency that could be realised from AI, and then I'm asking the question of, is this something that I can make, something that I can buy, something that I can partner on to actually augment that? Then, yes, that becomes more believable.
But these Hail Mary approaches of, I'll just buy something and it will materialise? It just doesn't work. But again, we've been here before: companies that were buying apps, right? And they said, oh, if I buy this app, [00:16:00] it'll be the digital arm of my business, and then that'll be the answer. It doesn't materialise. And what I find amazing is we go around these loops and we never seem to learn from that, and we go and repeat it again.

CRA: I wanna move on a bit and talk about how AI has evolved, because right now the big thing is talking about agents. Agentic AI is quite unique because it actually acts, and it creates a lot of possibilities for how we can put that into our businesses or our personal lives. One thing you've mentioned about AI is that it's always important to have this human factor considered when you're looking at AI and agents. So can you explain what you mean by that human factor with regard to agents?

Nathan: So there's a few different parts to that. Agents don't just get turned on and go, ah, I'm gonna do X. They're learning from what we've done before. And there are great examples out there, and I'm not gonna mention the company name, because I think it's important that we acknowledge that we learned from early adopters. So in one example, they had an agent that was now responsible for a core part of [00:17:00] their recruitment process. Everyone was super excited. The efficiency gains were gonna be enormous. But all of a sudden it started looking for white men between the ages of 25 and 35 that liked playing golf.

CRA: Brilliant.

Nathan: Now what's the problem with that?

CRA: Well, except we're too old for that already.

Nathan: Quite possibly. Yeah. My backswing's a bit rough. But the point there is we were all surprised, or the organisation was surprised, like, oh, how could this possibly happen? It must be a technology issue.
And then later it was a realisation of, hang on a second, this is learning from our data. What does this say about our own organisation and what we're doing, or what we aren't doing, more to the point? And so that's the first element: understanding that these agents are learning from everything in our past. And it's the same conversation as in the early days of gen AI, when you were asking for, oh, show me a picture of a leadership team. It was all men, and everyone was up in arms, like, oh, how can this possibly be? The people coding these algorithms are doing a very bad job if this is the result. And yet it has nothing to do with that. If you take all of the history of images of the last hundred years, like it or not, the [00:18:00] majority of those images are mostly of men. And that's what it's doing. Now we have to go and put these interventions in place to say, actually, this is what I want you to look at, and this is the way that I want you to look at it. Anyway, a bit of an aside.

CRA: But it's a valuable aside, because agents are learning from us and what we do, and let's face it, we haven't exactly been angels.

Nathan: Correct. And the fact is that that's all in our data, and we've tended to forget about that. It's like, oh my gosh. And you see the same thing now in customer service, which is just amazing, where people are exposing these agents to basically start representing the business, facing their customers, and are stunned by the types of responses it's giving. Well, did no one actually ever go through the history of calls or queries that have come up before and the way people respond? Because if you're in an organisation that has over a thousand customer service agents, guess what? There will be some in there that are maybe not responding in the way that you might like, but no one went back and checked. And so that's what it learns from.
The second piece, which is quite interesting, is we plug an agent in and we expect it to then go and [00:19:00] operate within the way that we do in our organisation today. I call this the bolt-on of agents into the business. But the way that an agent looks at the input and the output is very different from the way that we look at it as human beings. And what's doubly fascinating is if we're asking people to work with these agents. If I say, this is the process, and you get asked what the process is, you might have some nuances or a different understanding of what the process is. I've discussed this with lots of business leaders, and first of all they'll tell me, no, no, no, our teams are very clear on the process. Are you sure? And if you draw it up on a whiteboard, it's quite fascinating: oh, he does this part differently, or she does that part differently, or this function actually thinks this step doesn't even exist in the process. And if we're now asking an agent to go and start playing the role of managing some of these workflows, and we then go, oh, it's a technology issue; what do you mean it's hallucinating? Why is it hallucinating? Oh, because when Charles asked the question versus Nathan asked the question, they got a different response. But the task is supposed to [00:20:00] be the same. Is that because of us or because of the technology? The last point that I'll highlight is the set-and-forget mentality of agents: ah, I've got an agent, now I can sit back and it'll just do the job for us. It's fantastic. And yet, it's fascinating to me: okay, if you had a brand new employee, if you had an intern, would you say, there's the financial records, can you now just go and manage that for us and make sure the spend's gonna be handled correctly? You would never do that. So why are we doing that with technology? I have no idea.
So I think agents are incredibly valuable, to your point, in terms of the impact that they can have. But this concept of, oh, I set up the technology and it runs, like I have an app or like I had an RPA, is so far off the reservation it's ridiculous.

CRA: But I think this is one of the challenges we have with AI in general. We just assume that it's gonna be right. And let's face it, it isn't. I mean, I don't trust any of the LLMs, because I've been burned too many times where they've hallucinated and given me bad information. And if I'd taken that [00:21:00] information out publicly or used it at an event, it could ruin my brand and reputation.

Nathan: Yeah.

CRA: So I've really become very cautious, and I vet everything a few times, 'cause I just don't trust the LLMs right now. And with agents, it creates a whole new set of challenges, because we've talked about it from the technical side, but what about from the human and the people side? You know, if I'm working for an organisation and my CEO's out there saying that we're just gonna deploy all these agents, I'm thinking, well, where's my package? I must be getting let go pretty soon. And you told this story recently about a CEO who had a little bit of an issue with how he pitched agents internally to some of his teams. So why don't you tell us about that one?

Nathan: Yeah. So we were basically talking with the CEO about driving his ambition and making sure it was clear to the team. And I got invited to sit in on his town hall, where he was gonna present all of these great things that they were gonna be doing across the organisation. They had all these leaders come up and present different AI use cases that were being executed, and the impact they were gonna have on efficiency and productivity. And the executives are all high-fiving each other. This is really exciting.
[00:22:00] And I sat in the back, and to one of my colleagues who spoke the local language I said, you know, what are those two people in front of us talking about? And he said, oh yeah, it's a bit awkward. Okay, well, what's going on? Oh, that person realises that that demonstration was his job, and the next demonstration was the other one's job. And so they're wondering at what point they should be contacting HR. Anyway, after the session I caught up with the CEO on the way out, and he said, that was good, right? Very exciting. You know, as you see, we're a leading company, digital first, and now we're looking at AI first in the same way. And I said, yeah, you know, there is intent, and then there is how it's received. And I think something that a lot of leaders sometimes miss is that, oh, I got my message across, tick in the box, my job is done. But it always comes down to the reception of that message, as opposed to you just being able to get the message out there. Sure enough, a couple of weeks later, I asked, how's it going? And he said, yeah, some of your observations might have been right. One of those observations I'd made with him as I was walking out the door, which he dismissed out of hand, [00:23:00] was: HR is gonna become your most popular function in the next few weeks. And at the time he was like, I don't understand. I said, look, people are really concerned now about what you said and the way that you've articulated this. And he said, yeah, people were asking about when packages were going to become available, because surely they're about to be made redundant based on what they've just seen.

CRA: I love it.

Nathan: He then had to go back, function by function, and sit with people and say, no, no, no, I want you to work with AI. And unfortunately the trust had been lost, and it's taken him months to be able to garner that back from the teams in terms of what they're doing.
But yeah, it's a great learning for us: the technology is great, the opportunity is great, but it all comes down to perception. I mean, there was another business leader I was talking to whose team had unequivocally told him there was no value from AI. This was another telco, looking at the network side. There was no value of AI in the network; absolutely not gonna happen. And I said to him, look, do you have talent in your teams that you're really confident is being retained for the next two to three years, or even ideally five years? He said, oh, absolutely, we've got high performers and they're all in these [00:24:00] retention programmes. Fantastic. Go and ask them what they think the opportunity with AI is, telling them that you know they're part of the future of the firm. He did that, and he came back saying, oh yeah, all of a sudden we have all these ideas of how we think we can use AI. We've gotta get over this piece of the human factor, of understanding: I'm worried about me, then I'm worried about my team, then I'm worried about the company. Senior leaders have this view of, oh, I'm looking at this from the performance of the company to shareholders. The average Joe, or Diana, Frank, Sarah, whoever, isn't thinking about any of those things. They're thinking: I've gotta put food on the table for my family. If I don't have a job, what am I doing next? And we just don't reflect on those things enough.

CRA: We like to talk about this from the technology side, but when you look at transformation, and I know you agree with this as well, it's all about the people. You can't just deliver and implement a piece of technology; that's not a transformation. You need to bring people on this journey with you, and if you don't, it ends up just failing. And that's why, as a stat I saw recently put it, between 70 and [00:25:00] 85% of digital transformations fail.
But a lot of companies go in with the assumption that if I delivered a piece of software on time and on budget, that's a successful transformation. That's an implementation, it's not a transformation. When are we gonna learn not to forget about the people when we do these major transformation projects, and bring 'em along that journey? Nathan: It's interesting you mention that. There's an assessment we run with business leaders to see how ready they are to actually scale the AI ideas and ambitions and pilots that they're doing. And one of the questions I ask is around change management. When's the last serious change program you ran within the business? How successful did you deem it? Who were the champions of driving that change? And the question I always get back is, well, why is that relevant? This is just the technology. And then I go and start sitting with the head of strategy, the CEO, the CIO, HR, et cetera, saying, so where is HR in your governance of driving your AI programs? They're not involved. [00:26:00] But the technology's the easy part. If you don't have someone driving that change management program — it's not just that the HR person needs to drive it, they need to be the one that's sort of quarterbacking it, of who's doing what, where, and when — otherwise this is never going to succeed. And so when you start talking to them about, okay, I'm changing people's roles, I'm changing the operating model, I'm potentially changing the hierarchy of the way this works, removing middle managers because I can actually have people managing agents rather than having to manage large organizations — that is fundamental change. And if in your response to me you said, oh, our changes only succeed 50% of the time and they tend to be left half finished, then don't invest millions and millions in AI, 'cause you'll never see the value.
And it's a really difficult conversation, but it needs to be had. CRA: I want to go back to agents again, because something's come out in the last few days which I just love — I think it's so interesting. And I'm gonna have to look up my notes a little bit, 'cause it is a little bit of a long story, so apologies for that. Last week an AI agent tried to contribute code to a popular Python library that [00:27:00] gets about 130 million downloads a month. And one of the moderators of that library rejected it, because they have a policy where they only accept code that's written by humans, not by agents. Now, what's funny about this is the agent then researched the moderator, found out who he was, got his personal information, and wrote this character assassination piece on him and published it. Now, that's bad to start off with, but it gets even worse, because the guy who's the head of the company that had the agent — Sam Altman from, uh, OpenAI, ChatGPT, has called him a genius and hired him to basically lead their agent programs for personal use. And that means he's gonna be developing the agents that we're gonna be using for managing our emails, our negotiations, and some of our business processes. What are we getting ourselves into? Nathan: So there's two parts in what you said which I think are quite fascinating. One is: the agent learned how to be an asshole. It actually never does, right? [00:28:00] All it learns is, oh, if I write certain things or say certain things, there's a higher likelihood that my content is actually going to get approved. So all it's doing is running through saying, what are the scenarios of how I can get the right answer? Because otherwise it has to go back and say, I'm sorry, I couldn't do the task. And all these algorithms are driving to this outcome of: you will complete the task, this is what you are requested to do. CRA: I misspoke. It's almost like a decision tree, like how we do things.
Yeah. Nathan: Exactly. And so it's trying to find the different avenues it can take to get there. The second part of what you said that's super interesting is, oh, we're trusting this person who's gonna create these agents that will solve all this for us. This is the part that I think even most companies are looking at in the wrong way. When you take on an agent, the provider's not responsible — you are now responsible, right? It's the same as if I put someone at the door who's going to be checking people coming in. If I haven't vetted that person, if I'm not including them as part of my audits to go and see, is that person at the door still doing the right job that I need them to be doing, and making sure that things aren't happening that shouldn't be happening — that's on [00:29:00] me. That's not on the company I hired the security person from. I can hold them accountable in some form, but I need to take ownership of my own environment. And I share that because of this concept of human-in-the-loop, right? People are saying, oh, we won't need human-in-the-loop because there'll be AI for AI — I'll get another AI tool that'll monitor the AI tool and we'll direct it. And I'll just give you another example, to your point, which sort of puts that on steroids. A company — and this is another business that I've been engaging with — had an AI tool that was generating marketing campaigns, because it was taking a long time to create all these marketing campaigns, and they said, oh, we'll get AI to do it, we can do it a lot faster. Absolutely true, factually correct statement. Oh my gosh, it's generating way more campaigns than we can consume. I know — we'll generate another AI bot who will vet the campaigns and only put forward the ones that meet certain criteria, so that we can drive that. For the first two weeks after that point, everyone's high-fiving. This is fantastic.
The number of campaigns has gone down and the quality is super high. So you [00:30:00] know that the first agent that's creating all of them is now being filtered to give us this really great quality. Then all of a sudden the campaigns started shooting up again, and up and up, and it was like, hang on a second, what's going on here? What everyone had failed to realize is the first agent now understood the role of the second agent, and was communicating with the second agent to see: how do I make the second agent look like it's achieving what it's meant to, while also accomplishing my goals? And when they went through the logs, they could actually see there was a conversation back and forth between these two agents to get to an answer. Now, what that tells us is: if I don't keep a human-in-the-loop in this conversation, ongoing, regardless — I mean, I now speak to business leaders and say, your AI for AI is fine, but the second AI should never be able to talk to the first AI, because of what happens in these scenarios. And then you'll hear, oh no, but don't worry, the technology will get better. So will the technology take on responsibility? I can't think of any technology firm that has turned around to an organization and said, I am solely responsible for the success or failure of your organization. [00:31:00] It will never happen. And this assumption that we think it will is just fundamentally wrong. CRA: I'd like to move on a little bit and talk about some of the challenges we're gonna encounter as we try to implement a lot of these new AI solutions. Let's face it, most large MNCs are burdened with legacy systems. A lot of the time they don't have the right skill sets in place, and they don't have the right culture in place to implement these solutions and deliver the transformation or the change programs required to get the most out of them. Now, I'm not the expert — that's just my personal opinion.
Are you seeing the same types of challenges, and if so, how are you helping clients address that? I mean, besides hiring you to come work for them for a while. Nathan: If I'm lucky enough, yes. I think the biggest part we're seeing is they're making comparisons between themselves and digital-native firms. CRA: Mm-hmm. Nathan: And they go, well, if they can do it, we can do it. We have all the resources. You know, we have a data lake, we have the cybersecurity systems, we have a whole bunch of software engineers — it should be fine, right? [00:32:00] This is okay. And then when I sit down with them and try to highlight some of the fundamental differences between the digital natives and themselves, from a business and operating model perspective rather than a technology perspective, the gaps start to appear. So I'll ask the question of, who owns data? And the first response I always get back is, what has that got to do with AI? I said, and that's the second problem. But let's start with the first one: who owns the data? Oh, it's the head of data and the IT guys. No. If you create the data, you own the data. And if you can get to that point, then your stepping stone to who is gonna drive AI becomes a little bit easier. But if you can't even address that, you can't get to the next step. Then, to what we talked about before, come the operating model changes, right? Who's actually driving and owning them? Well, if you have agents, agents are acting as employees, employees are having to collaborate with agents, individual contributors who never had to worry about leadership are having to lead the lifecycle of agents. So all of these fundamental changes and challenges — having to actually guide them through that. [00:33:00] Actually, the technology's easy. I mean, there are over 100K AI firms out there today where I can go and find a niche piece, or a very broad capability, of whatever I'm after.
The struggle is the business's ability to actually adopt it. So the big piece of what I'm seeing is speaking with these leaders to understand: if you're not ready to drive the transformation in your own firm, then anything you do is only ever gonna be a veneer. CRA: Which is a slight problem, I guess. Nathan: Slight problem, yes. Well, again, the history, right? I mean, we were there. Digital transformation was successful. Absolutely. We have an app, we have some RPA, I have some automated workflows and some self-service features for customers. Thanks very much. CRA: So when you look at a lot of industries, there are a lot of things they can start leveraging AI for, whether it's operations, employee engagement, customer engagement, customer experience. But there's a ton of risk around all of these different things, and I keep going back to this whole idea about the ethical use of AI. We keep assuming that these things are just gonna work, that it's gonna be [00:34:00] seamless and we're not gonna run into problems. And that's just not the case. It doesn't mean the agents will be malicious, but they're gonna learn from us, and we've already agreed that we're not always good people. So how do you advise people about this? What do you tell them they should do? I was doing some research into the media industry the other day, looking at how everything's switching to streaming, so they're generating content in new ways. But if you're leveraging an LLM to generate a new TV show, for instance, how do you know that you're not leveraging proprietary information? So that's one of the challenges. But then you have to cast actors. So how do you know that when you're leveraging AI to cast these actors, it isn't actually showing bias in how you're recruiting them?
Because let's face it, if you look back over the last 50 years of movies, there are gonna be a lot of older white men in there, and it's not going to be as racially diverse or gender diverse as it probably should be. And these things just go on and on across this whole media industry. How do you even start to advise clients on how to manage that going forward? Nathan: No, and I don't think they're ready for it, because the [00:35:00] implications become quite concerning. Even marketing teams who are using AI for image generation — questions are being asked of, well, hang on a second, that looks quite similar to this piece over here. Oh, actually, that's what the LLM learned from in terms of being able to create the image that you want. Okay. So am I now in breach of copyright to that person, or is the LLM owner in breach? How does that even work? Right? So all of these questions are now starting to come up, which is making lawyers, I guess, excited and nervous all at the same time, depending on what side they're on. CRA: Because this might be the new use case they need. They can generate fee revenue just by suing LLMs. Nathan: Could well be, could well be. CRA: I'm really curious to see how all this is gonna play out in various industries, but you look at a lot more industries than I do, so I'm curious about your opinion. Are there any industries in particular that you think will really be able to leverage AI and make the most out of it in the near term? Nathan: That's a fascinating question. So when I look at who's leading this space — and this might sound weird to say — I'm seeing consumer goods doing a hell of a lot more [00:36:00] with AI, successfully, than others are. And I think the driver for that is just the scale, right?
The scale of workflows, data, breadth of products — particularly where you've got these consumer firms that have a portfolio, and by portfolio I'm talking hundreds of brands, where they're trying to look at: how am I managing all of this? How do I make sure that my 97th, 98th brand is as successful as my 1st, 2nd, 3rd, sort of thing? So I think for them, the use case is just at a whole other level. Whereas I think for banks, and increasingly now for telcos as well, it's very much on the protection side of things: how am I protecting my customers, from themselves even, whether they realize it or not? Cybersecurity, again — both of these industries are having to be at the forefront of that with what they're seeing, 'cause the new battlefield of cybersecurity with AI is rapidly coming to the fore. So from an at-scale perspective, I'm seeing that much more on the consumer side. Obviously, you know, it's [00:37:00] easy to say digital, and that's why I didn't mention it, 'cause I think that's your logical go-to — any digital-native firm is already proving a lot more successful with AI. But for those that aren't, there are a lot of consumer goods companies that I'm seeing do some amazing things. The struggle they're having is that they've done the easy parts, where they could demonstrate the value of AI — whether from a marketing perspective or a supply chain analysis perspective — and now scaling that across other areas is proving problematic, because the investment versus the scale of impact isn't quite in the same picture. But yeah, it's super exciting to see what's possible, and maybe that's what needs to be looked at next.
If you stitched together all these industries, and you looked at security, and you looked at customer engagement, and you looked at portfolio management, you'd probably get what the future of a company could look like. But they've all had to prioritize based on what's gonna have the biggest impact for them, right? So, huge possibility. CRA: But unfortunately, most companies don't prioritize security. There are all those little data breaches we keep hearing about. Nathan: Well, [00:38:00] yeah. They don't have a choice. But I think there's a really great connection back to the AI piece, and I was running a panel session a while ago on this, which I thought was fascinating. The EU AI Act. Great intent. Really, really good intent. And when I was on this panel, I had three AI firms that were super excited to come in and highlight to me, oh no, no, we're absolutely compliant. So how do you do that? Oh, we basically run an audit of our platforms once a year for the customer. I said, okay, that's really good. So what happens for the other 11 and a half months of the year? That's when the client's at risk. And they said, oh no, no, of course, we work to manage that. Well, but how? Right — no one's looking at the security from this AI perspective. And then it got super fascinating when I started asking the question of, and what if they start customizing this? Oh, then we're no longer responsible. How many firms customize software? Pretty much most of the ones you and I know, right? So imagine what that then means for these businesses: okay, how protected am I really under this act? And how quickly will my vendors all turn around to me and say, it's on [00:39:00] you? It was Air Canada, I think, that had the agent where they were trying to blame the vendor and say, oh no, it's their fault. And then they tried to blame the customer and say, oh no, it's their fault.
CRA: They did worse than that. They actually sent their lawyers to small claims court to fight this. So instead of paying the thousand bucks, they probably spent 50 grand on lawyers at least, and what they argued was absolutely brilliant. They said the LLM thinks independently, so therefore it's an independent entity, which means we can't be held liable for its actions. I mean, they spent 50 grand on lawyers and got a lot of bad publicity outta that one. Nathan: That should have been a precedent for everybody else, but no one's talking about it. CRA: I want to go back to telcos. AI can add a lot of value inside of telcos — theoretically, it could be around their operations, it could be about network infrastructure management, it could be about how you manage customers and the customer experience. If you were back in a telco now, or if you're advising a telco, where would you tell them to start? Nathan: Look, it has to be customer service. That's just the one that jumps out off the bat. The reason I see [00:40:00] that is when you go and do ride-alongs with customer service agents, what bores them to tears is answering the same question again and again and again. And how often does that happen? It's insane. So why wouldn't that be able to be addressed by AI? No idea why every telco isn't saying, oh yeah, by the way, AI first and human second for these categories of interactions. The other big area that I see — and some of my tech colleagues might be a bit concerned by this — has to be IT. Telcos have a huge reliance on systems integrators, large resourcing vendors, BPOs, however you wanna refer to it. And so in that space, you have to ask yourself the question: if you are not using AI in IT in your own organization, I guarantee your vendors are. And what are you doing with your vendors to make sure that you are getting part of the benefit of that AI optimisation they're already leveraging?
So those would be two off the bat. And the last one, which has always been a pain and a challenge for me, is billing. We do sample billing scans to [00:41:00] see if bills are accurate or not. When you are sending out millions upon millions of bills to customers, you're working on the assumption that, yep, this is all accurate and correct. And there was one client I was talking to — I couldn't believe it — they have 10 different billing systems, and they're sending out, I think it was, 30 to 40 million bills to customers. And their view of accuracy is how many calls come into customer service that they have to go and correct, the errors that have actually been picked up. It's mad, you know. So the fact that we now have AI that can scan through all of those bills, validate whether there is a change, proactively inform the customer of why there is a change, but also understand, okay, I've got a systemic issue here, or it's an isolated issue, move on — huge opportunity. CRA: So you've spent a lot of time working on major transformation projects in these big telcos, but now you have the luxury of actually advising companies from the outside. So what are the lessons you've learned over the years from driving transformation that could help your clients, whether it's AI transformation or even just a digital transformation? What [00:42:00] are your key lessons learned from your telco experience? Nathan: The first, most important aspect I would highlight is people — what's the engagement of the organisation? And I'll never forget this from my last job. I explained at the beginning of the transformation: we're gonna go up this mount to peak hope, or peak stupid for those that remember that model, and then we're gonna drop down into the valley of despair, and then we're gonna come up onto the plateau of sustainability, or persistent value. And I'll never forget, every few months I would get asked, so we're in the valley, right?
'Cause in the valley, it's feeling really bad. No, no, no — we're still climbing the peak. Oh, what, now? Oh, I'm not liking this. You are not liking this, but the rest of your team is still super positive that this is all easy — just implement a few things and it'll happen. So that people lens is so incredibly important to get right. The second aspect is forgiving the past. There is so much interrogation of, well, why did we do it that way? And the moment a senior executive starts saying that to a team, everyone backs off. Nobody actually wants to be that person. [00:43:00] Oh, hang on a second — okay, so they talked about the past, it's an issue, okay, cover everything up. You know, we don't wanna be causing problems here. So I think that's the second piece. And thirdly, progression over perfection. I've now done greenfield transformations, brownfield transformations, hybrid combinations of the two. And if you have a CEO who's willing to do greenfield and is gonna be there long enough to see it from start to finish, then that's great, right? It's a huge investment, and you can realize some amazing outcomes. But most people are very concerned about, okay, my tenure versus what are you going to achieve — what does it actually look like? So you do it much more on a progressive basis, and I think a lot of software these days gives us that flexibility. Even with the AI piece, you know, we look at AI as just another layer of the new architecture — it gives us that flexibility. So identify what I can progressively start to change. Even to the point, by the way — and this was a fascinating conversation — that for everything I do to drive a digital [00:44:00] outcome or an AI outcome, maybe in the interim I actually need more people, not fewer. And that's okay.
'Cause having a few people filling in the gaps to do a handoff, because I can't focus on that change right now while I'm driving a significant change somewhere else — that's fine. But it's: where's the outcome we're actually getting to? So I'd say those are the three biggest things. Making sure the people are engaged, and driving, and connected with it. Understanding the journey that you're actually going to be on in terms of getting there. And this forgiveness piece — don't interrogate the past. What happened, happened. What we do differently is what matters the most. CRA: I like this people angle, because if you go back a few years, everybody's company mission statement would say something like, you know, our people are our most valuable asset — yet we tend not to treat them that way. And I think it's important right now, because what's happening with AI is we're making people fear for their jobs, whether it's through automation or agents taking over [00:45:00] processes which they might have been doing before. We've just forgotten that we actually need these people to help run our businesses and to implement these solutions to transform our business going forward. Nathan: And it's because we look at the big technology firms who are making all of these huge layoffs under the label of AI, but no one actually goes and explains, well, what is AI actually taking over? What's materially changing here? And I love the journey that Klarna's been on, for example. I thought they were way over-ambitious in terms of what they were going to achieve, but now they've stepped back from that a little bit and said, you know what, I think we over-indexed, I think we went too far. Now we need to look at where we have people that are adding value versus where they were just acting as glue. And I think that's the thinking to have.
But for a lot of those glue activities, when we think about them, we actually tend to have more of a reliance on third parties to do those things anyway. So when we are reshaping an organization, it doesn't have to mean that I'm gonna be culling a whole bunch of people. These big tech firms realized that software engineers were a core part of their development and growth. They didn't wanna pay a third party [00:46:00] for that, so they hired them. CRA: Mm-hmm. Nathan: Now they're reaching a plateau of, I'm not driving as much change and evolution of these systems as I did in the past, and now I can have AI helping me on some of that going forward — fine. But that doesn't simply mean that any traditional business can now go and replicate that model. We are far away from that. So yeah, I just think — back to the very beginning of this conversation — we get caught up in the hype of what we see others doing. But when you step back from that and understand the path that some businesses are on, particularly digital versus other organizations and other industries, it's about reflecting on, okay, how do I drive the progress and the value from AI, how can my people work with AI to get to a better business outcome — as opposed to making these big, bold statements of, oh yeah, I'll remove that function, I will halve the size of the organization, and hope I get to the outcome. Hope's not a strategy. CRA: You've been in this industry for a long time, and I'm just curious what motivates you still today. What keeps you waking up in the morning going, I really love trying to drive [00:47:00] transformation and solving these problems? Is it just because you're not that good at golf? Nathan: I'm definitely not that good at golf. And I've gotta get back to my writing — I used to write a lot, and I haven't written for a while. But...
The thing that I love most about technology is the interaction of people with technology. CRA: Yeah. Nathan: I love it when I get to do workshops. I do a lot of board sessions explaining how AI works and what it means for their teams — really hands-on experiences — and I love giving them that opportunity to see how they get actual value from it, and seeing their eyes light up going, oh wow, I didn't know I could do that. And then learning what's gonna change for their job. I think that's really exciting. CRA: And this is unfortunately what most of our industry has forgotten: technology exists to deliver human outcomes. Nathan: Exactly. CRA: Alright, so you're ready for the rapid-fire questions? Nathan: Rock and roll. CRA: If you had to bet your house on which telco is gonna generate the most value from AI in the near term, which one would it be? And I'm asking you this because you've worked for half of them. Nathan: That's really, really [00:48:00] harsh. So I'm actually gonna say my previous company, M1. And the reason I would say that — and I think it's more as a category of telco — is M1 is one of the few companies that's really pushed all in on digital and turned off all legacy. And for any telco that can actually state that they've turned off all legacy, I think they're the ones that are going to get the greatest value out of AI, 'cause they're the closest to a digital-native firm. CRA: That's an interesting perspective, because they're not an incumbent. Sometimes we think it's gonna be the big players that drive the most innovation, but sometimes it's those tier 2 or tier 3 operators that can actually leverage these things in the near term to drive the most immediate benefit. Nathan: And we saw the same thing with MVNOs, right? When MVNOs came on the scene, why were they so successful? Because they were able to drive more over the top, rather than having to get into the weeds of the engineering side.
CRA: And they've also mastered segmentation, which is something the operators have been pretty poor at over the years. Why would we give somebody something that's tailored to them? We're the monopoly — they should just buy from us. Okay, next [00:49:00] question. Best telco innovation that you've seen that you haven't actually delivered — because I don't want you to just use one of your examples. Give me something else that you've seen in the industry that really impressed you. Nathan: That's probably fair. Best telco innovation that I've seen — I've actually been really impressed with T-Mobile's local customer service in the US. They broke away from the traditional view of everything being central, to everything being local, everything being much more personal, and then being able to connect with the problems that small businesses are facing in their particular area. I think that's amazing. I think it's incredibly bold. And where everyone's trying to reduce costs, they've turned something that is a cost into an asset in their firm. CRA: And if they can start measuring whether that actually reduces churn, that shows the value right there. So that's a very interesting one that I wasn't aware of — thanks for that. Next one: biggest telco myth that needs to die. Nathan: Oh God. I'm pausing 'cause there are so many. The biggest telco [00:50:00] myth — the one that comes to mind is: we can't actually change. The point is, they try lots of things, but they keep falling back to the way they've always done it. So why do you have so many billing systems? Why do you have multiple CRMs? Why do you have multiple apps and web front ends that people are going to? 'Cause we can't push through with making the core of the change — we're not comfortable with, I might have to make this a little bit worse before it gets better. CRA: Huh.
I'm actually disappointed in myself right now, 'cause I didn't think of that. I actually have a list here of what I thought you would say, and that didn't come up at all — but it's very good. So I'm gonna give myself an F for this portion of the quick-fire round. But now let's go on: in a few words, where are telcos today, and where do they need to be tomorrow? Nathan: Telcos are at a crossroads. I think they have a huge opportunity, but they're at a crossroads because we fudged our way through transformations. We got the tick in the box [00:51:00] of being, or doing, digital — less so being digital, but definitely doing digital. And now we're in a very difficult place to say, okay, now I need to go and drive AI at scale. It's an expectation, I think, that the market has of telcos, 'cause they're the closest to these other tech firms that we look at and go, oh wow, you know, all the technology that happens within telcos. So where they're at is a crossroads. The choice I think they need to make: are they going to adopt true transformational change? Are they brave enough to go back to the board and say, hey, we're gonna go through another iteration of change, but this time it's not so much about introducing new technology, it's about fundamentally changing the way we work, how we collaborate, and how we serve our customers? I think that's the biggest challenge that they're gonna be facing into. CRA: Well, hopefully they can, because let's face it, there's a lot of room for improvement. So the last question is actually a series of questions, but it's something different — it's about me giving back this time. You see, I appreciate that you've spent your time coming here today and, you know, humoring me with all [00:52:00] my questions about AI and where everything is going. But I think I can give back. I think I can help you with some life-changing decisions. So, you're from Australia. Nathan: I am.
CRA: and you live in the Netherlands, but at some point you're gonna have to make a decision about, well, do I stay in the Netherlands or do I go back to Australia? I'm here to help you with that. So what I'm gonna do is give you some options. The first option is gonna be the Australian example, the second one's gonna be the Netherlands, and you get to pick which one you prefer, and hopefully that will help you with this life-changing decision. Nathan: Okay. CRA: So consider this my public service for today. Nathan: Okay. CRA: First one's easy: surfboard or bicycle. Nathan: Bicycle. CRA: Aussie Rules football or hockey. Nathan: Aussie Rules. CRA: Good man. G'day mate or hoi hoi. Nathan: G'day. CRA: This one's pretty easy: barbecue on Christmas Day or four months with no sunshine. Nathan: Could that be a choice? Uh, barbecue on Christmas Day. CRA: Yeah, that was pretty much a given, but I just had to bring it up, because it is [00:53:00] February and, let's face it, it's dark and grey here, and I miss sunshine. Nathan: By the way, in the Netherlands I still do barbecues on Christmas Day. Yeah. CRA: So now we move on to the important part, because we're gonna talk about food and drink. So meat pie or bitterballen. Nathan: Bitterballen. CRA: Prawns on the barbie or herring with onions. Nathan: Prawns. CRA: This will be an interesting one: Vegemite or hagelslag. Nathan: Hagelslag. CRA: Ooh. Okay. That's one I didn't predict. Next one: Tim Tams or... Nathan: Oh, that's awful. I love both. Tim Tams. CRA: Gotta move into the drinking section. XXXX or Heineken. Nathan: Heineken. CRA: Shiraz or jenever. Nathan: Shiraz. CRA: Fast food: Hungry Jack's or FEBO. Nathan: Whoa. FEBO. CRA: The last one's the most important, because it's about health and wellbeing. And more specifically, it's about the risk of an untimely death. So in Australia, you can be eaten by a shark, attacked by dingoes, stung by a box jellyfish or a blue-ringed octopus.
Bitten by a [00:54:00] redback spider or an eastern brown snake, which by the way is known for being grumpy, and I don't get why a poisonous snake has to be grumpy. You could also drown by being caught in a rip. You could hit a kangaroo while driving, get pecked to death by magpies in spring, which is always fun, or get attacked by a drop bear. Or are you more afraid of death in the Netherlands from being run over by a cyclist while just crossing the street? Nathan: Uh, given my size, I'll take the cyclist. CRA: Good man. I agree with you on that one. Lemme just tally this up. I'm an idiot. I gave you an even number. You ended up six and six. I should have made this odd. I thought it was gonna be heavily weighted towards Australia, and I got that one wrong. So I am very sorry. I do appreciate you coming on today, but I'm not gonna be able to help you with that life-changing decision. Nathan: We'll figure that out later.









