TechBurst Talks Ep 56 podcast with Bernard Leong breaking down AI buzzwords and making sense of jargon.
Subscribe to the TechBurst Talks Podcast on YouTube
Watch and Listen to TechBurst Talks on YouTube
Watch and Listen to TechBurst Talks on Spotify
Listen to TechBurst Talks on Apple Podcasts



Founder, Dorje.AI
Host, Analyse Asia Podcast
Location: Singapore

Bernard Leong joins TechBurst Talks to strip the bullshit out of AI buzzwords.

In this episode of TechBurst Talks, we cut through the AI hype and decode the endless stream of buzzwords—AI, ML, DL, LLM, SLM, RAG, ChatGPT, Custom GPT, NLP. What do they actually mean, and why should you care?
 
Bernard Leong, one of Asia’s leading AI experts, joins Charles Reed Anderson to break down these concepts in plain language. Bernard brings the theory, Charles brings the practical, and together they make sense of AI—without the bullshit.
 
From real-world applications to ethical questions, productivity gains to future trends, this episode brings clarity to the chaos of AI jargon. Whether you’re a business leader, a techie, or just trying to keep up, this conversation will give you a solid foundation in today’s AI landscape.

THE NO BULLSHIT GUIDE TO AI - BERNARD LEONG


WATCH ON YOUTUBE

LISTEN ON SPOTIFY

60-SECOND INSIGHTS

BERNARD LEONG:

ANALYSE ASIA:

DORJE AI:

LinkedIn Profile of Bernard Leong
LinkedIn Profile of Dorje AI
Official Website of Dorje AI
LinkedIn Profile of Analyse Asia
Official Website of Analyse Asia

FULL TRANSCRIPT

CRA: [00:00:00] Bernard Leong, welcome to the TechBurst Talks podcast.
Bernard: Thanks for having me on the show, Charles.
CRA: I wanted to do a podcast to demystify artificial intelligence, because there are a lot of buzzwords out there, and people talk about them, but not many people understand them. But I ran into a problem, because I'm not a technical expert. You are. So how about this: I'm going to have you explain your background, and then the audience will understand why I chose you to help me demystify it.
Bernard: So first I'm going to stay humble and tell you I'm not an expert in AI, but I've worked in the area for many years. I have my PhD as a theoretical physicist from Cambridge University. After that I worked as a postdoctoral research scientist at the Wellcome Trust Sanger Institute, which is the home of the Human Genome Project in the UK. At that point in time I was working on early machine learning algorithms, like supervised and unsupervised learning, specifically for searching what today we call mRNA targets in stem cell regulation and exonic splicing. Forget all the [00:01:00] terms. What exonic splicing does is that it actually causes a few types of rare diseases, such as cystic fibrosis, and there are actually treatments now for these, because we found the targets for how we can stop them. With that open source work we did, we thought mRNA vaccines would come probably 40, 50 years later, but they came 20 years earlier thanks to COVID. It became the basis for companies like Moderna to develop their mRNA target search using machine learning, and they took it to production-level tech. And this is all pre-deep-learning era, which we will come to talk a little bit about. Subsequently, I went to the corporate sector. I was the CDO for SingPost, and did a lot of projects, including drone flying that uses a little bit of machine learning, and running the retail business.
And then when I was in Airbus, we used a combination of satellite imaging, drone imaging, and machine learning to help insurance companies do disaster recovery insurance. After that, I joined Amazon Web Services as head of AI/ML, covering the entire Southeast Asia. I took that business [00:02:00] from six digits to eight digits, and dealt with almost every kind of company, be it startup unicorns, multinationals, or family-owned conglomerates, on all kinds of AI/ML use cases. And then, after that, I joined Warhub as a CIO. Essentially, now I've finally been able to get back to my startup roots, and I started Dorje AI as the founder of an enterprise AI startup.
CRA: And that means you're not an expert, so I'd be scared to find out what the qualifications to become an expert would be.
Bernard: Because AI has changed over the last 60 to 80 years, depending on when you want to pick its origin date. I'll peg it back to 1956 first, which is when the first conference coined the term AI, right? I think there has been a lot of development in terms of machine learning algorithms, and I think the transformer that kicked off ChatGPT is not the be-all and end-all of today. There are still a lot of things that transformers can't do. Like, for example, we're still trying to work out whether they can do reasoning. The field is moving so fast that [00:03:00] whether you are in industry or in academia, you have to catch up. Unlike a lot of other fields of research, machine learning is the only one where industry is leading. Google came up with the transformer paper, not some university like Stanford or Cambridge. This AI innovation kicked off by being very practical and use-case driven before the actual theoretical meaning of what these models are came about.
CRA: Let's go all the way back to the beginning. Remember now, we're going to be talking about this for the generalist as well.
What is artificial intelligence, and why does it matter?
Bernard: Okay, artificial intelligence is the simulation of human intelligence processes by computer systems, including learning, reasoning, and self-correction. It's as simple as that. For example, we make decisions. Looking at something to see if this is a cat or not a cat. When we drive, we see traffic lights, and we know whether we [00:04:00] should drive forward, stop, etc. Those are all processes that the machine can replicate to produce a predictive outcome.
CRA: Now, before we had artificial intelligence, everybody was talking about big data. What's the difference between big data and artificial intelligence?
Bernard: Well, big data is actually the aggregation of data. Typically, what we do is pull all this data together: imagine the sensors on a construction site, imagine traffic data, imagine all the physical data around you, and digitise it. It usually comes in a raw form. Big data actually includes the processing and transformation of the data so that it's useful for a machine learning algorithm to learn from and produce insights. I think what people don't appreciate, even today, if you look at the digital maturity of companies to implement AI: based on some numbers I got from different groups of people, only about 10 percent of [00:05:00] companies are truly ready for that. That's because a lot of their data is in silos. If they're truly big-data ready, all that siloed data should be able to find its way to interact, to find context, analytics, or even transaction insights all at one go. And I think we're still at a very early stage of all companies doing that.
CRA: So when we talk about artificial intelligence, it's really about making machines capable of learning, thinking, and making decisions, basically, right?
Okay, so we've talked about artificial intelligence, but you also mentioned machine learning, and then it also goes into deep learning. So what are the differences between AI, ML, and DL (deep learning)?
Bernard: I think the best way to explain it is this: if you look at the entire universe, artificial intelligence is the big circle, right? Machine learning is a subset of that circle. What machine learning does is use algorithms to learn from the data and come up with insights that [00:06:00] give you predictive outcomes. For example, we already used the example of cat or not cat, right? That's a very basic use case, and then you can think of even more complex reasoning systems that can identify a math problem and solve it as a math problem. Okay, so that's machine learning. Deep learning is a subset of machine learning that uses the concept of what are called neural networks. Think of how our brains are wired. In biology, our brains have something called neurons, and the connectivity of the neurons allows certain emotions to trigger certain parts of our brain, whether we're doing reasoning, logic, or creative work. So the neural network is the path that allows us to see what kinds of things we classify in the background. And if I want to be a little bit more fun, because we're going to get into generative AI at some point: generative AI is a subset of deep learning.
CRA: Okay. So we'll come back to that in a bit, but let's give some examples here. When we talk about machine learning, some of it is the basic things we use every day: spam filtering [00:07:00] in emails, recommendation engines, fraud detection on our credit cards. But in businesses, it's about things like predictive maintenance or predictive analytics on pieces of machinery and kit. Correct?
Bernard: Yep. You also have autonomous vehicles.
CRA: Is that machine learning, or is that deep learning?
Bernard: It's actually both. It's machine learning, but using deep learning algorithms, because the problem is still the same. Think of the traffic light problem. How does a car recognise traffic lights and know whether to go or stop? The machine first captures the image of the traffic light at that instance, right? It's a very well defined problem: it's either green, red, or yellow. So that is a form of machine learning, but deep learning makes it faster, because it learns through millions of images. It can tell you, within I think less than a few milliseconds of latency, that you should be driving forward because it's green, and you should only stop when it's yellow or red, because that's how it processes it. And deep learning is the [00:08:00] reason why this is actually going so fast.
CRA: Okay. And now some other examples of deep learning. What are the ones you can give me? Is that like speech and image recognition?
Bernard: Yep. They can use speech and image recognition, except that speech recognition is actually based on sequences. Let's say you and I record this podcast, right? Based on how we use the words, we predict what's the next word that's going to come out of our mouths. There's a term called sequence modeling, which uses deep learning. Except that prior to ChatGPT, these sequence learning models were actually not very effective, and that is why, when you applied them to this kind of transcription use case, they didn't predict very well. But with deep learning and the large language models that came out, there's been a great leap forward, and now things like transcription and image recognition have been taken to the next level.
CRA: Now, another term that gets thrown about is natural language processing. What is that?
Bernard: Natural language processing is basically part of these sequence models powered by deep learning. It's as simple as that. Chatbots, if you think about it, that's another buzzword, [00:09:00] right? What do they use to do customer service? That's actually these sequence learning models from deep learning. But the problem is that it's very linear and very one-dimensional.
CRA: So I was looking for an example of this, and they said that NLP is kind of like a translation engine, or a translator for computers, allowing them to understand what we're saying and what we're doing.
Bernard: Correct.
CRA: Okay. Very good. So we've got that one down. Now we get onto the fun part, because it's what everybody's talking about: GenAI. So what is GenAI, and why is it a game changer?
Bernard: Okay. So maybe I will first start off by saying generative AI is a field of machine learning that allows you to generate content in the form of text, language, audio, and video, based on what is called a large language model. The large language model actually came from Google, who published a very important paper in 2017 called "Attention Is All You Need". Just now, when we talked about sequence modeling for transcription, it's all linear, right? So [00:10:00] it's always predicting based on your past experience. What the real game changer for the large language model is, is that it takes all the sequences, that means all relatable uses of the same word in all different contexts, puts them all together, maps out the relationships of the words to each other, and creates a model. That's because of the amount of data you use: the large language model is not designed for very small data sets. It's designed for when you take the entire internet and put it together, and then it exhibits something called emergent behavior, which is what you're seeing in ChatGPT, and that gives you very human-like responses.
That is the real game changer: the large language model is able to figure out all the different relationships, and when you ask a question, it will look into the large language model for the relationships that give you the correct context for what you're actually asking. And that's why it feels very human-like to a lot of people. Maybe I should [00:11:00] just add: the amount of data to train on is of the scale of one petabyte. That's the whole internet before, I think, 2021. So there's a pre-ChatGPT world and a post-ChatGPT world. This is only the visible internet data. Some people have estimated that the amount of enterprise data that has not surfaced yet, which is confidential, whether it's in Bloomberg or all the different places, is about 15 to 20 petabytes. So in today's context, because we don't have visibility into that confidential data, we actually have not scratched the real surface of what these models can do.
CRA: So when we look at GenAI, I've seen it referred to as like a digital artist. Basically, it can create words, text, and images by learning from existing words, text, and images. Is that one way of looking at it?
Bernard: Yeah, I like to call it the infinite intern. The infinite intern, right? It's the same thing. We hire interns from school, we give them a [00:12:00] task to do, they come out with a first draft, it's bad, and then we're like, okay, maybe you can improve on these few pieces, and we give them information. It's the same idea when you start asking the large language model questions: the first answer is really bad, right? But when you start to give it more and more information, it starts to word its answers more and more towards what you want. Of course, when it was first launched, it was without memory.
Today, ChatGPT has something called memory. It starts storing whatever you've been asking, and it creates a pretty realistic picture of what you're thinking and how you're thinking about a subject.
CRA: I'm finding ChatGPT fascinating now, because I think you convinced me a couple of months ago to get the premium plan so I get the latest version. I now use it all day long for everything, everything I'm doing for work and for my personal life. We've been buying a home in Amsterdam and having to go through all these contracts in Dutch using the translation in it. It's been absolutely brilliant. We're trying to understand Dutch [00:13:00] laws and be able to research them. And granted, we should also say that these things aren't 100 percent accurate, but it gets you down the path and gives you a baseline to work with. I'm literally shocked at how I've integrated this infinite intern into my life now.
Bernard: Now, the large language model responds to you when you ask a question via a query interface, right? That process is what we now call prompting. And I want to elaborate a bit. OpenAI launched ChatGPT, based on GPT-3.5, I think in 2022. What ChatGPT really did is shift the public perception: it used to be that the people who used AI had a PhD or master's in computer science, but it has now shifted to a normal user, a daily user like you and I. Just by virtue of using our language in a prompt, we're able to elicit the same outcome. You give ChatGPT a picture and ask, can you code up this user interface for me in Python? And it will generate the interface, and you cut and paste it over, and it looks exactly like what you [00:14:00] wanted. The real shift in AI is actually taking it from a very expert user down to a layman user, in the form of this infinite intern.
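The next-word prediction that Bernard describes for sequence models can be sketched as a toy bigram counter in Python. This is only an illustrative sketch (the tiny corpus and function names are made up for this example); real large language models learn relationships across whole contexts with attention, not just from the single previous word.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the text a sequence model learns from.
corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count, for each word, how often each other word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word seen most often after `word` in the corpus."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" more often than any other word
print(predict_next("sat"))  # "sat" is always followed by "on"
```

This linear, one-step view is exactly why pre-transformer sequence models struggled: the prediction depends only on what came immediately before, with no wider context.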
CRA: So what I think is interesting about AI in general is there's the myth out there that AI is just about replacing people's jobs. But with GenAI, it's much more about human-machine collaboration.
Bernard: That's right.
CRA: And it's not really about taking away jobs, it's enhancing our jobs. It's allowing us to produce better content, be more creative, produce better documents, and communicate better as well. And it's the one myth that drives me crazy, because it's not just about replacing jobs. I mean, this is making me so much more efficient in my everyday life.
Bernard: I have a data point for you. There is this website called therestofworld.org. They write pieces about how technology displaces human jobs. So when ChatGPT came out, the gig jobs on Fiverr, the [00:15:00] designer jobs to edit one picture, etc., suddenly went from a lot down to zero. What happened three months later was that the people who lost these jobs, from the rest of the world, we're talking about people from Bangladesh, Pakistan, Indonesia, the Philippines, suddenly came back up, but they don't call themselves designers anymore. They call themselves prompt engineers who help you design art. And this is one thing I want to explain clearly: it doesn't really displace the job. It either augments, which is what you and I do, we use it to augment what we do. For example, I like to create my legal data helper with all the contracts I have as templates, and then it generates and helps me recontract, like for you maybe buying a house, right? And then think of all these people in emerging markets who suddenly use it as a tool to help them earn a living. So it augments, and it also creates new jobs in the process.
CRA: So I have some stats for you as well. There was a survey the month that ChatGPT launched, and that was November 2022. Now, this was in the US, but 27 percent of people [00:16:00] used ChatGPT in the first month.
You fast forward three months from then, and 43 percent of workers they surveyed had been using ChatGPT to assist them with their jobs. Fast forward another year, and it was over 60 percent. So it's really us, the consumers, that are driving this new adoption and pushing AI into the mainstream.
Bernard: Which is what ChatGPT did, right? Taking something that used to be in the hands of someone with a master's or PhD degree down to the layman user. This is what I call the AI breakout moment. It's the killer app.
CRA: We've talked now a bit about ChatGPT, but there are a lot of other GenAI applications. Can you give us a summary of some of those and what you use them for?
Bernard: Yeah, that's a good question. There are a couple of tools now that I think a lot of people would use. I'll just give some popular ones, to give you a different flavor. There is Perplexity AI, which you use a lot for research, searching for information, but it actually runs on ChatGPT models.
CRA: I use Perplexity when I need sources for my slides. So if I want to get stats on a certain topic, I don't want [00:17:00] random generic stats. I want to be able to source it. I want to make sure it's from a credible source. That's why I use Perplexity.
Bernard: But you do know that you can also do that in Microsoft Copilot as well. There are a lot of enterprises out there who have their Microsoft Office 365 subscriptions, and then they run Copilot as well. So those two are, I think, used the same way. Then there is something called Gamma, which allows you to do PowerPoint presentations. You write out the outline of your presentation, and it generates all the images behind it. I tried it recently for a talk I was giving; it generated some very interesting images, and I just downloaded those images, put them in the background, and got the colors correct. Then there are Udio and Suno, which are used to generate audio music. They're in some trouble currently with the music industry.
Then there are GitHub Copilot and Amazon Q, which do code checking, as well as helping you check your data sources within the cloud environments themselves. But GitHub Copilot does a lot for coding purposes. Just to give you a data point: when I started a development team in Vietnam, I had 12 [00:18:00] engineers with one head of engineering who is extremely experienced. With these 12 engineers, we did a test in the first month. We gave half of them Copilot, and half no Copilot. We discovered that those with Copilot had a 50 percent improvement in productivity. We were able to take something like a 15-month project down to nine months. So this is one great use. Then there is HeyGen, generating videos with AI avatars. Take us now: maybe we don't need to ask or answer questions ourselves; we just use our AI avatars to ask and answer the questions and generate the videos, right? One of HeyGen's real capabilities is translation to other languages. So today, Charles, you and I are talking in English, but we can use HeyGen to translate it into Chinese and send it to China for a Chinese audience, Bahasa for an Indonesian audience, or Japanese for a Japanese audience. Then there are the apps that are already here, which we know, like Notion, Canva, Descript; they all have their AI tools. Notion uses AI to do the same things: summaries, text generation, etc. In Canva, you take an image and [00:19:00] you want to remove the background; just for everyone's sake, removing a background is actually an AI feature. And then Descript lets you edit audio like a Word document. You can even change one word you said wrongly; you just do an overdub around it and change the word. It used to take 30 minutes to train your AI voice; right now it just takes 10 or 15 seconds to do it. And then they have just launched something called AI Underlord. Okay.
Actually, the product manager of this group said: everybody thinks about the AI overlords, but we don't think the technology is there, so they call it Underlord. And one of the things you can do is, if I look at the camera now and keep looking downwards, you can actually run an eye correction on my video, and my eyes will all be towards the camera.
CRA: Let's go back to Descript before we move on. We both use it for our podcasts.
Bernard: Yeah.
CRA: And people have commented about how professional my podcast sounds, and I always think, well, I'd love to take credit for this, but literally, it was when Descript, a couple of years ago, created a button called Studio [00:20:00] Sound.
Bernard: That's right.
CRA: And it removes all the background noise, and it's absolutely brilliant. But for editing the content, getting rid of all the mistakes, or when I say words like, oh shit, I can edit that out. Maybe I'll leave that one in anyway. But it allows you to edit that down. It's absolutely brilliant, and it makes it so easy. Now I've been doing my videos in there to create reels for the content. I think it's one of the best apps I've ever seen.
Bernard: Before the Underlord feature, it was all using sequence modeling and deep learning. Now it's using generative AI. And then, of course, I have to add two more. One is called Harvey. That's actually for legal AI, but only in the US jurisdiction. And then LumaLabs, best known for Dream Machine, and also Runway, for video generation with a prompt. So you can do a lot of these new videos; I think Hollywood would like to be able to run some intro videos. I think the best one is still Sora, where you see the Japanese woman walking down a rainy Tokyo street.
CRA: Yeah, so those are fascinating. But how else do you use these apps in your everyday life? Can you give me some [00:21:00] examples of when you might go,
I've got to do this task, so therefore I'm going to use this solution?
Bernard: I think, in everyday life, AI is already embedded. Just think of when you watch Netflix: the recommendation engine starts working from the moment you choose your first show. Then you already have email spam filtering, which filters away all the junk emails as they come in. ChatGPT opens up very different applications. So maybe the best way to do this is to talk about video podcasting, right? I usually tell people there are five steps to doing it. First is the sourcing of the speaker. So what do I do to source a speaker? I will research that speaker: go to YouTube to check whether this speaker was in a YouTube video, and use a very small plugin called YouTube Summary. I'll be able to pull all the different interviews into ChatGPT and research this person: what kinds of questions he likes to answer, what are the rarest questions he didn't answer. Then I need to write an email to him. There is a second plugin, also for ChatGPT, and you can even use Claude as well. It's called GM Plus. If you have Gmail, [00:22:00] you just run it, you type in a prompt for the kind of email you want, and it generates the entire email for you. And then you, of course, have to make it sound more like you; you can give it the tonality and so on. So you can do the second piece. Then, before you get a speaker in for recording, you need to generate questions, right? So you can run ChatGPT to brainstorm what kind of questions to ask. When you do the editing, that's where Descript comes in: remove all the filler words, like the ums and the ahs, and do all the transcription. Let's say I asked a question and I need to change a small word. You can just overdub it and replace the word.
CRA: I want to interject real quick and go back to filler words.
That was the one that shocked me, because when I first used it and said, remove filler words, it pulled up, over a 40-minute podcast, something like 600 filler words. And in a way it's kind of scary personally, because you realize how you talk, and I realized how much I use um, sort of, kind of, like, these phrases that are not really necessary.
Bernard: For Analyse Asia, we cater to an American audience. The removal of filler words is one of the [00:23:00] most important things for Asian speakers. After that is distribution. What I can do is the same YouTube Summary trick I did for other videos, I do for my own video. Within a minute, I just ask ChatGPT: can you generate for me all the different configurations of social media posts, for LinkedIn, X (formerly known as Twitter), Facebook. Then I just run one small process and send them into all the social media schedulers. Once that is done, it's complete. There's so much more content coming out in the world right now because of this. And then, of course, you have Canva, which I use to do the image editing: putting in the speaker, putting the company name first, using some suggestions for a good title. And then DALL·E to generate images, like for newsletters. And then usually what I'll do is credit the AI that generated the picture and say what prompt I used to generate the image.
CRA: You're using it quite a bit. What about businesses? How do you see enterprise customers or the public sector leveraging AI or GenAI applications in their [00:24:00] workplace?
Bernard: Let me break businesses into two groups first. Let's go with the startups and small businesses, which actually see the best use of generative AI. And that's where we start talking about something called custom GPTs. A custom GPT is a feature in ChatGPT where you build your own AI assistant.
You need to give it a set of instructions. You need to give it knowledge, which is the files: maybe an FAQ, a policy paper, a research database, maybe a legal template or an HR template. So it's trained to your needs. What I've seen startups do with custom GPTs is this: there are a lot of layoffs going on, not just in the US but in Southeast Asia, where a lot of HR people have now started generating templates using GPT. I've seen a very good use of it by this lady from one of the top crypto companies. She's actually using it for things like recruitment, NDAs, everything related to HR. And that's actually replacing the two to three headcounts she lost in that process. So that's one. Then there are also people [00:25:00] checking legal documents. I had an assistant called my legal little helper, just checking things like which jurisdiction applies, or whether an NDA has any strange clauses inside. And if you can be very specific about the jurisdiction, it's actually a very good checker. Just a checker, right? The last, I think, is for people who do consulting work. It's fantastic for generating a statement of work. You have to think through all the legal things, right? Now you just ask it for a template, and you say, for this jurisdiction, how do you write the template? And then you could even write out the workflow and ask it, can you help me look at this SOW? Is there anything missing? What are the kinds of things to do about scope creep? Amazing, right? Custom GPTs are good. You can create a lot of AI assistants, and they can also be used to integrate, say using something called Zapier, where you can integrate into your Google Docs or Microsoft applications. It could even go into Slack, triggering a notification to you saying, hey, this is done, [00:26:00] this is not done. So you see how small businesses use it.
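The custom-GPT pattern Bernard describes, a set of instructions plus uploaded knowledge, can be sketched in miniature. This is a hedged illustration only: the instructions, FAQ entries, and function names below are all invented for the example, and a real custom GPT matches questions with a language model rather than simple substring lookup.

```python
# A toy "assistant = instructions + knowledge" sketch of the custom-GPT idea.

INSTRUCTIONS = "Answer HR questions from the knowledge base only."

# Stands in for the uploaded files: FAQ, policy papers, templates.
KNOWLEDGE = {
    "leave policy": "Employees get 20 days of annual leave.",
    "nda template": "Use the standard mutual NDA for the stated jurisdiction.",
}

def ask(question):
    """Answer from the knowledge base; refuse anything outside it."""
    q = question.lower()
    for topic, answer in KNOWLEDGE.items():
        if topic in q:
            return answer
    # Per the instructions: don't improvise outside the given knowledge.
    return "Sorry, that is outside my knowledge base."

print(ask("What is the leave policy?"))
print(ask("Who will win the World Cup?"))
```

The design point is the same one Bernard makes later about guardrails: the assistant is only as trustworthy as the knowledge you give it, plus an explicit rule for what to do when the question falls outside that knowledge.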
Now, in the enterprise space, that's where the second buzzword I think we're going to talk about, retrieval augmented generation, or RAG, comes in.
CRA: Before we go into RAG: when I first heard somebody at an event talk about the next thing in AI being RAG, I thought, are we really going to use this as the acronym to describe this? But then he went on and said what it is, which is retrieval augmented generation. Which is even worse. Do you like Perplexity? I love Perplexity. Perplexity is a RAG. Oh, don't get me wrong, I love RAG. I'm all in on RAG. It's just a horrible word to be using as an acronym.
Bernard: I also find it very difficult when people tell me RAG. I just tell them that's what Perplexity does. That's the easiest way to explain it to them. But I think the simpler explanation is to imagine you have this really smart friend who knows a lot of things, whose knowledge can be updated or is sometimes inaccurate. Basically, RAG gives this friend the ability to look up information in [00:27:00] an updated library before answering your questions. Now, for enterprises, this is why RAG is important. How it works is very simple, right? You ask a question, the AI searches through the updated information, like a Google search, then combines the relevant information, then uses generative AI and pushes it back to you in a form that you can understand better.
CRA: This is how I'm going to explain it at an event next week, in layman's terms for people who aren't experts. We have LLMs already. What this is really doing is creating a content management system for your documents. So yes, it knows how to use GenAI to talk to you, which means anybody can interact with it, and it will speak back to you in terms you understand. But it goes and pulls out those specific pieces of information.
Bernard: So what's the benefit?
You can get very accurate, current answers, it's cost effective, and then the last benefit is building trust. CRA: This is the risk with some of the GenAI solutions when they go out. And I'll run you through what happened with Air Canada for the listeners. So Air Canada has a chatbot, like most airlines. However, one person was chatting with the chatbot, [00:28:00] and his grandmother had passed away. And he asked it, what is Air Canada's reimbursement policy for a family bereavement? And Air Canada's chatbot said that it's reimbursable as long as you submit the expense within 60 days. So he goes to the funeral, submits the expense, and they decline it. They said that the chatbot was wrong, so he took them to Small Claims Court. I think Small Claims Court in Canada is normally for under $1,000, so it wasn't that much money. Air Canada decided to send their lawyers. And the lawyers argued that Air Canada should not be held responsible for the information that came out of the chatbot, and the person should not have trusted the chatbot; regardless, the chatbot is an independent entity that thinks for itself, so therefore we shouldn't have to reimburse you. Talk about a way to ruin your brand overnight. Bernard: Everybody hears the word hallucination. That was what the chatbot was doing, it was hallucinating. It created its own content. The problem with all generative AI systems is that because they're creating so many relationships between different words, they have the [00:29:00] ability to generate new things that weren't in the source. Generative AI, right? The word is very clear. Generative. And hence, this is a clear illustration of what we call hallucination. Now, what Air Canada should have done is try to constrain as much hallucination as possible.
One trick, and this is a very common trick that I teach executives in enterprise AI classes at the National University of Singapore, is that the first thing you should think about is this: if a query doesn't appear within the context of all the knowledge that you have, say in your frequently asked questions, FAQs, or your company policies, set an instruction: do not answer the question. And I think it was basically going past these guardrails that made this actually happen. CRA: And this can happen. But I want to come back to RAG for a minute, because I remember hearing about it, and I heard a technical explanation of it, and I was lost. But like I mentioned earlier, I [00:30:00] don't learn the same way that you do. So I need to have things visually explained, to see a practical application. I was at an AVEVA event and they were explaining how they're using RAG to fix a wind turbine. Full disclosure, I know nothing about wind turbines. I know there's wind. I know there's blades. They go spinny, spinny, magic happens and you get electricity. That's all I know. But what he explained was, in the old days, before you had Industry 4.0, that machine would break down. So the wind turbine would break down, and someone would have to go and figure out, why did it break down? Find out where the fault could be, then go get the manuals, look for the troubleshooting section, and that could be a 300-page PDF. Then find the parts, create the work order, schedule the fix. That would take a couple of weeks. Then we moved forward and had Industry 4.0, so you had predictive analytics available on the wind turbine. You knew something was wrong because you could see it's overheating. So then you go through that same manual process. You pull up the PDF, you do the troubleshooting, etc. That got it down to a couple of days.
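The guardrail Bernard teaches, refuse when the query has no support in your knowledge, can be sketched as a pre-check run before generation. This is a hedged toy: the two-word overlap threshold and the policy snippet are invented, and a production system would use semantic similarity rather than word matching.

```python
# Sketch of a "do not answer outside the knowledge" guardrail.
REFUSAL = "I can only answer questions covered by our published policies."

def guarded_answer(query, knowledge, answer_fn):
    """Call answer_fn only when some snippet shares >= 2 words with the query."""
    q_terms = set(query.lower().split())
    supported = any(len(q_terms & set(doc.lower().split())) >= 2
                    for doc in knowledge)
    return answer_fn(query) if supported else REFUSAL

kb = ["our bereavement refund policy: fares are refunded within 60 days"]
on_topic = guarded_answer("what is the bereavement refund policy", kb,
                          lambda q: "See the bereavement policy.")
off_topic = guarded_answer("write me a poem about airlines", kb,
                           lambda q: "A poem...")
```

Under this check, a question with no grounding in the knowledge base gets the refusal string instead of an improvised, possibly hallucinated answer.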
What they're doing now with retrieval augmented generation is, you can chat to your computer just like you would with [00:31:00] ChatGPT and say, something seems to be wrong with the wind turbine. Can you tell me what it is? And it will come back and tell you, this part is overheating. Can you tell me why that might be? And it gives you a few examples. Can you pull up the manual and give me the troubleshooting section? So instead of going to find that section, which you could do on your own, it just pulls it out and gives it to you. Then you ask, to fix this, what parts are needed? And it'll give you a list of the parts. Then you can say, create a work order, it creates it, submit. So you've taken something that would have taken weeks, knocked it down to days, and now it's down to hours. And the amount of money that saves in downtime, and also in resources required to manage the process, is staggering. And that's why, even though this topic has the worst acronym in the world in RAG, I haven't been as excited about any solution I've seen in recent years. Bernard: The point is that a lot of enterprises have a lot of tasks that are extremely manual, repetitive and time consuming. Let me give you another time-consuming process. Data scientists always complain when they join companies; they hate it. And why they really hate it is, just now you asked me about big data, [00:32:00] right? It's the data cleaning. Someone sent me a crypto trading data set, so maybe I should try it on ChatGPT. What I did was, at first I sliced the data set down to a small size, to see how good it is. So I was telling my friend, it's gonna take me five days to look at this data set, because that's usually the norm in the industry, no matter how good I am. It took 26 minutes. First, ChatGPT told me this data set is actually a training data set. These are all the fields. This is what it is.
So the next question I ask: can you tell me all the missing data inside this? Then it tells me all the missing data. We don't have this. What should I do with it? How do I treat it? So there are different ways of treating it. One method is to mark it as NaN, not a number. Or you remove the rows with missing values and export the result into another file. And that's exactly what it did. And then after that, when I finished the whole process, I was like, eh, I've done it in only 26 minutes. Then I messaged my friend: I got the dataset cleaned. [00:33:00] And he got a shock. And he's like, what? You told me it was 5 days. I was like, well, it turns out that with ChatGPT, I can get this done in 26 minutes. The speed at which ChatGPT processes the data also makes it really incredible. CRA: So we've talked a lot about how excited we are by AI, GenAI, ChatGPT, Perplexity, Descript especially. But we have to go and look at the other side of this, because AI is not without its risks. Why don't we start talking about the ethical use of AI and the considerations we have around that. So first thing: what about bias in AI? Bernard: So I think bias in AI has been flagged up by a lot of incidents. Nowadays, even when I teach an ML operations course, which is about how to deploy machine learning models, at the National University of Singapore, one of the things you need to look at is the big data piece: your training sets. A lot of cloud vendors have started putting out what's called responsible AI. They look at a few features: observability, transparency, privacy, and security. Okay, let's [00:34:00] go with observability. Is this actually observable? Like, think of the situation with CVs, right? I think Amazon made this blunder where they actually scanned for certain types of things, and it caused them to lose out on a lot of candidates, right? Then the transparency of the model. How is this model trained? It's a black box.
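The treatment steps Bernard lists, report the missing values, mark them as NaN or drop those rows, then export, map onto a few lines of pandas. A hedged sketch: the crypto-trading column names and values below are invented for illustration.

```python
# Sketch of the missing-data treatment described above, using pandas.
import numpy as np
import pandas as pd

raw = pd.DataFrame({
    "price":  [101.5, np.nan, 99.2, 100.1],    # NaN marks a missing value
    "volume": [2000.0, 1500.0, np.nan, 1800.0],
})

missing_per_column = raw.isna().sum()   # step 1: report what is missing
cleaned = raw.dropna()                  # step 2: drop incomplete rows
exported = cleaned.to_csv(index=False)  # step 3: export (here, to a string)
```

An alternative to `dropna()` is imputation, for example `raw.fillna(raw.mean())`, which is the "how do I treat it" decision ChatGPT walks you through.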
Do we know exactly how it works? For a very specific use case like credit scoring, there are ways to check for that. Amazon came up with a score specifically for credit scoring. It tends to see, okay, are there any problems when this model is trained on new data? Maybe there could be a shift. For example, there was a very big shift in forecasting between before COVID and after COVID, right? The data suddenly went through a lot of changes, so they have to do a lot of checks to see what's going on there. Now, large language models are even more interesting. Anthropic, which is one of the competitors to OpenAI: one of the things they have noted is that an LLM is like a black box. This infinite intern has this brain, right? What they did is they took some inspiration from human [00:35:00] behavior. There is this experiment, I think it was done in Oxford in the UK, where they put you in an MRI and test you with certain things: they tell you a joke, you're happy, and certain parts of your brain light up. They took the LLM, they started putting tags in the LLM, and then whenever you ask the LLM a question, certain parts of the neural network actually light up. Now, why is that interesting? What you really want to look for is what happens when you start asking very dangerous questions, like, how do you make a gun, or when you want to exhibit violent behavior. CRA: I like that you almost whispered that. We are in Singapore, so we don't want to say this too loud. You never know. Bernard: Yeah. When you exhibit violent behavior, it lights up certain parts of the large language model. What you're really looking for is all the edge cases. People are now even trying to go into the LLM and unplug it like a brain to see how the black box is working. So bias in AI is part of [00:36:00] that piece they're trying to solve. Not just in the old machine learning models, but the new ones.
Okay, so what about privacy concerns? Should we be worried? Okay. Well, there is good news for you: there is a new emerging field, which means another buzzword, called federated learning, that deals with privacy concerns. What is happening is that Google and Apple run these kinds of ML models on your mobile phones, right? I think specifically more Apple, because they're very tight about privacy. Now, the data is stored on the smartphones, right? And they don't want to infringe on your data. Federated learning: what it really does is take whatever data you have on your smartphone, mask it in a certain form, send it to a server, and train on it, everything just in numbers and bytes, never attributing where this data came from, to build a model that solves things like scheduling, an AI assistant to do certain things. But the data actually never left your phone. It's just represented as chunks of bytes. Now what federated learning does is mask off the data and [00:37:00] allow you to train the models without attributing where the data came from. Where federated learning really works well is in healthcare. One of the key problems is that hospitals are very reluctant to share medical data, right? And specifically even patient data, because of two things. One, they're worried that the other hospitals will come after their patients, on the customer side. And the second thing is this data has some form of compliance, called HIPAA, which you need to deal with. As a compromise, because we really need this data to train better models to deal with things like diseases, finding certain symptoms, maybe certain diseases are actually due to certain symptoms based on the data, federated learning is being applied. You mask off everything, you train it, and then what's learned is shared between hospitals. So there's a whole body of work being done in the UK and the US.
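The loop Bernard sketches, data stays on the device and only numeric model updates travel to the server, is in its simplest form federated averaging. The toy below is a hedged illustration: one scalar weight nudged toward each client's private data mean; real deployments on phones and in hospitals add secure aggregation and differential privacy on top.

```python
# Toy federated averaging: each client trains locally and shares only its
# updated weight; the server never sees the raw data.

def local_update(weights, data, lr=0.1):
    """One gradient step of a mean-squared-error fit toward the local data."""
    grad = sum(weights - x for x in data) / len(data)
    return weights - lr * grad

def federated_round(global_w, client_datasets):
    """Server averages the clients' weights, never touching their records."""
    updates = [local_update(global_w, d) for d in client_datasets]
    return sum(updates) / len(updates)

clients = [[1.0, 2.0], [3.0, 5.0]]   # private datasets stay "on-device"
w = federated_round(0.0, clients)    # only the averaged weight leaves
```

Each dataset could be one phone or one hospital: the server improves the shared model while the raw records never cross the wire.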
CRA: So the next myth I want to cover is that people think AI is just about taking away people's jobs. Is that true? Bernard: For the short term, within the next three to five years, AI will not replace [00:38:00] jobs, period. The rumors of our work being replaced by AI are wholly exaggerated. What I see now is AI actually augmenting humans and removing the repetitive, manual work. That's all it is. Now, when the reasoning capability of large language models becomes more superior, we'll start to see our work shift as well. There are a lot of people who have very bad job satisfaction. Think about that, right? Everybody who goes into the industry hates their jobs. It's because a lot of times we're made to do a lot of this repetitive work, and we're not allowed to do the more important work, the creative work, the analysis work. So what AI will do is remove that layer, which will affect people who only want to do very repeatable tasks, and shift work towards the more creative side. CRA: This is a great way to look at it, because I view AI as a tool right now. It's stuff that can help me do things more efficiently. And the example I'll give is from [00:39:00] when I was at university a long time ago, way before we had Excel or even Lotus 1-2-3. When I was studying finance and economics, we had these financial calculators. My teacher was concerned that if you used a calculator, you wouldn't actually learn the financial formulas like NPV or IRR. So he made us memorise these formulas for an exam. I passed the exam, and I never once used those formulas again. When I went into banking, if I had pulled out my pen and paper when my boss wanted me to run a calculation, instead of using a calculator or Excel, they'd have fired me. Similarly, if you're in the office right now and you go up to one of your interns and say, listen, I want you to go and research this topic for me, and they say, okay, sure, no problem.
They shut down their computer and start walking to the library. You'd fire them. You use search or you use ChatGPT. There are better, more efficient ways of doing these things right now. And we need to get around this whole idea that, oh, it's just replacing jobs. It's complementing our jobs and making us more efficient. Bernard: So, entrepreneurs have the worst jobs because they have seven jobs every day, right? So let me give [00:40:00] you a very good example of that: capitalization table calculation. That means calculating your shareholdings and such. When I have to do that for all my startups, it usually takes me about one to two days of work, because I need to get the formulas correct, draw it up in an Excel spreadsheet, and check the formulas and such. Recently I did this: I basically went to ChatGPT and wrote a very good prompt. I said, I want to do a capitalization table. This is the initial shareholding between the founders, and we have this amount of interest coming in with this investment. At this valuation, we want to compare giving a 20% discount and not giving a 20% discount. Can you calculate what the shareholding is, step by step? Now, when you say the words step by step, what it does is it tells you, okay, this is the formula we are using to do the calculation. This is what the discounted rate looks like. Then this is what happens. This is the shareholding before and after, in table form. CRA: It's impressive, the quickness with which we can get things done and how efficient we can become. Bernard: Correct. And I think people don't appreciate that some of these tasks are actually extremely manual, and you really want to get them out of the way. CRA: The thing is that [00:41:00] you and I understand the value of using AI to do tasks, but a lot of people don't. There was this recent survey that came out of China, and the survey said that 80% of Chinese students use AI to assist with their assignments. It was covered in the Straits Times and on CNA here. And of course people are coming back saying AI is cheating.
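The step-by-step cap-table arithmetic Bernard asks ChatGPT for can be written out directly. A hedged sketch: all figures below (share count, investment size, valuation) are invented, and the model is the simplest possible priced round with the discount applied to the pre-money valuation.

```python
# Sketch of a discounted priced round: investors buy new shares at a price
# derived from the discounted valuation, diluting the founders.

def post_money_split(founder_shares, investment, pre_money_valuation, discount):
    """Return founder/investor ownership percentages after the round."""
    effective_valuation = pre_money_valuation * (1 - discount)
    price_per_share = effective_valuation / founder_shares
    new_shares = investment / price_per_share
    total = founder_shares + new_shares
    return {"founders_pct": 100 * founder_shares / total,
            "investors_pct": 100 * new_shares / total}

split = post_money_split(founder_shares=1_000_000, investment=500_000,
                         pre_money_valuation=4_000_000, discount=0.20)
```

Running the same function with `discount=0.0` gives the no-discount comparison Bernard describes asking for, which is exactly the "step by step" table the prompt produces.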
And I'm like, I'm not worried about the 80%. I'm worried about the 20% who aren't using it. Because what does that mean for their future? Bernard: One interesting data point is that a lot of computer science faculties in the US and UK are now allowing their undergrads to use Microsoft's Copilot or Amazon's CodeWhisperer to check their coding. Their rationale is that it helps you check for bugs, which is time-wasting work. Your job now is to look at the issues, whether everything compiles, whether it has security issues, where you audit that code, right? So there is a lot being used to deal with this. I think the challenge we're going to face around this is in the education sector. [00:42:00] CRA: Because what we want to see is what you were just talking about, where universities are allowing the use of it. But that means that teachers have to learn how to teach in a new way. So one initiative that I'm going to be tracking is out of South Korea, where they're throwing in 70 million dollars to help leverage AI in schools. What they're doing is creating AI textbooks. But they're also investing in the network infrastructure to make sure the schools have the required technology infrastructure to make this work. And they're also creating, I think it's like 1,500 digital tutors. Those are tutors to help the teachers adapt to this new way of teaching. And I think that's brilliant, because otherwise it could cause a lot of problems. If we don't get the teachers to adapt and understand how to leverage these technologies, you're going to hold back those students. It's kind of like me: it didn't help me to spend three days memorizing financial formulas for one exam and never use them again the rest of my life. It was the teacher's responsibility to make sure we understood finance, not us memorizing formulas.
CRA: On this cheating thing, or is-it-cheating use of AI: in China, at some universities, they're now allowing up to 40 percent of [00:43:00] documents to be written by AI, by ChatGPT. Bernard: The way I would encourage people to use ChatGPT in writing is the following. First, write out what you want to say. Write it in your train of thought. And when you write it in your train of thought, all you need ChatGPT to do is: can you help me check this, or rewrite it in a better way? And this is where it shines. It's about making you communicate more effectively. CRA: That's right. Bernard: It's not about generating that first draft. It can be used to generate a first draft if you are totally blanked out, right? But it is very good at helping you to reorganize. For example, I write out, say, an article in a certain way. And then I ask ChatGPT: can you organize the same article with all the points I've made, but in different variations, and tell me exactly how you think of the outline? That is a better use of the AI than having it write for me. CRA: Now let's move on to the future of AI. What are some of the emerging trends that you're seeing? Bernard: So I see six things happening, and I'm just going to list them point by point. The first is software will be commoditized. [00:44:00] What does it mean? There's going to be a big rise of AI coders. Software engineering used to be a fixed cost, if not a very high cost, within companies. But with the emergence of AI coders, or even the low-code AI tools we have today, what you're going to see is that software is going to be produced very quickly. You can even get to a point where you have the AWS of coding: you can hire variable engineering resources. Whether it's going to be every company building its own, or maybe there is a unifying software, I don't know the answer. If I knew the answer, I'd be very rich. Second would be the improvements in productivity.
It's not just in the functional parts of the business and cost savings, but going towards revenue generation. Today a lot of AI is still about cost savings. There was a very good article by Sequoia Capital: about $600 billion has been invested in AI today by corporations, VCs, et cetera. What is the ROI going to be? That question is coming up very soon. I think [00:45:00] a third trend would be recalibrating all jobs: how do we redesign jobs? The fourth one, which I'm very excited about, is that this will be the first time we can finally have a helper robot in our house, because the large language model can act as the brain for the robot. And we're going to shift from what we call the digital experiences that we see today towards real-world experiences, where the robot is able to have an AI brain and learn processes much faster, like an infinite intern doing physical work. Then the fifth one is the whole debate between open source and closed source models. Just for everyone: GPT-4o by OpenAI and Anthropic's Claude 3 are all closed-source models, Gemini as well for Google. This week, Meta, or Facebook, launched their Llama 3 model. And it's almost at a competitive level with OpenAI's GPT-4o. [00:46:00] Okay, what does that mean? Is AI going to be confined to a few companies? Or can everyone generate their own AI models? That's a big question. And with that open source versus closed source question, there's the other topic that we talked about: small language models, right? You start to see Microsoft and Google doing this. Why are small language models really important? It's because we're trying to put the LLM into the phone. And that is what is going to drive more use cases out there. My last point is the improvement of the database infrastructure within computing. I think a lot of people do not appreciate that today a lot of the databases we use for big data are what we call scalar databases.
It's the rows and columns of your Excel spreadsheet, but stretched out indefinitely. With AI, we are now able to put all the information into a vector. And if you were like me, a theoretical physicist, we love to work with vectors and matrices. A reconciliation between a bank account statement and a purchase order statement is actually searching 58 rows and columns for 58 pieces of information and making sure they [00:47:00] match. If I already have that information in vector form, it's just one dot product of two vectors, just to check how similar they are, and reconciliation is now one computation. So these are the things that will change the way we think about compute. CRA: So the other thing people talk a lot about is AI's impact on society. What are your thoughts on that? What does the future hold for society with these new GenAI-type models? Bernard: I think the GenAI-type models today are still about augmenting the human experience, right? It's about helping you to work better, faster, cheaper. I think these are things that will not change. The thing that really changes is the tools. You have to treat AI as a tool. You cannot treat it as the be-all and end-all. You cannot let AI run mission-critical applications. You already saw last week, where we had the CrowdStrike incident. That was without AI. Can you imagine if it's an AI-triggered event and you do not know where to find it? But that [00:48:00] doesn't mean it is not going to happen. I think that if we are going towards that world, we'd better have a lot of guardrails on it. CRA: What I find funny is there are certain people who will probably think, oh, I can get AI to do my job. The thing is, if you're smart enough to figure that out, your boss is as well. So you might want to use ChatGPT to write your next CV, because if that is your job, it's probably not going to be around for long.
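The one-dot-product reconciliation Bernard describes amounts to a cosine similarity between two field vectors. A hedged toy: the three-field encoding below is invented (a real system would encode all 58 fields, likely with learned embeddings), and the scores only make sense relative to one another.

```python
# One dot product to compare two statements encoded as numeric vectors.
import math

def cosine(a, b):
    """Cosine similarity: the normalized dot product of two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

bank_statement = [1250.00, 7.0, 44.0]   # amount, day of month, vendor id
purchase_order = [1250.00, 7.0, 44.0]
mismatched_po  = [980.00, 15.0, 13.0]

match_score    = cosine(bank_statement, purchase_order)  # identical fields
mismatch_score = cosine(bank_statement, mismatched_po)   # diverging fields
```

A matched pair scores exactly 1.0, and any field divergence pulls the score below that, so reconciliation becomes a single similarity threshold rather than 58 field-by-field comparisons.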
Bernard: I remember Keynes made this comment that human societies would eventually reach a world where we work a four-day week, and then we'd need to find things to do with the rest of our lives. I think we are not there yet. What I have noticed is, we created Excel spreadsheets, which were supposed to reduce our work. We ended up creating more work hours for ourselves. For every new technology, we created 10 more problems for ourselves. CRA: Well, I think that's what's going to happen with this one as well. So next question on this. We've talked a lot about AI, but what about companies: are we going to see them adopting AI right now? And what challenges will companies have as they go down this path? Bernard: You and I agree that digital transformation is not about technology, but about people. A few [00:49:00] things for business executives to think about when using enterprise AI. The vendors are going to sell you the solution, but there are three things they don't tell you, and that's where everyone wastes a lot of money, even today, buying up Nvidia chips as well. The first is, your business case and your use case or problem are not properly defined, number one. Number two, your systems are not designed or even ready to implement AI. Remember the data silos that I talked about. You need to move those systems into a RAG, another buzzword, to create that AI. That will take you at least another 5 to 10 years. The last part they didn't tell you about is the high cost of data collection, model inference, and training. Data collection alone can cost you at least $30K to $100K, which is why you don't see construction companies always using AI to look for safety issues. There are all these safety AI companies that keep trying to sell to construction companies. It's because the cost of actually acquiring that data is extremely expensive.
Now, what has changed is that with generative AI, we can take [00:50:00] the incidents in all the danger zones and write a prompt to say, can you generate all the images where these are the dangerous areas with humans in them? And then they can actually use that prompt to generate the synthetic data. But there are already reports and research papers showing that synthetic data doesn't really do that very well. So where does that take us, right? All in all, business adoption will take some time. What I would urge businesses to think about is a few things. One is governance: where your data and security are. Those are things that do not change. The thing that changes is you need to think about your workflow in implementing AI. You need to redesign the whole thing. Just because I created a workflow for myself five years ago, just for the Analyse Asia podcast, I had to spend two, three days redesigning it for the AI world, because if I just add AI to what I used to do, it actually creates more work for me. So I think there's a lot of rethinking and re-architecting of process, which means that we still have some jobs for ourselves for a while. CRA: That's good. I wasn't looking forward to being unemployed just yet. I want to come to your startup now, because we've gotten this far into [00:51:00] the podcast and we haven't talked about your new startup, Dorje.ai. Of course it was going to be AI; that should not be a shock to anyone. Can you tell us a bit about the company and what problem you aim to solve? Bernard: So I'm just going to be very brief about it. Dorje means indestructible and powerful transformation in Buddhism. So the vision of Dorje.ai is to build the next generation of business operating systems that will take us into an ERP-less future. The problem today is that a lot of medium to large enterprises either have no ERP or have one that doesn't work very well. The vendors would sell the vision of insights, but it never showed up.
The reason is that data is stored in all these different silos. And worse, the ERP ledger is extremely restrictive. When I was at AWS trying to help an energy company deal with their data, it took me two years, and we had to build something outside to pull the data into a data lake so that they could run machine learning on it. That's part of it. And of course, the ERP cost is going up 5% year on year. Remember I said [00:52:00] software will be commoditized. So why are they charging 5% more year on year? One more thing: to customize all the workflows and the automation within it, you need to pay the system implementers. That's usually about $10K to $15K per workflow. So it's extremely unsustainable, right? So what can we do? What Dorje.ai is trying to do is build a new proprietary AI infrastructure that organizes the data, like what the finance teams do. It will take you from today's world, where we will still try to integrate with all the existing ERPs, to an ERP-less world, where you can think about problems like reconciliation being done very quickly. I'll give you one very simple feature. Think of expenses. You take a picture, you use AI to capture all the data, and then you submit it to your manager. And if you're the manager, how many times do you really look at the data to decide whether you approve or not? Have a guess. CRA: I have no idea. Bernard: I checked with all my executive friends: nine [00:53:00] out of ten never check. Now, why do that? Remember the Custom GPT example? What if I take the picture, the AI captures the information, assigns it to the correct cost center, and then an AI agent goes to your travel and expenses policy and checks whether your expense matches the policy? If it does, it should just auto-approve and pay you. Why do you need approval? It should only go to the manager when you have an expense that exceeds budget or that couldn't be identified.
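The approve-or-escalate logic Bernard describes, auto-approve when the captured expense fits the travel policy and send it to the manager only otherwise, can be sketched as a simple routing rule. The policy categories and caps below are invented placeholders, and the receipt-capture and agent steps are assumed to have already produced the category and amount.

```python
# Sketch of policy-based expense routing: auto-approve in-policy items,
# escalate everything else to the manager.

POLICY_LIMITS = {"meal": 50.0, "taxi": 80.0, "hotel": 250.0}  # per-item caps

def route_expense(category, amount):
    """Return 'auto-approve' when within policy, else 'manager review'."""
    limit = POLICY_LIMITS.get(category)
    if limit is None or amount > limit:
        return "manager review"   # unknown category or over budget
    return "auto-approve"

in_policy = route_expense("meal", 32.50)
over_cap  = route_expense("hotel", 400.0)
```

The design point matches the transcript: the manager only ever sees the `manager review` cases, which is the minority of expenses, while the rest are paid automatically.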
Obviously managers don't want to keep approving and rejecting. So it's not just that ERP is very hard and very bad to use. It's also about making ERP great again, making it easier to use. And this is one of the things that we are trying to solve. Hence, what we want to do is build the next generation of ERPs that can help businesses get to what they really want, and not what the current [00:54:00] software companies want them to be. Within the first two months, we've done the customer validation. We've raised a quarter of a million within four weeks. And now I'm actually focusing on product building, but I can talk more about it another time: we're doing very boring things like accounts receivable and accounts payable on this AI infrastructure. CRA: Well, you're using it to solve a problem that most companies have, so I wish you the best with that. But I want to look to the future of AI, not the trends, but what would need to happen for us as consumers, as businesses, as a world, to make the most out of AI and not turn it into the mess that we tend to make of a lot of other technologies? Bernard: I think you should learn how to use the tool, but you should also be aware of what it cannot do for you. I think a lot of people are being too techno-optimistic, or going to the other extreme that this will replace everything. No, it hasn't replaced everything. Okay, trust me, but what it does do is keep getting better. And the one thing I think people should take home is that you [00:55:00] can do more with less. With the kind of AI tools we have today, with that productivity and automation, you can actually build a company with two or three people. Now, what is the risk? It's AI making silly mistakes, right? Don't do mission-critical applications. Okay, if we do, we'd better be sure that it will work.
We're talking about mission critical like landing a plane on a runway. Just for everyone: Airbus planes are highly autonomous. All the pilot needs to do is take manual control whenever it's required. Even landing and takeoff can now be controlled by the plane. But because it's mission critical, it's designed and tested to match certain certification standards. CRA: But what's important is what they're doing: they're providing human oversight on a function that impacts humans. Bernard: Correct. CRA: That's the key premise underneath the ethical use of AI: if it's going to impact a human, there needs to be human oversight. This is a great tool that creates great opportunities, but like with everything else, we need to use it wisely. So I want to close [00:56:00] out with one final question for you. What advice would you give to someone who's not using GenAI tools like ChatGPT today? Bernard: So I'm going to answer by saying: take a learn-and-be-curious attitude. I recommend everyone go to any of the online learning platforms, like Udemy, Coursera, Udacity. What I'm trying to say is, even a person with knowledge in AI like me has to start to relearn it. What I would urge everyone to do is just do the same. As a very good example, my wife has actually gone through Coursera's Generative AI for Everyone, and I think she now has a better hold on how AI can be applied in her business. And in fact, she was the one who alerted me to how to make a Custom GPT work as an AI assistant for myself. And that prompted me to really dive very deep and realize, hey, actually I can do all these things. CRA: I think I'm going to have to go out and do that Coursera course. So I'll ask Ying for a link for that one, so I can make sure I do it. [00:57:00] Thank you so much for helping me demystify AI. I have no idea how many buzzwords we covered off today, but hopefully the audience got some value out of that.
With our two different learning styles, we found some way of understanding it. Thank you once again for being a multiple-time guest on the podcast, and I wish you all the best with your startup and everything else going forward. Bernard: Thank you, Charles. And thank you for actually getting me to think about how to explain it to the layman. I think that was super helpful. CRA: Well, hopefully that's been helpful for you and for the audience as well. All right. Thank you again. Thank you.

© 2025 by Charles Reed Anderson
