
Andrew Ng In Conversation with Toby Walsh

Hear globally recognised leader in AI and Co-Founder of Coursera, Andrew Ng, in a thought-provoking dialogue with UNSW AI Institute’s Chief Scientist Scientia Professor Toby Walsh, shedding light on the latest trends, challenges, and the future of AI.

Transcript

UNSW Centre for Ideas: Dr. Andrew Ng is a globally recognised leader in Artificial Intelligence. He is the Founder of DeepLearning.AI, and an Adjunct Professor at Stanford University’s Computer Science Department.

As a pioneer in machine learning and online education, Dr. Ng has changed countless lives through his work, and has authored or co-authored over 200 research papers. In 2013, he was named to the Time 100 list of the most influential people in the world.

Here Andrew joins in conversation with UNSW AI Institute’s Chief Scientist, Scientia Professor Toby Walsh, as they shed light on the latest trends, challenges, and the future of AI.

Toby Walsh: One of the things that's been driving this ability to scale systems has been scaling data and scaling compute… I wonder, I mean, in some sense, looking back at what you did when, when you were playing with YouTube and discovering cats, did you see that scaling was going to have the impact…? Because in some sense that was perhaps one of the first examples where we threw a lot of computing and scale at the problem and saw something interesting emerge.

Andrew Ng: Yeah. So this is, boy, this is ancient history. But many years ago, when I was at Stanford, one of my students, Adam Coates, had generated a very early version of a scaling [indecipherable] thing, where, using just CPUs, Adam showed that, basically, the bigger he could build a model, the better it did, at least as far as we could scale at Stanford. And it was inspired by that draft that I went to Google and, you know, kind of respectfully asked Larry Page, ‘You have a lot of computers, would you let me use Google's computing infrastructure to scale up neural networks?’ So that's how Google Brain started. It was actually a really oversimplified direction. I told the Google Brain team, ‘Let's just build really, really big neural networks and focus on that as a primary mechanism for driving growth’. And fortunately, that recipe worked out. I did get one thing wrong. I think I over-emphasised, you know, unsupervised learning, but, as it turned out, supervised learning with scaled-up data worked really well. Then Google Brain published the transformers paper, and I think self-supervised learning continued to take off. But, I think scaling up has had a very good run and I don't see it running out of steam yet. I think that for at least a few more years, maybe many more than a few more years, the need to scale compute and data will drive additional progress. In fact, we've seen the text processing revolution here, right? ChatGPT, clearly. I think we’re at the beginnings of the image processing revolution, where large vision transformers aren't quite there yet, but are kind of starting to work really well. And then there's another barrier: I don't think any of us has enough compute to process video, which increases the compute requirements by another one or two orders of magnitude. So I don't think the scaling is running out of steam for quite some time yet.

Toby Walsh: So that's going to have a really big impact in somewhere like vision. What about robotics? Because data’s harder to get. Robots aren't as easy to scale as cameras are, or as text has been.

Andrew Ng: So, you know, many years ago… so my PhD thesis was on reinforcement learning, and my team and I worked on robots. So I love robotics. So with that caveat — I love robotics, and you know, I still am an amateur drone pilot and so on — I'm not convinced that in the next few years robotics will undergo the similar inflection that we saw with text, and that we're seeing with images now. Mainly because of what you say, Toby, because of the data problem. A lot of my friends, and I'm sure a lot of researchers here at UNSW, are, you know, trying to find that recipe for unlocking that rapid growth.

Toby Walsh: So what about transfer learning, or learning in simulation? Are they going to be able to get us over that hump?

Andrew Ng: You know, I think robotic perception will work really well, and I think that will have implications for self-driving cars, for example. But every robot is just kind of unique, so getting enough data for the standard scaling recipe to work still seems quite challenging. And there are exciting projects like: can we accumulate data from robots all around the world and mush it together, and do transfer learning? But even when you have all sorts of academics collaborating to share data, you know, the amount of data is still orders of magnitude less than the amount of text data we have, for example. So again, maybe there'll be a recipe… I do think robotic perception is going to work really well in the next few years, because you can transfer from text and video off the internet, for example, especially for vision sensors. And when robotic perception works really well, that will unlock a lot of value. But I think the end-to-end problem… for example, I think self-driving cars will make a lot of progress in the next few years, but it's been frustratingly slow, right? Knocking out those corner cases of self-driving cars. But I think that vision transformers pre-trained on internet images, with some adaptation, I think that will lead to revolutionary progress. But outside of perception, I'm struggling. Maybe you will figure it out, and you can tell me how to do it.

Toby Walsh: Well, this is a challenge for the young, ambitious students in the room to go off and solve robotics that we haven’t been able to solve. Text and vision, yeah.

Andrew Ng: Oh, and in fact, even though it may sound like I'm being slightly pessimistic about robotics in the next few years or something, these are great problems to work on as researchers. Because our research problems should be targeting what might work five years or even ten years from now. So go solve it. And I don't know how to make it happen in two or three years, but maybe one of you will make it happen, you know, in two or three years or five years, and that would be great.

Toby Walsh: So I've got to put you on the spot because you brought up self-driving cars. So how long before self-driving cars are going to be driving? Cruise just had its licence taken away in San Francisco. 

Andrew Ng: Yeah, Cruise has its share of problems. I think Waymo is doing decently well. I find that, you know, with technologies, we tend to be good at predicting what will happen, but not when. I recently launched a project… there was a project I was working on, I won't say which one… and I think my team said, you know, ‘We can totally launch on exactly the month we predicted, just not the year we predicted.’ I think it'll be a handful of years. I think that in limited contexts, limited geofenced areas, we'll start to see them much more in our day-to-day lives in, I'm going to guess, five years, but I really don't know. I find it really difficult to predict this one.

Toby Walsh: So some of the other challenges that still face us. Even if we go back to large language models — they're notoriously not the best reasoners in the world. My favourite example I discovered just this week: you ask ChatGPT, ‘How many b’s in banana?’ It says, ‘There are three b’s in banana.’ So how are we going to solve that problem?

Andrew Ng: I don't want to defend ChatGPT, but it turns out that because of the tokenizer, counting b’s in banana is really difficult.

Toby Walsh: But there, you're right, it's tokenizing words, not letters. So yeah, that's a particularly tough example.

Andrew Ng: But if you put spaces between every letter of banana, I think it’ll get it right, because then it tokenizes each letter.
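(A minimal sketch of what's going on, assuming the openly available tiktoken tokenizer library: the whole word is only a token or two, so the model never sees individual letters, while the spaced-out version becomes roughly one token per letter.)

```python
# Sketch only: shows how a tokenizer splits "banana" versus "b a n a n a".
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models

for text in ["banana", "b a n a n a"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode([tid]) for tid in token_ids]
    print(f"{text!r} -> {len(token_ids)} tokens: {pieces}")
# "banana" comes out as only a token or two, while the spaced version is
# roughly one token per letter, which is why letter counting gets easier.
```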

Toby Walsh: The error there is interesting, because there are lots of people who ask the question ‘How many a's in banana?’, and ‘How many n’s in banana?’, because people get that wrong. So there's lots of data on the internet that it's been trained on. So presumably the reason it's saying there are three b’s in banana is because all the questions about letters in banana have the answer two or three, and it just picks two or three.

Andrew Ng: Yeah. 

Toby Walsh: Similarly, if you ask it, ‘How many s’s in banana?’, it says there are three s’s in banana.

Andrew Ng: I see. Yeah. Yeah. 

Toby Walsh: Because it's going to tell you two or three to any question about counting letters in bananas.

Andrew Ng: I see, that's interesting. Yeah. You know, I think that we know LLMs do hallucinate, and fortunately we have a few technology paths to reducing hallucinations. This one, I think, is a tokenizer problem, so how do we solve that? You know, I think that’s some specialised thing. But I find RAG — retrieval-augmented generation — really promising for reducing hallucinations in other contexts. And there are these other things that won't solve the banana thing, where, you know, you run the same query ten times and see how consistently the LLM gives the same answer; that turns out to significantly reduce hallucinations. Retrieval-augmented generation is when you do a classic retrieval step, find a trusted, authoritative document, and ask the LLM to answer using that doc in its context, and maybe even… that helps you ground the LLM's answer in the specific trusted document, so that helps. So, I feel like there are hallucinations and some really bad examples of LLMs making things up, but we have enough paths… and even humans sometimes get our facts wrong too. So I think, you know, I'm encouraged by the different technology paths to reduce different types of hallucinations.
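(A minimal sketch of the two ideas just mentioned — running the same query several times and checking consistency, and retrieval-augmented generation — using the OpenAI Python client purely as an illustration; the model name and the retrieve() stub are hypothetical placeholders, not a real system.)

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return resp.choices[0].message.content.strip()

# 1) Self-consistency: ask the same question several times and keep the most
#    common answer; answers the model is unsure about tend to vary run to run.
def self_consistent_answer(question: str, n: int = 10) -> str:
    answers = [ask(question) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# 2) Retrieval-augmented generation: retrieve a trusted document first, then
#    instruct the model to ground its answer in that document only.
def rag_answer(question: str, retrieve) -> str:
    doc = retrieve(question)  # hypothetical retrieval step (search index, vector store, ...)
    prompt = (
        "Answer the question using ONLY the document below. "
        "If the document does not contain the answer, say you don't know.\n\n"
        f"Document:\n{doc}\n\nQuestion: {question}"
    )
    return ask(prompt)
```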

Toby Walsh: I mean, it goes to this fundamental schism that's always haunted the field, between the symbolists and the machine learning people, or the probabilists. You know, if you look at — let's move away from b’s in banana — if you look at trying to get a language model to do arithmetic or multiplication, it's clearly struggling to do the abstractions, and to use the sorts of symbols and abstractions that are needed to solve those problems.

Andrew Ng: So if you want to get an LLM to do math — this is… I don't really want to, because I have a calculator — but if I wanted to get an LLM to do math, this is how I would probably approach it. A human brain is terrible at math. If you ask me to add two huge numbers unassisted, you know, it's not that easy. But humans can do math decently well when you give us a scratch pad, right? Intermediate steps, written down in between. So I think the way to do it is to fine-tune an LLM to, kind of, think out loud, or to generate the intermediate steps, to do arithmetic. I think it would be much better. Just like a human brain. Analogies between LLMs and the human brain are always dangerous. But just as humans need a scratch pad to do, you know, arithmetic, I think if you fine-tune an LLM to use a scratch pad to do arithmetic, it will do much better than having it do the calculations unassisted. So there are these recipes — if you really want to solve it, you could do it. But then, I think the human brain is really bad at math as well.
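(A minimal sketch of the "scratch pad" idea: rather than asking for the answer directly, the prompt asks the model to write out its intermediate steps before the final answer. Prompt-only illustration; no particular API is assumed.)

```python
# Sketch: build a scratch-pad style prompt for addition instead of asking directly.
def scratchpad_prompt(a: int, b: int) -> str:
    return (
        f"Add {a} and {b}.\n"
        "Work it out on a scratch pad: add the numbers column by column, writing out "
        "each carry, and only then give the final result on the last line as "
        "'Answer: <number>'."
    )

# Direct question (often fails for long numbers): "What is 482759 + 938271?"
# Scratch-pad version (tends to do much better):
print(scratchpad_prompt(482759, 938271))
```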

Toby Walsh: So I want to move on to the opportunities. Friends coming back from Silicon Valley say, ‘You can't have a conversation in Silicon Valley these days without people talking about generative AI and the opportunities, and what they're doing.’ Is that true?

Andrew Ng: That is true. My friend was visiting from overseas. I won't say which country, but he hung out with us in Silicon Valley for about a week. Then he went back to his own country, and a few days later he texted me and said, ‘Yeah, I’m sitting in the coffee shop and no one's talking about AI, it’s so weird.’

Toby Walsh: So, is that tsunami going to come here do you think? Or do you think the wave is going to break? How is it going to be?

Andrew Ng: So, it feels like… I’m going to try to give an actual answer. It feels like the wave’s not yet broken, but there’s one thing I worry about. So if we look at the fundamental value from AI, I think that there's a path for significantly greater value creation than has been realised so far. Because generative AI and, you know, supervised learning are general purpose technologies that are useful for a lot of different tasks. In fact, there are research papers on GPTs — generative pre-trained transformers — as general purpose technologies, and other economic analyses. You know, depending on who you believe, some large fraction of all human work is amenable to automation or augmentation by, you know, pre-trained transformers. Some publish numbers like 15%, others estimate 50%. So I don't know, but whether it's 15% or 50% of all human work, that's just a lot of stuff that AI can help with in the next few years. It will take us a long time to get there, because, just as with supervised learning, it took all of us collectively a long time to figure out how do you use supervised learning in analysing X-ray images? Or how do you use it to rent RoboCop, or how do you use it to make ships more fuel efficient, or to sell more ads, and so on? It took a long time to identify and build these use cases. So with generative AI too, I think it will take annoyingly long, say, you know, like a decade or something, to go and identify and build all these applications, all these use cases. So I think this will continue to go on for a long time. The one thing I worry…

Toby Walsh: Don’t you think it’s going to happen quicker this time? I mean, just as data points, right, as an example: OpenAI is supposed to have $1 billion in annual revenue now. A local example, Canva, an Australian company — they've got their magic AI generative AI tools, and they've got 100 million users in less than six months. Supervised learning never got adopted that quickly.

Andrew Ng: Oh, so I think that supervised learning for online advertising is worth more than $100 billion to Google today. And I think that there are millions of developers working on supervised learning applications. And so I think it's fantastic, you know, the reports that OpenAI has surpassed the billion [indecipherable], and I think ChatGPT Enterprise feels like a compelling product, and maybe the consumer thing is also a fantastic product. And I think there's still a lot more upside, and it does take a while to go from a billion to a hundred billion dollars. But I hope the industry will keep on growing.

Toby Walsh: So there are, I can assure you, lots of entrepreneurial young students in the room. Have you got some advice for them, in terms of how to exploit generative AI? How to ride this wave?

Andrew Ng: Yeah, so just a few thoughts. It turns out a lot of media attention with every wave of tech innovation tends to be on the infrastructure layer. The media likes to talk about, you know, Google, AWS, OpenAI, Cohere — that list of infrastructure-providing companies. Which is fine, nothing wrong with that. But it turns out that this infrastructure and developer tooling layer is hyper-competitive. Look at all the startups chasing OpenAI. There's one other sector with huge opportunities that tends to be underappreciated, which is the application layer, built on top of the infrastructure layer. In fact, for the infrastructure and tooling companies to be successful, the application companies have to be even more successful so they can generate enough revenue to pay the infrastructure layer. So what I'm finding… at AI Fund, which is a venture studio that I lead, we often find applications where, you know, there's a huge market need, but the number of competitors is much lower. One of the nice things about UNSW [indecipherable] as well, is you have such a diverse collection of faculty and use cases that I suspect you run into a lot of applications where the competition intensity isn't as high, and there'll be lots of chances to build entrepreneurial activities.

Toby Walsh: I suspect there’s also going to be a technical reason for that as well, which is this idea that the quality of the data and the fine tuning of the data to the application is going to really make the technology shine.

Andrew Ng: Sometimes. There's this idea of a data moat — that data may make a business defensible. I find that the analysis of data as a defensible advantage for a business is sometimes very valuable, and sometimes not at all, or much less valuable than most people think. To take one example, take self-driving cars. It turns out that to go from 97 to 98% accuracy — just making up numbers — is way easier than going from 98 to 99% accuracy. So all that data you collected to get to your level of performance — it's much easier for competitors to catch up to you than for you to extend your lead on the competitor. So the data is almost an anti-moat: it's easier for people to catch up to you than for you to maintain your lead. But then the dynamics are, if you're in a winner-takes-all market, you only need a tiny advantage to take the entire market. So whether or not you have that kind of superstar, winner-takes-all dynamic also factors into the analysis. On the flip side, I think Google in web search, through the real-time feed of knowing what people click on — which does change day to day, driven by the media and other changes in society — that is valuable. So the fresh data that Google has is actually very difficult for others to replicate. So I find that the analysis of how defensible data is, and how important it is to get a business started, you know, is actually very complicated. Sometimes it's huge and really helpful, and sometimes, you know, startups with a modest amount of data can get something working well enough that customers basically don't care whether a competitor's product is marginally better, and that's enough for them to keep on building. One more example: speech recognition. I think for a lot of applications, you know, speech recognition works well enough that if you eke out another 0.1%, customers kind of don't care beyond a certain point. So if you have a little bit less data, that's just fine. You can start to enter the business, accumulate data and keep going. So I find the analysis of data as a moat is very complicated. And actually, on average, I think people tend to overestimate the value of data, if I'm being really candid. It is very valuable, but I've seen more big company CEOs overestimate the value of their data than underestimate it. But sometimes it is very valuable. So as long as…

Toby Walsh: No, no, no, we’re gonna park that thought for a moment. I'm going to come back and ask you about data in a second, when we start talking about education, because I think there's an important conversation there about how we’re going to exploit the data. But let's pull back the focus a bit, and start thinking about the broader goals of artificial intelligence. Both you and I have been working in AI for many years with ambitions to be able to solve AI, right? And we've seen supervised learning have its moment in the sun over the last ten years. We're seeing generative AI catch the light now. I mean, you started out with reinforcement learning. When is… reinforcement learning always struck me as a beautiful idea. That's how, in some sense, we learn, right? We're sitting in the world, getting reinforcement or reward back. When is… maybe you should explain, first of all, to non-tech people in the audience, what reinforcement learning is, and then explain when reinforcement learning is going to finally succeed and have its moment — or become part of the success story of AI.

Andrew Ng: Yeah. So reinforcement… supervised learning is good at learning things, right? Input, output, labeled data. Reinforcement learning — I think of it as akin to training a dog, you know? My family used to have a dog, and it was my job to train it. So, how do you train the dog? Whenever it does something good, you go, ‘Oh, good dog!’ Whenever it does something bad, you go, ‘Bad dog’. And hopefully over time, it learns to get more of the good dog rewards and fewer of the bad dogs. And this is how I trained a helicopter to fly autonomously, right? We let the helicopter fly, in simulation. When the helicopter did a good job, we'd go, oh, good helicopter! And when it crashed, we'd go, bad helicopter! And then over time it learned to maximise the frequency of the good helicopter rewards. It seems to be a key component of how humans learn, but I think most of us really have no idea how the human brain works. And a key part of training LLMs is RLHF, reinforcement learning from human feedback — so it is a key component of training LLMs. But RLHF for robotics? I don't know. I'm not seeing signs that it will unlock that rapid growth in the next two or three years. But maybe beyond that, hopefully. In tech, so many things that don't work — right? — three or four years later, because of a breakthrough, end up being really valuable. I will say, though, even though generative AI is where a lot of attention is right now, I feel like generative AI is additive to supervised learning, in terms of increasing the set of tools in the portfolio. It turns out a lot of generative AI is based on neural networks trained on a lot of unlabeled data, and we have a lot of unlabeled, unstructured data, meaning text, images, audio, maybe video. Which is why generative AI pre-trained on, you know, tons of text is so useful when you write a short prompt to a large language model. But it turns out that a lot of the world's data is structured data, meaning tabular data — lists of numbers, kind of, in a giant Excel spreadsheet. So if I have a spreadsheet showing departure and arrival times of ships, you know, that's structured data, like a giant Excel spreadsheet. And it turns out that, you know, there aren't trillions of databases of ship times on the internet to pre-train on and apply to my data. So for a lot of structured data, which is actually the majority of data in many enterprises, supervised learning is actually still the better tool. And beyond the PR and what people hear about — online advertising is still massively lucrative, and that's much more supervised learning than generative AI. Although generative AI is making inroads into, you know, for example, writing better ad copy.
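(A toy sketch of the "good dog, bad dog" idea: an agent tries actions, gets a reward signal, and over time learns to prefer the actions that earn the most reward. The actions and reward values here are invented purely for illustration — a tiny epsilon-greedy bandit, not the helicopter work described above.)

```python
import random

true_reward = {"sit": 1.0, "bark": -0.5, "fetch": 0.5}   # hidden from the agent
estimates = {action: 0.0 for action in true_reward}      # the agent's learned values
counts = {action: 0 for action in true_reward}

for step in range(1000):
    # Mostly exploit the best-looking action, occasionally explore a random one.
    if random.random() < 0.1:
        action = random.choice(list(estimates))
    else:
        action = max(estimates, key=estimates.get)

    reward = true_reward[action] + random.gauss(0, 0.1)   # noisy "good dog" / "bad dog" signal
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # "sit" should end up with the highest estimated reward
```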

Toby Walsh: I've always wanted to ask you, are you a good helicopter pilot?

Andrew Ng: Yeah. So so. 

Toby Walsh: So so! 

Andrew Ng: I'm a mediocre drone pilot, I think. Yeah. 

Toby Walsh: Don't tell me the algorithms are better than you, now. 

Andrew Ng: Who?

Toby Walsh: The algorithms.

Andrew Ng: Oh, for many years the algorithms have been way better than me.

Toby Walsh: Actually, well, actually, the algorithms… I mean, there was that Nature paper recently where, flying racing drones in Switzerland, the algorithms were better than humans, right? Yeah. We can’t actually fly them as fast as that. So all of us are beaten by the algorithms. Let's, let's pull back even further, and talk about AGI. When are we going to succeed?

Andrew Ng: Yeah. Oh, sorry, before I get to that, there's one thing I want to say about entrepreneurial activities. So I know UNSW is a very technical organisation — not everyone, I know you have a great arts faculty and so on as well. But there’s one impact of GenAI that may be underestimated, which is — hopefully many of you will have used, you know, ChatGPT, or Bard, or something, as a consumer tool. I think it's a fantastic consumer tool. And I use, oh, various of them, as a thought partner, and it makes me more productive. So I think people broadly understand the use of large language models as a consumer tool. I think many people still underestimate its impact as a developer tool, because there are a lot of applications that used to take me 6 to 12 months to write — on unstructured data, like text processing, not structured data. Like building a sentiment classifier, you know — it would have taken me 6 to 12 months to build a production-grade, robust text processing system. Any of you, if you know a little bit of Python and a little bit of prompting, can call an OpenAI or a Google PaLM API, and build an AI system for doing certain text processing operations. And you can really do that in a week. That used to take me, and very good AI teams, like, six months to build. And this dramatic lowering of the barrier to entry means that there'll be a flourishing of a lot of AI applications. In terms of building the applications, if you find the right use case, you know, I now see two-person teams building prototypes and getting customers and going to market to do stuff that previously would have taken a dozen engineers six months to build, and that’s very exciting, in terms of opening up opportunities for entrepreneurs.
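(A minimal sketch of that kind of quick build: a prompt-based sentiment classifier in a few lines of Python, using the OpenAI chat API purely as an illustration. The model name is a placeholder, and a Google PaLM or other hosted LLM API could be swapped in the same way.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_sentiment(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Classify the sentiment of this review as positive, negative or "
                       f"neutral. Reply with one word only.\n\nReview: {text}",
        }],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()

print(classify_sentiment("The battery died after two days and support never replied."))
# -> expected: "negative"
```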

Toby Walsh: And also just, I mean, writing, you know, boilerplate code, I mean just writing code.

Andrew Ng: Yes. Yeah. 

Toby Walsh: I mean, most of the time I'm writing a function, I'm thinking, someone's written this before, if only I knew where to look it up, or if I write the prompt…

Andrew Ng: Yeah. 

Toby Walsh: It's a straightforward function that could be written for me.

Andrew Ng: Yeah, I think so. When I prompt LLMs to write code, you know, half the time, maybe three quarters of the time, it’s buggy. Some people ask, is it still worth learning to code? I think yes, at least for the near future. You know, the code generated by LLMs is buggy enough, and there are deeper things LLMs have a hard time figuring out. But I find that where I think we’ll end up in the next few years is that a lot of developers will use an LLM almost as a pair programmer. My friend Lawrence Moroney, who's leading AI [indecipherable] at Google, uses an analogy that an LLM can be a thought partner, to help you write some code that you should double-check. It turns out it’s really, really good at writing comments, writing documentation, as well. So it becomes a partner that makes you much more productive, if you've learned to code in Python or something. I find that — I think now and for the near future — people that know a little bit of coding, plus how to prompt an LLM, will be able to do much more than people that only know how to prompt an LLM.

Toby Walsh: Okay, I want to shift the focus again now, to talk about education, and AI in education. But, first of all, I want to thank you on behalf of the audience, because I think it's 8 million people you’ve taught AI to, to this day?

Andrew Ng: Yes.

Toby Walsh: That’s remarkable. More than anyone else on the planet. What have you learned from teaching 8 million people? About AI?

Andrew Ng: You know, I think people tend to give me too much credit. I feel like, you know, I provide, like, a little bit of, you know, educational resources. And over the years I've been thrilled and humbled at the number of people that can take my little contributions, and then build amazing things — right? — from that little start that I maybe helped them with. I’m actually quite humbled and thrilled and grateful to the many people, you know, that do the real work, which is — right? — to learn online, and then go on to build all these amazing things. And one thing I'm excited about is to keep on working to democratise access to AI. And I think that generative AI offers us an even bigger opportunity to do that, because it further lowers the barrier to entry for developers — learn a little bit of Python, a little bit of prompting, and get some AI thing to work. And I think this is important because we live in a world where, you know, kind of, everyone has data. Big companies have data, small companies have data, even a high school student running a biology experiment, you know, they're recording biology data. So, kind of, everyone has data. But as we continue to lower the barrier to entry, to let individuals build AI that works on their own data, I think that will help society create a lot more value. So I think we should, hopefully, all of us collectively, go teach your friends about AI, help spread the word, encourage them to learn to code, learn to prompt. I think society has changed in terms of the access everyone has to data, so empowering people to get stuff to work on their own data I think would create a lot of value.

Toby Walsh: Well, I appreciate your modesty, Andrew. We parked the issue about data just before. What have you learned from all the data that you've been able to collect from all the people who have been using your courses? Is that… I'm not sure — are we profiting from it enough? Are we data-driven enough in the way that people are learning yet?

Andrew Ng: I think that…

Toby Walsh: I've asked a good question. 

Andrew Ng: No. Yeah. So it turns out, I feel like most businesses and individuals are far from exploiting the potential of their data. But to be fair, I think humanity learned to count – right? – a long time ago. And I think most businesses have not fully exploited the value of counting either, or of A/B testing, you know? So I think that we collectively have a long way to go to figure out how to make better use of our data.

Toby Walsh: And specifically, you know, in education?

Andrew Ng: Oh, in education, sorry. I think we have a long way to go. And it turns out education is one of the less digital businesses, one of the less digital types of human activity. If you look broadly across the economy, some types of activities tend to be more digital. For example, you know, the tech sector — clearly that’s digital — but financial services is pretty digital, health care, with electronic health records, is pretty digital. Historically, and even now, in the education sector we do a lot of things in real life, and there just isn't a data asset — you know, what exactly was said to the class, or how a student answered a question. So education is actually relatively data-poor, compared to other segments of the economy. But hopefully as education becomes more digital, with more people working to take advantage of the data… I think there's a long way to go for education.

Toby Walsh: But equally, I mean, it's a very personal activity, right? It's no good to me being told that, well, on average, people learn like this, if it's not the way that I'm going to learn. 

Andrew Ng: I think, you know, on personalization — I think Khan Academy did a really nice job on Khanmigo, and Coursera has released Coursera Coach, which actually was pretty well done, if you've tried it. If you ask it factual questions about how to think about something, it's actually, you know, pretty good. But again, I think that level of personalization, with, you know, road maps of cyclical improvements over the next period of time — I think we still have a long way to go.

Toby Walsh: How then is generative AI going to change education? How is it going to change what Coursera does? 

Andrew Ng: Yeah, so some things that Coursera has talked about… I think Coursera Coach works. It’s really pretty… when I'm taking a class and have questions, it’s actually pretty good at helping me answer my questions. And there seems to be pretty positive feedback from Coursera’s learners on Coursera Coach. Coursera’s been experimenting with course builder — content generation using AI. I think the whole content-generation-using-AI sector — not speaking about Coursera, but speaking about, kind of, the hundreds of efforts — it feels like it’s relatively early, and actually quite challenging to get to work. And it feels to me that the AI revolution in education has not yet come. I think collectively, you know, Coursera and others developing AI are running a lot of experiments — for example, do you want avatars to teach, or how much personalization? It actually feels like we don't have the right ideas yet. Oh, but there is actually one other thing I spend a lot of time thinking about, which is, if you look at, say, introductory programming, I think that a few years from now coding will be very different than it is today. So frankly, hanging out with friends from OpenAI and Google — you know, I learned to do this myself from hanging out with them — they code in a very different way than any of us used to a year ago. And I don't mean GitHub Copilot. I mean, kind of, you know, prompting GPT-4, right? Or prompting PaLM, in order to write code. So actually, a couple of weeks ago I needed to… I was playing around with some NLP thing, and there's an API and I did not feel like reading the documentation. So you know, I got GPT-4 to write the initial version, which was buggy, and then I looked up the documentation and I fixed it, right? That's just the workflow. I see my friends in the big tech companies using it; it is very different. So one exciting thing I've seen in education is — we don't totally know what coding will be like five years from now, but I see some of my friends in education trying to change curricula to teach, you know, where we maybe think things are going in five years, even though we don't totally know. For example, some things I'm playing around with — the tradition has been to teach print Hello, World! You'd be surprised how many… maybe you've experienced this yourself. We teach print Hello, World!, which I've done, and so many learners are missing the closing quote, or something, and will stare at that for 20 minutes and go, my code is the same as the instructor's, I don't understand why my code doesn't work. Well, it’s missing a closing quotation mark! It’s really hard for a learner, coding for the first time, to realise they’re missing a closing quotation mark. I think GenAI…

Toby Walsh: I love the square brackets in lists — have we closed all the open brackets?

Andrew Ng: I see, yeah, yeah. I know! How did we invent these programming languages? And I think… hopefully that changes. And I think we should teach learners that if it doesn’t work, paste your code into an LLM and ask it, where is this code buggy? Right? I think teaching developers really early on how to incorporate LLMs into their workflow, not as a crutch, but as an enabler — we have to figure that out. I see those shifts in curriculum, and this is just computer science. Imagine if you have a business class on marketing, or a mechanical engineering class on some simulation thing. Can we start to imagine what that workflow will be like five years from now, and start shaping curricula to teach learners, not how to do their work today, but how they will be doing it a few years from now? I find that very exciting.
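(A minimal sketch of that debugging workflow: take the learner's broken line and ask an LLM where the bug is. The OpenAI client and model name are used purely as placeholders; any hosted or local LLM could play the same role.)

```python
from openai import OpenAI

client = OpenAI()

buggy_code = 'print("Hello, World!)'   # the classic missing closing quote

prompt = (
    "This Python code doesn't run. Where is it buggy, and how do I fix it?\n\n"
    + buggy_code
)

resp = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
# Expected style of answer: the string literal is missing its closing quote.
```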

Toby Walsh: So we're sitting here in university and universities are very slow moving institutions. How are AI and MOOCs and all of this digital education going to change… how do you see the positive way that they're going to change the university? 

Andrew Ng: You know, so, you know, I don't know the specific answer to that, but let me share one lesson I’ve learned. And this is useful for universities, but also maybe for any of you that have entrepreneurial or even academic ambitions, which is: my team has worked with many large organisations — let’s say big companies or businesses — to look systematically at what all the people in the business do. And it turns out that even though we read in the news about AI automating jobs, AI doesn't automate jobs, it automates tasks. So my team’s workflow — and I learned this from Erik Brynjolfsson, and you know, Daniel Rock, and Tom Mitchell and Andrew McAfee — is: if you go to, say, a university or a large business, and look at the jobs of the people in the university or the business, take the jobs and break them down into tasks, then identify tasks for automation; that often gives opportunities for driving efficiencies. So actually, let me use computer programming as an example. If you ask, how can AI help programmers? People's mindset is, oh, programmers write code, let’s just use AI to write code — that's fine, nothing wrong with that. But if you look at the tasks the programmer does: programmers write code, they write documentation, they debug code, some programmers end up helping with customer support, programmers often end up gathering requirements, and on and on and on. There are online job databases or employment databases that take many job roles and break them down into maybe, you know, 10 to 20 tasks. And if you analyse it at a task-by-task level, it turns out AI is very good at writing documentation — I think even better at writing documentation than writing code, at this moment in time. You often spot automation opportunities that then transform the workflow of an organisation. I've not done this exercise myself, but I'm actually curious: if you look at the different roles in the university, from the faculty, to staff, to various administrators — what are the tasks they do, and where can AI augment and automate some of those tasks to help individuals be more efficient? I think that would be exciting to work through in the academic context.

Toby Walsh: Okay. We're going to go to your questions very, very shortly. I promised Andrew — I asked him in advance — we’re going to have a rapid question round now. You're only allowed to answer in one or two words each question: a two-minute rapid question round! You have to say the first thing that comes to mind. Okay? First computer?

Andrew Ng: I owned an Atari something or other, way back. 

Toby Walsh: First computer language?

Andrew Ng:  First computer language? BASIC.

Toby Walsh: Favourite computer language?

Andrew Ng: Oh, Python, sorry. 

*Audience applause*

Toby Walsh: Vi or Emacs?

Andrew Ng: What? Vi? 

Toby Walsh: Okay, perfect happiness for you?

Andrew Ng: Oh, no. Writing code. 

Toby Walsh: Greatest fear?

Andrew Ng: Right now? Really worried about bad regulation. Stifling. There’s amazing stuff that we could be doing with AI. 

Toby Walsh: Traits you most deplore in yourself?

Andrew Ng: Probably, my friends think I'm going to kill myself, I'm working too hard, so I probably shouldn’t do that.

Toby Walsh: What talent would you like to have? 

Andrew Ng: I wish I was better at coding. Sorry.

Toby Walsh: Okay, two more questions, and then it will be questions from you, the audience. Greatest regrets? I promise you, the last question’s easy.

Andrew Ng: I wish I'd done more faster, and earlier. 

Toby Walsh: You’ve done quite a lot. Now, to understand the last question, you have to know that Andrew was originally British. So, Marmite or Vegemite?

Andrew Ng: I can't get Vegemite in the US, it's not my fault, you know. 

Toby Walsh: Okay, Thank you. 

Andrew Ng: Thank you. 

*Audience applause*

Toby Walsh: And now the questions are coming from you. 

Audience Question: All right, there are lots of questions. So let me just start: as AI models, especially LLMs, become more black box, what is your advice for young students who want to do research in this area? How can they get started?

Andrew Ng: It turns out that one of the challenges of academia is, you think, boy, if I don't have $50 million, how do I do LLM research? It’s so capital intensive to train them. But maybe two things. One, because of scaling laws, innovations at a small scale often actually translate surprisingly well to the large scale. In fact, I don't know if any of you have heard of the FlashAttention research paper, from Chris Ré and some folks at Stanford. But this gave a significant boost to the efficiency of training LLMs, and it was basically a smart PhD student that spent a lot of time thinking long and deeply and hard about how to reorganise the data flows and how you train the transformer network. And once the paper was published, all the large companies and so on adopted this. So this was really done at a small scale at Stanford, and it wound up having a really huge impact. So that's one, if you're going to do research. And also, it turns out, fine-tuning is pretty cheap. So work like Alpaca and [indecipherable] from Stanford and Berkeley was really done at the $100 level of resources, as available to a university. And on top of LLMs, I think there are a lot of opportunities to build applications as well. So, I don't know, some of my students were recently looking at applying this to reading electronic health records — we actually couldn't use the cloud, because of patient information, so it's run on separate servers. So there’s a lot of work. I do see that the media attention, the media environment, distorts things. Big tech companies have huge PR budgets, so work done by big tech companies often just gets disproportionate attention, and that skews people's perception of what can or cannot be done. Whereas if you go to a conference like [indecipherable], there’s so much work done in universities as well.
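(A minimal sketch of why fine-tuning can be cheap, using the Hugging Face peft library for LoRA: only a tiny fraction of a pre-trained model's weights are trained, so it fits on modest hardware. The base model name is just an example; any small open model could be substituted.)

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # example small open model

lora = LoraConfig(
    r=8,                    # rank of the low-rank update matrices
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",  # decoder-only language model
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of the weights are trainable
# Training then proceeds as usual (e.g. with transformers.Trainer) on a small dataset.
```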

Toby Walsh: I was interviewing Peter Norvig here at UNSW, and he said, the next breakthrough is not going to happen at Google. It's going to happen at some random university that you haven't heard of. 

Andrew Ng: I don’t know, Google is very good, but universities are too. 

Audience Question: Great. So I'll take a question that I think is coming from industry. What are the most important trust and safety areas for the current generation of AI products? And can you give an example of AI companies that are doing safety best?

Andrew Ng: So let's see. I think that, boy — trust and safety is a very broad topic, because there are many ways that AI broadly, not just LLMs, can go wrong. You know, I think this stereotype of AI teams being mavericks, that they just do whatever, is really not true. My friends at the big tech companies, and the startups, all generally take the responsibility of AI very seriously. I've been part of groups where, before launching a product, we would sit down and brainstorm all the ways it could go wrong, and invite in diverse stakeholders, and take that exercise really seriously, and then, you know, kind of, do our best to mitigate, and have backup plans to roll back. And I think that if you prompt an LLM now, you know, it's actually getting more and more difficult to get it to say something it should not. It’s still possible, but it takes many more tries than it used to six months ago. So I think LLMs are actually much safer than they used to be. Yeah.

Toby Walsh: But what about the way that they're going to be used for misinformation first? 

Andrew Ng: Yeah. So in terms of misinformation, there was really nice work out of Princeton showing that the bottleneck for misinformation is not creation, but dissemination. You can get a human to write a bunch of false things, but getting a lot of people to pay attention to that false thing has been surprisingly difficult. Now, I do wonder, if we personalised misinformation to influence people at a one-on-one level — I'm a little bit worried about that. But interestingly, even with the rise of GPT-3, right, which was 2020, using it for misinformation doesn't seem to have taken off. So that made me more optimistic about the risk from misinformation. But personalised misinformation is one thing I worry about.

Toby Walsh: But what about when it's videos, deep fakes?

Andrew Ng: Yeah, it’ll be interesting to see what happens. We've had Photoshop for a long time, and you know, it's been painful to generate fake media. It's becoming easier to generate fake media, and maybe, looking at the future, is the bottleneck at creation or is it dissemination? Clearly there are, you know, nation state actors with all the resources to use computer graphics to generate incredibly realistic whatever they want. But actually, for example, after the Russian invasion of Ukraine, there were some fake videos of President Zelenskyy, but I think they were quite rapidly debunked. So, we'll see. I think personalised misinformation is what I worry more about, actually — that may be harder to defend against — but fortunately it doesn't seem to have taken off yet.

Audience Question: So I think… there are a couple of questions in relation to collaboration and competition. What do you think should be the right approach? Because they're starting to have bans on GPUs and things like that. And a couple of questions are to do with, should we have a more competitive spirit or a collaborative spirit in building this superintelligent AI?

Andrew Ng: Wow there was a lot in that question. So I like democracy. I really like democracy. And I would love for there to be lots of collaboration among, you know, nations with a shared appreciation for civil rights and human rights. And so I think democracies working together to collaborate on advancing basic knowledge in AI and applications in AI, seems like a very healthy thing to me. Yeah.

Toby Walsh: Can I… can I put you on the spot? Because you did… you were the chief scientist of Baidu, right? You mentioned democracy, which is all great, but that leaves out half the world, right?

Andrew Ng: I know that America running around promoting democracy has often been received poorly by many countries. And actually, most of the world — most people alive today — do not live in well-run democracies, right? So democracy is just a small sliver of the world… Was it Winston Churchill that said something like, ‘Democracy is the worst possible way to run a country, except for all of the alternatives’? So, I think it's worth standing up for.

Audience Question: So is there any specific insight or insider experience that you could share from running the AI teams at Baidu and Google, for example? Is there a very different trajectory in the way they approach AI development?

Andrew Ng: You know, I think the US and China cultures are very different. I feel like for a long time, and even today, basic research has been ahead in the US. I feel that the China tech ecosystem, you know, is very entrepreneurial, very rapid at innovation, and has built up applications very quickly. And then on the flip side, I think China kind of missed the generative AI revolution, but is rapidly catching up now. So in fact, it's actually quite strange — I think at this moment in time, what I'm seeing is that the skill set in deep tech and generative AI is very concentrated in one place on earth, which is Silicon Valley. I think that's because a lot of the early work was done by two teams, Google Brain and OpenAI, and sometimes people left those teams and started other companies, so it's very concentrated, just in Silicon Valley. Frankly, even when... so when I'm in Silicon Valley, like on a weekend, getting together with friends, we chat about GenAI and what's your company doing? What's the road map? Even when I'm in, say, Seattle, which is a great, great city — I love spending time there — I don't have as many people to talk to about generative AI, even in Seattle. But what I'm seeing across the global landscape is that all around the world, skills are developing for building on top of GenAI tools, building the application layer. That knowledge is very widespread. So I was in Vietnam earlier this week, I was in Korea before that, I was in Japan a few weeks ago, and I think that know-how of building applications on top is actually being disseminated very quickly, but the know-how of how to train the large language models is still more concentrated. But I'm sure it will disseminate quickly.

Toby Walsh: I believe there's a Coursera course for that.

Andrew Ng: Yes. More on the applications. I think we need to do more to disseminate know-how on how to train an LLM. We have stuff on how to pre-train an LLM — oh sorry, how to fine-tune, you know, but not yet that much on how to pre-train it. Working on it.

Audience Question: There is a question in relation to AGI. So, what's your thought on, you know, should we actually build a superintelligent system that's safe, but also helpful? And if you agree we should do this, how should we do it?

Andrew Ng: So I would love to get to AGI within our lifetimes. I think it is a fantastic goal. So, just to make sure we’re on the same page, right — the most widely accepted definition of AGI is AI that could do any intellectual task that a human can. And I think we’re still many years, I'm going to say decades, away from that. And one interesting thing I've realised is that there are some companies that say, ‘Oh, we'll get to AGI in three to five years’. But when I looked more carefully, I found that most of those companies have changed the definition of AGI, and if you lower the bar enough, then of course you could totally get there in a few years. In fact, I showed, you know, an economist friend one of the definitions of AGI that one of these companies had, and my economist friend said, well, if that's their definition, then we had AGI 30 years ago. But in terms of why I'm excited about AGI — I don't know how to get to AGI, but maybe we are starting to see the beginnings of how we might get there in a few decades, and this is why I am excited about it. I think that intelligence is the power to apply skills and knowledge to make better decisions. And as a society we invest years of our lives and trillions of dollars on education, all in service of helping us learn to make better decisions. So it's very expensive to feed, educate and train a wise human being. And that's why in society today, only the very wealthy can afford to hire huge amounts of intelligence, because it's expensive to hire humans — like a specialist doctor to carefully examine and advise you on a medical condition, or a tutor to take the time to truly understand your child and gently coach them where they need the most help. But whereas human intelligence is expensive, I think artificial intelligence can be made cheap, and if we get to AGI, or even with current forms of AI, I think it opens up the potential for everyone to hire intelligence cheaply. And this means that you'd be able to, you know, not worry about that huge bill for seeing a doctor — I guess maybe in Australia you don't worry about it anyway, but we do in the US — or for getting an education. And you’d be able to hire an army of smart, well-informed staff to help you think things through. So I'd love to build that future.

Audience Question: So the most voted question is from Ken Chan, who asks: what are some of the research areas in AI that young researchers should focus on? So I think the question is to do with not just the low-hanging fruit, but, you know, the longer-term questions that we should look at.

Andrew Ng: Yeah, so, AI is now a big enough field that there are so many things one could get excited about, because we're progressing simultaneously in many different directions. So, Toby mentioned reinforcement learning — I’m excited about that. I think we've seen the text processing revolution, and I think the image processing revolution is just coming, right? Vision transformers were published three years after text transformers. And I think that using large pre-trained image transformers — LVMs, large vision models — is just on the cusp of working. So there's that basic research, as well as applications that can be built on top of it. I'm actually pretty bullish on edge AI, and okay, this maybe sounds strange. A lot of AI is on the cloud, but for privacy reasons… so, I routinely run a 13 billion parameter LLM on my laptop. When I was on the plane ride over, I didn't have wi-fi, but I had some questions. I think I was running Vicuna 30 billion on my laptop, you know. So, there are a lot of things we could do on the edge, like on your laptop, and doing research on that is exciting. There's a Wild West area that I’m excited about as well, which is agents. So today you can prompt an LLM, and it’ll do what you tell it to. But agents is this concept of having the LLM figure out the steps by itself and execute on the steps. So if I were to say, help me find competitors to this company, you know, the agent concept would be to let the LLM reason: how do you do competitive research? And say, well, do a web search for the top competitors, and visit the competitors' homepages, and summarise those web pages. So the LLM will decide that, and then go and execute on those tasks. So that's an agent. And right now, with AutoGPT and BabyAGI, there are some amazing demos that, frankly — none of it actually works. At least I can't get them to work on any real applications, but it would be really exciting if we can figure that out. So, I think research on that would be exciting as well.
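(A toy sketch of the agent idea just described: the LLM is asked to plan the steps for a goal, and each step is then "executed" by further LLM calls. Everything here — the model name, the example goal and company — is a hypothetical placeholder, not a working agent with web search or browsing tools.)

```python
from openai import OpenAI

client = OpenAI()

def llm(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def run_agent(goal: str) -> None:
    # 1) Ask the model to break the goal into concrete steps.
    plan = llm(f"Goal: {goal}\nList the steps, one per line, to accomplish this goal.")
    steps = [line.strip("-* ").strip() for line in plan.splitlines() if line.strip()]

    # 2) "Execute" each step; the only tool here is another LLM call, so this is a
    #    sketch of the control flow, not a real system with search or browsing.
    notes = ""
    for step in steps:
        notes += llm(
            f"Step: {step}\nWhat we know so far:\n{notes}\n"
            "Carry out this step and report the result."
        ) + "\n"

    # 3) Summarise everything into a final answer.
    print(llm(f"Goal: {goal}\nNotes:\n{notes}\nWrite a short final answer."))

run_agent("Find the main competitors to a hypothetical company, Acme Robotics, and summarise each one.")
```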

Toby Walsh: Which I think is good, because one of those demos was how to destroy the planet.

Andrew Ng: I see, I see, yeah.

Toby Walsh: It's good that one didn’t work. 

Andrew Ng: Actually, like a couple of months ago, I was annoyed at the AI human extinction thing, and after GPT-4's new APIs were released, I tried to get GPT-4 to kill us all, and I failed. But even if you give GPT-4 a tool to wipe out hum… LLMs are actually safer than most people think. I'm sure there's a way to do it, I just didn't have time. Even if you give it a tool where things could be very damaging, it's actually not that trivial to get it to do something bad. There is a way to do it, I just didn't have time to figure it out. But LLMs are actually much safer than most people think.

Audience Question: So actually, UNSW is a strong proponent of the open source movement, thanks to the legacy of John Lions. So what are your thoughts about open source? You know, of course, there are some LLM models that are not open source. Do you think more companies and students should just go and use the open source ones, or use the pre-trained large-scale ones? What do you think?

Andrew Ng: Yeah, so I think we're evolving toward a world where there'll be a spectrum of models and tasks. So, you know, for the tasks requiring really complex reasoning, GPT-4 seems to work really well. We'll see — Google is still working on Gemini, hopefully that’ll be released in a few months, so we'll see what happens with Gemini as well. But it turns out that if you need to check your grammar – right? – you don't need a 175 billion or a trillion parameter model that knows everything about, you know, physics and history and philosophy and all these things. It's just a grammar checker. So I think there are a lot of smaller models that work fine for other tasks. And just as today, you know, we have very beefy computers, and we have our laptops and mobile devices and smartwatches, to do very different tasks, I think we are evolving toward a world where we have different sizes of LLMs, and we can pick the right one for each task. So I think there'll be plenty of workloads that an open source, you know, 3 billion or 13 billion parameter model will be just fine for. But then I think for a long time there'll also be tasks where you really want that 175 billion or bigger model, which maybe will be closed source for a while longer.

Audience Question: There’s a nice question about yourself. So if you're to tell your younger self, about 20 years ago, what would you tell that young Andrew Ng? 

Andrew Ng: What was I doing 20 years ago? I probably would have told myself, you know, don’t worry you’ll work out fine. I don't know. 

Audience Question: Another interesting question. Are you going to teach at UNSW any time soon?

Andrew Ng: I see. I see.

*Audience applause* 

Toby Walsh: Sonia, our head of school was just telling me, we’re actually interviewing today.

Andrew Ng: I see. 

Toby Walsh: For education focus positions. I think you might be well qualified. 

Andrew Ng: Thank you. Thank you. Yes. It would be my privilege to do more but my kids probably want me back in the US at some point. But, it would be a privilege to find ways to collaborate more with UNSW and Australia. Hopefully… I would love more excuses to visit Australia. So, yeah.

Toby Walsh: Well, you're welcome back anytime.

Audience Question: I do have one final question for you. I know you have young children as well, at home, so when they're old enough to start coding or prompting, which one would you teach first, prompting or coding?

Andrew Ng: So, by the way, thanks for all these wonderful questions, everyone that asked questions, whether we got to them or not. Thank you for that.

Audience Question: That’s my own question as well. 

Andrew Ng: Oh, oh, yeah, well thank you, I’m grateful. I think prompting first and shortly after coding. I think prompting is just a very low barrier to entry. Yeah, worth learning. But then after that, I find that people that know coding and prompting can do a lot more than people that only know prompting. 

Audience Question: Thank you very much. I think we are at time, so let us thank Andrew.

Andrew Ng: Thank you, thank you. 

UNSW Centre for Ideas: Thanks for listening. For more information visit centreforideas.com, and don't forget to subscribe wherever you get your podcasts.

Speakers

Andrew Ng

Andrew Ng is the Managing General Partner at AI Fund, Founder of DeepLearning.AI, and an Adjunct Professor at Stanford University.

As a pioneer both in machine learning and online education, Dr. Ng has changed countless lives through his work in AI and has taught over 8 million people AI. He was the founding lead of the Google Brain team, which led Google’s drive to adopt modern AI, as well as the VP and Chief Scientist at Baidu.

He is also Co-founder and Chairman of Coursera, a leading online education company. In 2023, he was named to the Time100 AI list of the most influential people in AI. Dr. Ng now focuses his time primarily on his entrepreneurial ventures, looking for the best ways to accelerate responsible AI practices in the larger global economy. Dr. Ng has authored over 200 papers in AI and related fields and holds a B.Sc. from CMU, an M.Sc. from MIT, and a PhD from UC Berkeley.

Follow Dr. Ng on Twitter (@AndrewYNg) and LinkedIn.