Creative Conversations

[Image: speakers from the Creative Conversations event standing in front of a pink wall]

In this first conversation, Creative Disruptions, Vince Frost of Frost*collective is joined by neuroscientist and founder of the Future Minds Lab, Professor Joel Pearson; intellectual property specialist and Director of Simpsons, Jules Munro; and Kartini Ludwig, director and founder of digital design and innovation studio Kopi Su. The panel unpacks current developments in AI technology, exploring the challenges these developments pose for creative practitioners.

Presented by the Innovation Hub, UNSW Arts, Design & Architecture

Transcript

Claire Annesley: Good evening, everybody, and welcome here to Paddington, to the School of Art and Design, which is our super cool and experimental art school here in the Faculty of Arts, Design and Architecture. It's home to fine arts, media arts and design.
We're meeting here on the traditional lands of the Gadigal people of the Eora nation. And I would like to take a moment to pay my respects, our respects, to Elders past and present, and extend that respect to all Aboriginal and Torres Strait Islander people who are with us here tonight.

Tonight's event is the first in a series of Creative Conversations that we're collaborating on with Vince Frost and the Frost Collective. The idea behind these events is to create a space where industry and practitioners, UNSW staff and students, and staff and students from across universities in Sydney, can come together, build community, and discuss topics specifically affecting the creative industries and creative communities.

So our first topic tonight in this series of creative conversations is AI, and the opportunities and challenges it poses for us in the creative industry, in the creative community.

And this felt like a really great place to start given the absolutely rapid change in this space. And it's something that affects everybody, from students, to freelance creative practitioners, to large creative studios. So it's now my great pleasure to introduce you to our host for this evening, Vince Frost. Vince is the founder, CEO and executive Creative Director of Frost Collective, a globally recognised and awarded creative who is passionately committed – like we are – to designing a better world. My sincere thanks to you, Vince, and to all our panellists, for joining us tonight, and for the expertise and insights that you are going to share with us this evening. Vince, over to you.

Vince Frost: Thank you. Thank you, Claire, so much for those beautiful words. And thank you to Carly and Emma and all your team for helping put this on today. It's always kind of funny to be standing in front of a university design school, because I struggled so hard to get a degree. I did a BTEC diploma, and I applied to many design schools around the UK, but I didn't get in to do a degree. Which is kind of ironic, to always be standing in front of a uni crowd. It's been a huge honour to do this series. Five talks, with the next four coming up over the course of next year. We don't know what they are yet, but stay tuned, we'll work it out as we go. And it's the same thing with AI. AI is what we're talking about today. And we brought this panel together, some really great minds and great thinkers with very diverse experiences, to help kind of unpack that. I know it's a highly topical conversation. It's changing by the minute.

As a business, we are a creative organisation, highly strategic, and of course we use technology in everything that we do. A lot of the students that come through our company actually come from UNSW, so we have a really good affiliation with you guys as well. I don't know how many people here are from UNSW, or how many people are designers, but I'll talk as if that is the case. And obviously, as Claire said, we're passionate about designing a better world, passionate about doing things that matter, passionate about doing good in the world. So we're highly protective of the world, and we want to ensure that we use design for doing good. I've been going for… I've been a designer – we're just looking at the dates here – my birthday's next week, and I'm getting incredibly close to 60, one year off 60, believe it or not. I think I mentioned last time I did a talk here that I was turning 58 the next week. But in that time, in the 40 years that I've been designing, technology has changed enormously. And obviously when I started out, there were no computers, there were no mobile phones, there was no wi-fi. There was literally pencil and paper.

And it was a really long-winded process. But, you know, at that time it was very much about the individual, the craft of design: having the idea, problem solving, thinking of ideas and executing them with your hands. Most of it was done by hand at that time. I remember ridiculous situations where people would type in the copy for a book on a typewriter, and then we would mark it up and fax it to a typesetter across town, and they would type it in again, typeset it, and send it back to us by courier. Just a ludicrous situation. So technology is exciting, it's enlightening, it helps us have better lives. It helps us be more efficient and more accurate. But there is also a fear factor regarding change. And maybe the younger guys in the audience aren't fearful of this, but I remember going to conferences in London at the time the first Mac came out, and there was a whole conference, hundreds of people, just discussing: is this the end? Is this the end of design? The computer's going to do our work for us. Thirty years later, I think we are closer to that. But it wasn't as immediate as people thought at the time.

Funnily enough, I remember when I first left design school in England, one of the first things I did was go to a production company in Soho that did editing for videos and TV shows and all that. I was fortunate to work on a Quantel Paintbox, which at the time cost £2 million. And I worked on the Dire Straits video Money for Nothing, you know, the one that's kind of coloured in. It was actually one of the first MTV videos at the time. And that computer, even though it was £2 million, was actually far less powerful than the little iPhone in my pocket. I think that's interesting, how things have changed so dramatically.

We're bombarded with messages every day. I mean, some people like that. I like that, positive ones, when it's manageable. I get a bit stressed when it's this bombardment of incredible amounts of information coming at you from Instagram, messaging, emails, text messages, whatever it might be. LinkedIn is just incredible, isn't it? There's not just one information platform that we're currently working on. And what I find challenging right now is knowing what is true and what is false. Knowing what is real, and what is someone trying to rip me off. Are you possibly experiencing it too? Getting text messages saying my insurance is due for renewal, or your bank card's going to be posted, or emails, things like that. We try hard to get our spam filters working at the studio, but I'm literally going to the guys, 'Is this real? Is this fake?' You can't even call the bank to ask them, because there's no one human there anyway.
Are you guys experiencing that as well? Yeah. So it's kind of scary, but it's also technology that's driving that. AI is heavily involved in that, and in everything that you look at. This has happened to everybody: you sit there having a conversation with someone over a coffee, you mention 'I need to get a new toilet', and a second later there are toilets popping up on your phone. Is that a coincidence, or are they listening in? So they're definitely listening in, and they're definitely using AI to really connect with you, and make that connection in order to make a sale, or a transaction, or to scam you, or whatever.

I felt like AI was coming a long, long time ago. But it's like COVID – COVID happened to us, and I didn't think COVID was going to come; some people did. But when it came, it scared the hell out of us and we didn't know what to do. A scary, scary time. No one really knew. The leadership around the world didn't really know what to do, they kind of fumbled through it. And maybe I'm naive, but I feel like a lot of us are having that same situation with AI, even though a lot of conversations are happening, and this is one of them that we're having today. By no means are we going to have all the answers, so don't expect that – if you're expecting that, you'd better go. This is a time for conversation: exploring what AI might be, what it might look like, and what creativity might look like going forward. And this is changing by the minute, so it's kind of hard to have a 100% clear vision on that.

Just this week, in fact, the AI Safety Summit, a major global event hosted by the UK just outside London, took place, attended by Elon Musk, Kamala Harris, and Rishi Sunak, the UK Prime Minister. The summit brought together over 100 officials from 28 countries and representatives from Google, OpenAI and Meta, and they discussed the risks of AI, especially at the frontier of development, and how they can be mitigated through internationally coordinated action. That's literally what came out yesterday. Essentially, the conversation focused on the idea that without controls, AI could endanger humanity. Now, that's pretty scary, you know? I think there are a lot of benefits it can bring, but endangering humanity is really scary stuff. And on how an intergovernmental panel for scaling AI is needed. And just hours ago, before this chat, a Reuters release announced that China has agreed to work with the US, the EU and other countries to collectively manage the risk from artificial intelligence. The intent is to chart a safe way forward for this rapidly evolving technology.

So it's my belief that we need creativity to get us out of this mess. Creativity, in fact, is creating AI. Creativity is designing and making everything that is not human. And ultimately we want to understand AI so we can use it for our benefit. One thing I want to unpack in a little while is around what is real. As a design company, as a creative organisation, we already have an issue in ensuring that what we're doing is our IP, that it is original. Originality now comes with a question around it; we'll talk about that as well. We can't be complacent in this situation, and we can't pretend it's going to blow over and go away.

I'd now love to introduce the panellists for this first creative conversation. Kartini Ludwig is a director and founder of Kopi Su, a digital design and innovation studio based in Sydney. Her mission is to empower a diverse range of artists by developing projects that give them access to, and control over, new technology. Kartini is the driving force behind Sonic Mutations, an AI music performance series originally commissioned by the Sydney Opera House for the Outlines Festival in July 2023. As a seasoned producer and digital strategist, Kartini has an impressive track record of delivering projects at the intersection of arts, culture and technology. Her other contributions include working on projects for organisations such as Google Creative Lab, the Australia Council for the Arts and the Biennale of Sydney.

Jules Munro. Actually, Jules, come up. He is a director of Simpsons, the specialist entertainment and arts IP law firm in Sydney. He has over 21 years' experience advising clients in the film and television, music, and information technology industries, and design and branding companies as well. Jules leads Simpsons' Corporate and Commercial, Music, Philanthropy and Digital Media practices, providing advice across a range of commercial and contentious matters – he's good at contentious matters. These include the acquisition, management and sale of businesses and intellectual property assets, managing and exploiting brands, reputation and infringement matters, and contractual disputes. I've also had the pleasure of having him as my solicitor for our company Frost Collective for the past 15 years, and getting to know him and his team over the years has been really, really cool. His advice has had a serious, positive impact on my business, and thank you for that, Jules.

Jules Munro: Thank you, Vince, that was very nice.

Vince Frost: Joel. Professor Joel Pearson is a psychologist, neuroscientist, internationally recognised leader in research on human consciousness, and a public intellectual, working at the forefront of science innovation. Joel was a National Health and Medical Research Council Fellow and is a Professor of Cognitive Neuroscience at the University of New South Wales. He is the Founding Director of the Future Minds Lab here at UNSW, a multidisciplinary, agile cognitive neuroscience – these are big words, these – research group. It's a world-first, hands-on, human-centred research lab working on the psychology and neuroscience of innovation and entrepreneurship, the future of work, human and AI interactions, and the mental health of company founders. Joel also heads up Mind X, a boutique company that spun out of the lab in 2016 to apply psychology and cognitive neuroscience to the world of advertising and marketing. I was honoured to have him on my podcast, Designing Life, a little while ago, where we covered everything from creativity to wellbeing and everything in between, over a five-episode series. He's got a really cool book coming out in February called The Intuition Toolkit.
And we're going to start with the questions. There are a lot of questions, and we haven't rehearsed them, so I'm going to start with Jules.

Jules Munro: Come on, come on, Vince, hit me. 

Vince Frost: Don't – well, I'd like to – don't deflect the question to someone else, right? So this is a really big question for Jules, and for the audience. What is AI, and where is it learning from?

Jules Munro: Okay, let me frame this up. I am a legal guy, not a tech guy, so in answering that question I'll probably offend a number of the very intelligent technologists in this room. From my perspective, a person who deals with copyright, rights, economic value, and so on: I understand it to be an acronym for artificial intelligence. No one knows what intelligence is – we'll get into that with Joel – or at least its parameters aren't defined. But what started out as getting technology to recognise patterns, machine learning, has now leapt into a world where networks of computers are forming, essentially, neural networks, doing ever more complicated calculations and analysing ever greater amounts of data. And this is leading to a situation where, if you train one of these neural networks on enough data, it can take prompts and predict what would be said next in a language context, or it can take a picture, break it down, put it back together, and compare how close the two are. So in a sense, it's taking a small part of human consciousness and intelligence and ramping it up massively in speed, and in the amount of information it can handle. It doesn't get tired, it doesn't have morals, it doesn't have ethics, and it is entirely, and potentially ruinously, self-sustaining. That's as good an explanation as I can make from a phenomenological point of view. But if you want to talk about copyright, I'm your man.

Vince Frost: Oh no, that's very impressive. You said a lot more than I thought you were going to. Thank you for that. Hey, Joel, how do people perceive AI, and what opinions do people generally have about it, at this point in time? 

Joel Pearson: So, yeah, Jules, you used the word consciousness, but there's no actual evidence at the moment of AI consciousness. So we don't know. We don't have a way to measure consciousness in humans, but there's no evidence at all that any of the AIs are conscious, right? And we don't know if that will ever happen, because we don't have good definitions of what consciousness is in a human. But circling back – there are so many things happening. We tend to put human characteristics on things. Going back decades, there are interesting psychology experiments where you'd have, like, a little shape, a triangle and a square, and the triangle is big. And the triangle, like, bumps the square, and the square gets bumped along. And watching just a few seconds of that, you think: oh, the triangle is a bully, it's scaring the little square. You're anthropomorphising – you're putting human characteristics on a few frames of a simple little animation, right? Now think of ChatGPT, where you're seeing all this information: it's writing for you, it's rewriting for you, it's creating pictures, it's doing all the things we know it can do. Of course we're going to think it's intelligent, right?

And Jules also mentioned, you know, what is intelligence? How do we define it? I think the way we have defined it for humans is fairly limited, and it doesn't really apply well to AI. My phone, my watch, can already do things that I can't do. You could already say, narrowly, it's more intelligent than me. So of course AI is going to be more intelligent. And then you have people saying, 'Oh, it's going to be a million times more intelligent than humans', and it doesn't really make sense to use those metrics. So you could say it's probably not that intelligent; it kind of tricks us a little bit. We tend to fall for these things. We see these beautiful sentences it's writing, and it's doing all these things, but it doesn't have thoughts, it doesn't have mind wanderings, it's not conscious. So it's not like a human. It probably doesn't feel like anything to be ChatGPT, in other words. So there's probably no consciousness there, but we can't help but feel there probably is, right? We fall for little shapes; we're going to fall for ChatGPT. Another question?

Vince Frost: Yeah, I was just thinking, because I was talking to the guys before… I'm like a recovering procrastinator, a high-functioning procrastinator, and AI doesn't procrastinate, as far as I can see. It's instant.

Joel Pearson: That'd be nice. That's one of the things I wonder: whether we'd ever see evidence of it daydreaming and procrastinating, you know? I mean, some of the image generation models can hallucinate, and there's this idea of hallucination in AI output, but I don't think it's the same way that we meander, and daydream, and come up with ideas. It doesn't seem to operate in the same way. In other words, it's not a thing that's ongoing. It responds to instructions in a very narrow way.

Vince Frost: And Kartini, how does AI learn, and how does it understand patterns?

Kartini Ludwig: Yeah. So, I guess just to take a step back – I think your description of AI was great, Jules. I even take a step back from that sometimes, to break it down and say that AI is a field of study in science, and it encompasses quite a lot of different subsets, including robotics, something called natural language processing, and the image recognition on our phones. We already experience AI in our everyday lives, across Spotify and Netflix, and spam filters and things like that. So it's actually been around for a while. What is rapidly progressing right now, what's really popping off, is in and around machine learning, and that's what ChatGPT and all the image generation models are at the moment. You both started to allude to this a little bit, but on how it learns from patterns: it's very good at processing data, but ultimately it's learning from algorithms that we as humans construct and provide instructions and constraints to, and then we feed it data to, you know, recognise different patterns. So in the case of something like ChatGPT, it's essentially just predicting the next pattern, the next word in a sequence, from all the data it's been fed, which is basically a scraping of the entire internet. It's just getting really good at predicting the next word, or even the next letter in a word, then getting better and better at doing that, and learning all the mannerisms of everything it's been fed.
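
[To make Kartini's point concrete, here is a minimal sketch of next-word prediction, the core idea she describes. Real systems like ChatGPT use large neural networks trained on internet-scale text; this toy version simply counts which word follows which in a tiny corpus.]

```python
# A toy next-word predictor: count which word follows which (a bigram table),
# then predict the most frequently seen follower. Large language models do
# something far more sophisticated, but the prediction framing is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str):
    """Return the most frequent next word seen in training, or None."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat' (seen twice, vs 'mat' and 'fish' once each)
```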

Vince Frost: Jules, can you talk about closed versus open data sources? 

Jules Munro: Yes. Well, I always look at that from a kind of copyright perspective, again. But, in layperson's terms: these instances of scientific exploration need datasets to work on. You need a whole lot of information for the program, the algorithm, to pass through. And so there's the question: where does that data come from? And we can get into how they even got to practise all this stuff in the first place. But a closed dataset is the idea – the concept may not even be a reality, really, Vince – that everything in that dataset is protected. It doesn't take in anything from outside; the parameters of that dataset are known to the people who are using it, and therefore the pattern learning, the recognition, is only going to happen within that pool of data. Which means I could, for example, put all the letters I've ever written on Vincent's behalf in a closed dataset, and that would be all that my ChatGPT client might look at. Right? It wouldn't scrape the internet for things. So I think that's essentially the definition of a closed dataset. And that's very useful, because you can make it specific to what you need it to do. You can get those processes to work on things you are interested in, or need to know. An open dataset, I guess, means just anything on the internet. And that raises a lot of issues: where did it come from, who owns it? What are the proprietary parts of it? Is it true? Is it false? You know, the old rubbish-in-rubbish-out canard for programming comes into play there. So that's the thing to remember. I won't say Wild West, because that's a misnomer for the situation, but open is just everything; closed is an intentionally limited set.
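
[A minimal sketch of the closed-dataset idea Jules describes, with hypothetical example documents: a "closed" system can only ever answer from the documents it was explicitly given, whereas an "open" one ranges over anything scraped from the internet.]

```python
# A "closed" assistant draws only on documents we explicitly supply;
# nothing outside this list can ever appear in its results.
closed_dataset = [
    "Letter to client re trademark renewal, 2021.",
    "Letter to client re licensing dispute, 2022.",
]

def answer_from_closed_set(query: str, documents: list[str]) -> list[str]:
    """Return only the supplied documents that match the query."""
    return [doc for doc in documents if query.lower() in doc.lower()]

print(answer_from_closed_set("trademark", closed_dataset))
# -> ['Letter to client re trademark renewal, 2021.']
```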
 
Vince Frost: You're really good at this. Seriously. Kartini, what are your thoughts on closed and open datasets? Can you elaborate a bit on this, and perhaps talk about your experience with the Sonic Mutations project you mentioned before?

Kartini Ludwig: Yeah, absolutely. I guess in theory, open datasets, or, you know, open-source projects, are good for communities, for the sake of development and experimentation and creativity, in some ways. My gripe with the concept of closed datasets is that there is already a history of data brokers, and, you know, the big tech companies that have these amazing, very big datasets that are closed, and potentially ones that we don't have access to. So there is a bit of an issue there with how that was obtained in the first place – I know we'll get into this shortly – a lack of transparency about how these datasets were formed and how they're closed. How this relates to what I've been working on, and coming up against: I have this project called Sonic Mutations. It's an AI generative performance project, originally commissioned by the Opera House, as was noted before. And really, the question at the start of that project was: can you put on a live performance with artists working with AI? We found a way to do that with an open-source project called Riffusion, which is sort of the – I don't know what you would call it – the offspring of Stable Diffusion, which is an image-based model. So we found a roundabout way to use what is essentially an image-based model for audio, and a way for artists to be able to sing in a little live sample of music, or drag in a piece of music, and then remix that with a generative output. So you could transform the live input into something like the sounds of crows, or into a little generative piece of your vocal melody, and then work that into your performance. In that process, we really came up against the fact that we were working with an open-source project – and Stable Diffusion is one of the projects that has had some scrutiny around scraping the internet to build its model. But at the same time, they're one of the only models that have taken some accountability and allowed artists to opt out of the next version of their model, which is a really interesting thing to do. Although, in my opinion, people should be able to opt in; that should be the future of AI and the way we build models. How we tried to take a bit of ownership back was that we found a way to fine-tune this model with the consent of the artists we were working with. So, by actually working with these artists, empowering them to understand the process, and getting their consent to use their data to contribute, it works for them. That's where we landed with that.
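
[A schematic sketch of the consent-first approach Kartini describes: only material from artists who explicitly opted in ever reaches the fine-tuning set. The record fields here are hypothetical, not drawn from the actual Sonic Mutations pipeline.]

```python
# Consent-gated training data: the fine-tuning set is built strictly
# from recordings whose artists gave explicit, documented consent.
from dataclasses import dataclass

@dataclass
class Recording:
    artist: str
    path: str
    opted_in: bool  # explicit consent from the artist

catalogue = [
    Recording("artist_a", "vocals_a.wav", opted_in=True),
    Recording("artist_b", "vocals_b.wav", opted_in=False),
]

training_set = [r.path for r in catalogue if r.opted_in]
print(training_set)  # -> ['vocals_a.wav']; artist_b's work is never used
```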

Vince Frost: Wow, I'd love to experience that.

Kartini Ludwig: Mmm!

Vince Frost: Joel, can you talk to me about your theory on AI and unconscious learning, or intuition?

Joel Pearson: Yeah. So we can think about AI – we're talking about it scraping the internet, absorbing everything, learning. And we've already said it's probably not conscious, right? So we can think of it as unconscious learning, and we do that too. If you're studying art, then every painting, every poem, everything you've read – whether you remember it consciously or not – is in your brain somewhere. It changes your brain structure, and it influences what you do when you draw, paint, make music or write. Everything you've absorbed will shape that to some small degree. So you can think of those two things as very similar. And that's how I think of intuition in humans: it's the productive use of unconscious information. We're absorbing all these things unconsciously from the environment, and we can learn to utilise that to make better decisions. And if you think about it, that's kind of what AI is doing as well. It's unconscious, it's learning – this 'black box' that's set up and has all this data thrown in, learning to make better decisions, or to take actions if it's a robot. So you can kind of think about intuition as like a version of artificial intelligence, which is really interesting.
 
Vince Frost: Can you give us your thoughts on humans as a form of AI, and our constant scraping of everything we're exposed to? 

Joel Pearson: So going back to the IP stuff. If we're using intuition, and we're unconsciously learning from everything, then it's kind of the same thing as AI scraping IP, right? We're using millions of different pieces of IP, and we're not attributing that. We're not giving permission for that; sometimes we're aware of it, sometimes we have no idea we're even doing it. And AI is doing that as well. So it's very similar, in a sense, to what humans are doing. It just feels different when AI does it, for some reason. We feel like we should be attributed when the AI does it, but not when a human does it. If they're doing it intentionally, ripping it off, copying – sure. But when it's happening unconsciously, which it is all the time, in everything we do, then we probably don't feel as negative about the humans doing it. So yeah, AI can be thought of in a similar way to humans and their unconscious scraping.
 
Vince Frost: I guess, in terms of IP and protecting your IP, Jules, how does it currently work – prior to all this, how much do you have to change something to get around the IP?

Jules Munro: Vince, everyone knows it's just 10%.

Vince Frost: Is it 10%?

Jules Munro: No. That is a lie! Anyone who tells you that is not your friend. Well, let's just quickly set the terms of reference up. Copyright and intellectual property. Intellectual property, guys, in case you don't know – and you're design students, so you should, but I'm going to go through it again – is an economic right in the product of the intellect. And it's got to be expressed in some way. So we have copyright: if I create a song, or write a book, or any copy, or write some code, myself as a human, using my creativity and effort, then no one can copy that or change it without my consent. That's the basic idea. There are other forms of intellectual property. There are trademarks, where the government gives you a monopoly on a particular shape, or sound, or colour, or word. They give you a monopoly for a decade or so, and that's your brand: you alone can use it, you can licence it to people, and if anyone gets too close, you can prosecute them, and so on. And then there are other kinds of intellectual property, such as plant breeders' rights – because that's a real science, and the way you record it is protected – and patents, for various technological processes and so on. So the economy of, you know, Western civilisation has generated this economic right. That's what we're talking about when we talk about IP.

Copyright in particular has an interesting genesis, and it's nice to be talking about it in a design school, because it arose the first time authors of literature were able to charge for what they did, to have a monopoly over copies of their books. This came about in the 1600s, because playwrights, and poets, and writers managed to petition the court to get a rule that their publishers had a monopoly on those books. So it came from creativity: we've got to somehow reward the creator's endeavour with some sort of financial exclusivity over what they produce. Go forward, you know, a few hundred years, and we're in a situation where copyright is the recognition of a whole lot of economic rights in outputs: music, visual arts, photographs, films, code, words and so on. The whole entertainment business is based on the economic rights of copyright. The fact that you even get to know of anyone who makes a living from music relies on copyright. To some extent, what designers like Vince live on is the fact that they can create something that, in their hands, in their studio, is theirs alone, and then sell it to a client. So it's the basis for the business you're all learning in this place, right? And copyright is an idea. And technology comes along and challenges it constantly.

So, I'm 51, so I was around when Napster just ripped the music industry apart – end of the world, end of copyright. All these academics running around going, 'That's it, you're stuffed, it's over! It's a creative commons! It's all evaporating! Yay!' And the music industry went nuts and sued all the smart people who were writing the cool file-sharing protocols. Away we go. And here we are: more vinyl is sold now than ever was, streaming is a massive business. Musicians are getting a little less from certain things, and more from others. So copyright is playing through all of that – ways of charging for it, ways of collectively administering it. There's a very sophisticated and mature industry in protecting copyright, and striking that balance between the public's access to protected things and the right of creatives, or the people who own the rights in the creative stuff, to get paid for it. So the issue that arises with AI is that, in a sort of intuitive sense, it does a kind of simulacrum of a creative thing, right? If you put the prompt into the image AI, it'll just, you know, fart out a picture that has those things in it. And it does this by knowing what sort of images should go with those words, and what composition, and away we go. But what's missing in that process is the human effort and creativity, which is a legal requirement for copyright protection.

Now, that's a really technical idea, because there's a case from 200 years ago where a football draw was deemed protected by copyright. It's not very interesting, it's not a poem, but someone had worked out how to organise the information. We were involved in a case a couple of years ago where there was a challenge to that in the set-top-box world. A company called IceTV worked out how to predict what the TV schedules would be, and Channel Nine sued them and said, 'You're nicking our TV guide', and our client could prove that they'd worked it out by predicting what it was going to be. So they had put the effort in, and therefore their TV guide was their copyright, and Channel Nine's TV guide was its copyright, even though they said very similar things. Anyway, the point is: human creativity, or some sort of intellectual effort, is the key. And when you feed stimulus data or training data into an AI, it doesn't do anything like that. It just disassembles the image and reassembles something like it, and there's nothing in the middle. So there have been a number of cases in the States over the last few months, a year or so, saying there's no copyright in AI-produced material. So what does that mean if you rely upon vending your designs, or your music, or your artwork to people? Or if people pay you to give them brands and corporate IDs that they can then go out and defend and protect? You're selling a whole lot of nothing. It's not protectable. So that's one of the big challenges that AI brings to the design world: if you generated it with, let's say, little to no human supervision, what you output can't be protected under intellectual property principles. That's a bit of a head-scratcher for a lot of folk, because they're going: this is great, I'm going to graduate from design school, get a whole lot of processing power and just pump out brands all day, I'll charge two grand a pop, it'll be great. And then the company says: well, you've got to promise me this is original, because I don't want to take this to market and find out that I'm using someone else's brand, or that I'm going to be in trouble. You can't really say that – you don't know where the ideas came from – and you certainly can't sell them anything, maybe except your time to come up with it. But it's not something they can trust is original.

So, before I keep going, just one capper to finish: creativity, intuition, helps us here in the AI story, because at the legal level – quite apart from the philosophical, or even emotional, standpoint – if humans don't put their effort into the product, it's not worth anything at an economic level, in the way that all those things have long been understood, and are likely to be understood for some time.
 
Vince Frost: Well, and Kartini, how can you use that creative data for a positive – to create positive further creative outcomes in music, film, writing, design, and so on?

Kartini Ludwig: From my experience, and that of a lot of the artists I work with, one of the things I've been exploring is how we can empower artists to think of their music or their catalogues as data that they can use, and be compensated for, in the same way that there are already data brokers out there using that data. So why can't we as individuals profit or benefit from that ourselves? One of the best, and most recent, examples has been Grimes, with Grimes AI – the precursor to that was Holly Herndon – who have created voice models, voice music models, where they've captured the essence of their voice. I think it's called Elf Tech. I played around with it a few times, and it's actually very easy: you just sort of sign up, and you can sing something in, and then you can sing a song like Grimes. And what Grimes has done, from training on her data and her voice, is say: basically, anyone have at it, you're allowed to use this and write your own songs, but you have to compensate her back with 50% of royalties – 50% of master royalties, I think – which is a very savvy, savvy move on her behalf. And then she's also going on to release two albums: one which will be her own, and one where she will cherry-pick works from any of the artists who used her voice model. Now, I don't think that's necessarily a sustainable model – it's very early, experimental stuff – and maybe there'll be a growing trend over the next year or two, where every artist creates a voice model of themselves and tries to do the same thing. I don't think it's sustainable long term, but it is the start of something, and it's starting to change the perspective on how we can enable artists to think about different ways to create revenue using their music, or their catalogues, basically.

Vince Frost: How is that managed? I mean, surely it feels like a free-for-all, the Wild West you mentioned before. Who is going to take responsibility for tracking it down, or for people being honest about that?

Kartini Ludwig: Yeah, I don't actually know about that one in particular – you might know – but I know she's an independent artist, and I think she uses TuneCore, which is a distribution platform, to help facilitate the project. You might know a bit more?

Jules Munro: Yes. So it's another function of the mature copyright industry in music. Visual artists don't really have this, but there's a very, very mature collection of businesses and collective administration organisations. You might have heard of APRA or AMCOS or BMI or ASCAP. They're all acronyms for societies that gather together rights and collectively licence them. They're highly technological, they've got fabulous data, they track all of this. I mean, just imagine what Spotify knows about where and what you play – they can report that back. They have to licence that content from the rights owners, and part of that bargain is that they give the rights owners data. So music is highly observed and recorded in massive datasets, all day long. Plus, there are two sets of copyright in music: there's a copyright in the sound recording, and there's a copyright in the music that's on the sound recording. If you imagine a sound recording of someone covering someone else's song, those are the two different copyrights in place. So when Grimes says, 'I'll take 50% of the master rights', that's because she knows there's going to be no music copyright in the part of the song where people use her voice to create a new melody – because the melody was created by an AI, it has no copyright, so it's not going to get any money from the collecting systems on the music publishing side, as it's called. But the sound recording of that is protected by copyright, and so that will generate some income. She's smart about that side of it. So yeah, there's a very sophisticated tracking, charging and distribution-of-funds system in the music industry in particular. A little bit in the software industry: some of you will have got those letters from the people who act for Adobe, saying you've got a cracked version of software in your studio – never, never happened to Vince, by the way – but we get a lot of calls, because there are businesses that go around and can check data from proprietary software systems. This is quite prevalent in, you know, image processing and film studios. And they'll say: you've got three cracked copies of this stuff in your studio, and if you don't pay us a large amount of money we're going to sue you – and then negotiate a deal. But they can track the usage; the software has coding inside it that's sufficient to tell mummy what happened. But over in the visual arts, there's no such tracking, and also not in the book publishing industry. I presume you can track a digital file of text, or perhaps audio, but it's nowhere near as sophisticated as the music industry.
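
[A worked sketch of the two income streams Jules describes, with illustrative figures only: if the melody is AI-generated there is no composition ("publishing") copyright, so only the sound-recording ("master") side earns protected income.]

```python
# Two copyright streams in a track: the master (sound recording) and the
# publishing (underlying composition). An AI-generated melody attracts no
# composition copyright, so that stream pays nothing.
def protected_income(master: float, publishing: float,
                     melody_is_ai_generated: bool) -> float:
    if melody_is_ai_generated:
        publishing = 0.0  # no human author, no composition copyright
    return master + publishing

# A Grimes-style deal: 50% of the master on a track whose melody came from an AI.
track_income = protected_income(master=10_000.0, publishing=10_000.0,
                                melody_is_ai_generated=True)
print(0.5 * track_income)  # -> 5000.0, all of it from the master side
```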

Vince Frost: And Joel, what are your thoughts on ethical issues around AI?

Joel Pearson: Yeah, so no surprise, but my concerns are around the psychology of AI. I think the first impact we've had from it has already been through social media – you know, Facebook and all the different companies – and it hasn't been that positive. There have been some mental health challenges and some negativity caused there. So as more and more layers of AI are applied, first in social media sorts of things, and videos, and synthetic media, that's where I see the ethical issues. I'm not worried about the killer AI, the Terminator scenario, annihilation, extinction. So extinction risk versus s-risk – they call it 'suffering risk' – that's what I think is the more immediate concern. And on top of that, I don't think it's really AI doing it that I worry about; it's AI in the hands of humans. You've probably heard the saying that AI won't take your job, but a human using AI will take your job. I think the same thing applies to the ethics and the psychology and mental health around it. We're going to see it starting in social media and then proliferating through all kinds of synthetic media. We're going to see versions of ourselves doing things online. When it comes to the next US election, we're going to see all kinds of strange, synthetic things. We're going to be manipulated in the ways Vince was talking about – text messages and phone calls, hearing your sister's, brother's, partner's, kid's voice calling you, frantically crying, right? It's all going to be synthetic and made up. And that's just going to be the beginning. The content of what these synthetic voices are saying will seem real, because it's going to be pulled from all your data that's out there. And then there'll be video versions – there'll be a FaceTime video that will look like someone in your family calling for help. And just think of that happening in a super bespoke way, for every single person on the planet that has a smartphone, simultaneously, right? It's fully scalable. And that's just the beginning, right? You can think of this going /

Vince Frost: That’s not the end? 

Joel Pearson: No, it's the beginning, right? Then we get into relationships, pornography. Just think of all the different ways – you already have, what is it, Replika, right? People falling in love with AI agents and building long-term relationships with them. And then when the company changes the software, people feel like they've lost a loved one, and they're depressed and anxious. So you have a whole thing there, right? There are just so many layers. And the ethics around that is where I think a lot. And I get upset, or annoyed, that all we see in the media is interviews with the tech CEOs about the future of AI and what it's going to do. It's like: how would they know, right? They're good at programming and developing, or running the company. They don't know how humans operate, what our vulnerabilities are, how we get addicted, how it's going to manipulate us. So I've been pushing a call to arms for people in psychology, in behavioural science, in neuroscience and all the other fields, to be part of that conversation, and to almost build a new field of applied science, applied psychology, because that is where I think we're most vulnerable. I don't think the killer robots thing is something we have to worry about now at all. It's the stuff that's already happening, and it's going to be young teenagers who bear the brunt of that, unfortunately.

Vince Frost: But don't you think – I mean, if that happened to you as an individual, wouldn't that be phenomenally scary? Wouldn't it just stop you…? Your own personal identity's been compromised, big time, if other people think you are saying things or doing things.

Joel Pearson: Yeah /

Vince Frost: That’s not you.

Joel Pearson: / I think that’s going to happen in the next year or so. We're going to /

Vince Frost: Wow. 

Joel Pearson: You know, I will see video versions of me saying things and doing things that I've never said or done. And the thing is, when it comes to long-term memory: you can see someone say something in a video, and I can say that was a fake, but you can't delete it from your brain. It's there, and it's going to subtly influence how you think of that person – you're less likely to vote for them, or love them, or trust them. So just showing synthetic media to people changes their behaviour, their likes, in ways that we don't fully understand yet, and certainly not at scale. So yeah, I think identity is going to take a big hit, when you think of the billions of teenagers that are going to see versions of themselves online that aren't them, doing and saying things they didn't do or say. And we don't know what the effect of that is going to be. But – sorry, I'm being so negative and doomsday, and I promised I wouldn't go too dark. That's where I'm thinking in terms of ethics: how do you bring in laws? How do you even think about that? I don't think the systems can keep up with this. So I think we have to push for experts in psychology to work closely inside these companies and build a sort of psychological security – that would be one way to think about it. But also for all of us to figure out ways of having our own psychological security. We have the idea of cyber security; with this idea of psychological security, how can we equip ourselves to not be so vulnerable to these types of manipulation? And what would a training program for that look like? That's what I'm thinking.

Vince Frost: So Kartini, how would you do this? How can you work with this data source in an ethical way – not just in music, but more broadly as well?
 
Kartini Ludwig: Yeah, I mean…

Vince Frost: Because, how do you know what is ethical? Like, anything that belongs to somebody else is… yeah, an issue.

Kartini Ludwig: It's… I guess it's hard to… it's a spicy question. The idea of responsible AI has been thrown around a lot recently, and most of the big tech companies working in this space – namely, probably, OpenAI and Microsoft – have their own statements about responsible AI and creating a culture of safety, which I think is really interesting, but they are really arbitrary. And what I think is interesting is that there's a big difference between responsible AI and accountability, and sometimes those get a bit confused, in a way that might inform ethics retrospectively. There's a lot of talk about responsible AI, but on accountability, and where we're at at the moment: all the models that have been built to date have been built without, I guess, the consent of the people whose data is being used in certain ways, and I don't think that's particularly ethical. But we are where we are now, and there's a lot of rapid development because of that, which is good and exciting, and also a bit scary.

One thing, though – I guess, like, a beacon of hope – and this is exactly why I have this trusty little notebook here: I read in Time magazine a story about a startup in India called Karya – I just didn't want to butcher that – which is the first ethical data company. It's all about enabling people in India to bring back languages that are marginalised, which also helps contribute to a bit more diversity in data, because, as we'll get into soon, there's a lot of bias in the data we have – the history of our data is very biased. And in doing so, you know, they sell this data on to companies; I think Microsoft is one of their clients. So there's a bit of hope there. And they trickle that back down to the rural poor, as well as compensating people for their contributions. It's not necessarily a full-time job for the people working in the space, but it is an income boost. I think they're paying them almost five times the minimum wage in India, just to speak their native language into an app for a few hours every day, for a certain amount of time, which is a great incentive. I'm not sure if this totally answers the question, but there is hope in that sense. On this idea of what is ethical: there are projects out there doing the right thing, which tells me there are also people who haven't, who aren't taking accountability, I guess. And it all comes back down to this idea of transparency. Maybe responsible AI is all around consent, control, compensation and, I guess, accountability. Maybe that's what ethical data, or ethical AI, can be.

Vince Frost: Do you want to expand on that, Joel? 

Joel Pearson: There's a lot of talk about ethics, and there are always summits happening in different countries, in the leaders' rooms. But I just don't see enough of the psychology talk. That seems to be the big missing piece, from my point of view. I don't see the companies talking about it, or the leaders talking about it. Hopefully it'll start happening soon – I really hope it will. Because the responsibility should be in the hands of these companies. They're building tools that can be used in ways that have never been possible before, so the responsibility falls on their shoulders, right? If they're going to be profiting from it, then they need to also build out the responsibility, ethically, around how it can and can't be used, I guess.

Vince Frost: And what are your thoughts on how AI can be used to improve your mental health?

Joel Pearson: Yeah, so, I mean, it's not all doom and gloom. With the right combination of the right scientists working with engineers, we can design AI to be human-centred – human-centred design. If you invert the ways it can be used to hurt us, make us addicted to things and manipulate us, it can actually help us, make us better and stronger, less vulnerable. It can reduce our anxiety. So it can be used for good in that way. But it needs to be done consciously. It needs to be based on decisions, and some of the profit models have to be flipped, right? You know, around advertising and controlling attention. The alignment so far has been a bit off: the platforms, and the algorithms in the platforms, which have AIs, have been given the task of holding your eyeballs on an app on your phone – whether you're happy, sad, depressed, suicidal, it doesn't matter. What matters is that the person stays on the platform, whatever it is, for the maximum amount of time. So that's a version of misalignment, like you've probably heard people talk about. So: invert that, change the business models, and then work back from that to the moral, ethical, responsible version. Yeah.

Vince Frost: I was just thinking, Jules, should I take our website down? I mean, we, as a design company – and a lot of organisations – take great pride in putting our latest work up there. Is that likely then to be scraped and repurposed by, you know, AI?

Jules Munro: It's a really good question, because it goes to the heart of the dilemma that creatives are facing here. Because, of course, you put your beautiful work up on the website, secure in the knowledge that you're protected by copyright, and/or just by the social pressure against plagiarism.

Joel Pearson: But it would have been scraped already, right? It's too late.

Jules Munro: Getting to that. So you've got it out there /

Vince Frost: Get to the point, Jules! It's banter. 

Jules Munro: It's gone. And anyway, the young designers who look at your website already absorb that information, in that intuitive-learning sense, anyway. And as I said, the fact that those things have now formed part of a dataset doesn't mean that anyone is going to be able to challenge you and your amazing team's preeminence at dealing with the design problems that you solve every day, and are highly regarded for. You know, you've seen the results of AI design – and it presumably will get better – but there isn't /

Vince Frost: Well I think it’s pretty good. 

Jules Munro: It isn't generating the… it's looking at your stuff and then coming up with something that approximates it, maybe. But it is not in any sense a commercial competitor of yours, nor will it be in the near future, right? Because you and your humans are hired to solve thorny and amazing problems for other bunches of humans. So I don't think you need to worry about it. I think: go loud and be proud. It doesn't matter. A, it's already gone, and you're still in business. And B, what else are you meant to do? Get it out there, set the agenda. Be the amazing design firm.

Joel Pearson: Jules, can I ask a quick question? Before, you said – sorry, Vince, not taking over. /

Vince Frost: He's got his meter running, by the way.

Joel Pearson: / You said before that to get legal IP protection for something, there has to be human effort put into it, right? So what's the minimal amount of human effort? If I pull something off an AI, what's the minimum amount of effort I put in before, bang, I get IP?

Jules Munro: Great question. So this is super important for design people, right? Of course you should use AI to germinate ideas. Of course you should throw stuff in and see what comes out. But unless and until you use intelligence – human, innate, intuitive, lateral intelligence – to transform what you saw there into something, it's not going to be original in a legal sense, and it won't be original in a cultural sense either. So what is the bare minimum? There's no bright line. So with the design studios I'm talking to, I encourage them to document their workflow. Show what bits you did to fire up the team talking. What were the parameters you put into the AI? What came out? Then, what did your team say about it? What was the next stage of the development? Then you can show the necessary amount of input, if you ever had to. But that's why people go to court: to find out how original something even is. So recently there was an extraordinary and, I'd frankly say, very bad case – it was a music case – where a US EDM duo did a song that got into an ad, and so it attracted some interest. The vocalist started it with 'love is in the air'. She just said 'love is in the air'. And the Australian judge said: oh, there's something about the way it was said in the Vanda and Young song that's protected by copyright. There isn't. And so that's…

Vince Frost: Can you sing it for us, by the way? 

Jules Munro: Certainly not. But that's… the watermark for originality goes up and down depending on the most recent case, right? So the degree of effort required, that's arguable. But just the discernment of picking one from a number of outputs and going, that one, that's not enough. I think you need to take the idea and go, okay, so how can we develop that? And you kind of have to interfere with and manage the process with your creative training, to get an output that's defensible. Does that answer your question?

Joel Pearson: Eeeeh, it's in the right direction. 

Jules Munro: Well, I'll give you the lawyer one. It depends.

Joel Pearson: Depends. Let's find out. Yeah. 

Vince Frost: We've got some questions from the audience, which I want to share. If crafted prompts are the new creative propositions, or briefs, is there less or more need for strategy in the creative process? This is from Earl.

Kartini Ludwig: Prompt engineering. Yeah, it's interesting. I actually noticed recently, when I looked on Fiverr, how people are actually advertising themselves as prompt engineers now, a real growing thing. And it's… in the same way as, I don't know when this was, in the rise of Instagram, I was working in fashion in New York, and I remember once there was a girl who said, 'I'm here for social, I'm the social media person'. And I literally kind of laughed at that moment, because I was running around like a crazy person organising, like, guest lists and things. And I was like, 'That's not a job!' And now look where we are. So yeah, I think the idea of prompt engineering is interesting. I think it could become a job in its own right. But I don't think it makes strategy, or the support of other people in a collaborative environment, obsolete by any means. I think there's a lot of conversation that goes on around that, or, you know, you might have different people that you connect with, or groups, as people find each other, to discuss the different ways and approaches you might use to prompt an AI model. I don't know if that totally answers the question, but I don't think it necessarily means it's going to replace strategy, or the people supporting around that.
 
Vince Frost: There's a really nice environmental question here: what does the panel think of AI using vast amounts of water and energy through data centres and servers? And how can designers consider this when designing?

Joel Pearson: Does anyone know how much they use? I don't know the estimates. I know it's going to be a lot, though, right? The data centres and the energy pull on these, I know, are substantial. And it's something to think about.

Kartini Ludwig: I guess it can probably be built into something to do with your responsible approach. And something we haven't exactly got to is the frameworks around this, which, you know, have not really been developed yet. This is really prime time. Because things are developing so rapidly, it's not really the right time for, I guess, full-blown regulation. Although that doesn't mean it's not the right time for the universities, and even, you know, Creative Australia or something, to be working towards creating a responsible AI framework, or some kind of protocol, that can address things in the sense of: how can you be conscious of the particular data source of the model that you might be using? How can you be interrogating that a little bit? You know, is the company involved somehow trying to offset their carbon emissions? And is that what's going to draw you to one model over another? But those unfortunately don't currently exist. So it's something we have to work towards.

Jules Munro: No, and it's vulnerable to greenwashing as well, but it's clearly an issue, and everyone's blindly rushing into the utility of it without thinking of the cost. I really don't know where that would end up, because if you try to do an ethical offering, then you have to expose your supply chain. And yeah, it's a very difficult one.

Joel Pearson: I don't think it'll change, because there are arms races all the way down, right? Each company is battling it out for market share as fast as it can. And then each country is also doing a similar thing. Like, when the proposal was to pause everything, everyone kind of laughed and said, yeah, sure. Because it's an arms race. If one company puts its tools down, or says let's slow down and wait until we can use all green energy, it's going to fall behind and lose market share. So they're not going to do it. And the same version applies to countries. And because of this sort of blind race, that's also why I think the ethics are more of an issue, because everyone's just rushing. You know, what was that phrase from Facebook, 'move fast and break things', right? And we don't really want to be saying or doing that if we're breaking our youth, right? The minds of our youth. Yeah.

Audience Question One: To preface my question: as an emerging designer, the reason I don't feel threatened by the use of AI is that my current understanding is that it's only capable of inductive and deductive reasoning. So, like Joel was saying, it can only take something and then remix it. Whereas my interpretation of industry and the value of design is that we have abductive logic and reasoning, and we create things that don't already exist. You kind of mentioned it earlier with, like, the social dreaming, and right now on the horizon there are no forms of strong AI emerging. So my question is: in your experience, whether it's within established forms of computing or even theoretical ones, is abductive reasoning within AI possible? Because that's when I think it would really, like, literally start taking the jobs of the creative industry.

Joel Pearson: I wouldn't rule anything out, including consciousness, with AI, right? All the predictions I've seen over the last year and a half, from the people building these things, have basically been wrong. So I'm not going to make any predictions, because they'd probably be wrong too, right? So I would not rule any of that out. I think the creativity we're going to see from AI generating things works in a very similar way to the way humans generate. I mean, with humans, can you create something that is not based on things you've been exposed to before? Whether it's AI or a human, the elements get put together in novel and completely new ways, right? But they're still based on things from the past, that you've seen, done, been exposed to, in the same way that all AI is based on data from the past, from datasets. So aren't both just combining things in novel ways? Do you agree?

Audience Question One: Oh, sorry, could you repeat that? I think I just like phased out. 

Vince Frost: What the hell? She just dozed off. 

Joel Pearson: Sorry.

Vince Frost: Work on your presentation. 

Jules Munro: It was the drinks, it was the mention of drinks.

Joel Pearson: I would not rule it out, yeah. I think we've gotten it wrong so much so far that I don't want to rule anything out with AI. And that's part of the scary stuff.

Vince Frost: But what do you feel about that? Do you feel disheartened by that?

Audience Question One: Genuinely, no. Only because I think the current applications of it are used in such, like, static contexts, like image generation. I think AI isn't currently capable, to my understanding, of addressing complex issues such as, like, services. So I'm not quite worried yet, and I hope I don't have to be worried for the lifetime of my career. Hopefully.

Kartini Ludwig: I will just say on that, though, and I don't know if this totally relates, but there's nothing really stopping people from creating multimodal systems. Like, there's no reason we can't just combine all the different things into one system, so that it can, sort of, re-enact those multiple stages, I guess, that maybe you're suggesting. But my position is always just that it should be seen as an assistive tool, and we should be advocating that, in every way possible: that they are tools to assist designers and artists to unlock creativity. So yeah, that's my two cents.

Jules Munro: Let's change the acronym. Assistive Intelligence.

Kartini Ludwig: Yeah!

Vince Frost: That's good. We've got to wrap it up, guys. Thank you so much to the panel. It's been a really interesting conversation.

Centre for Ideas: Thanks for listening. For more information, visit centreforideas.com, and don't forget to subscribe wherever you get your podcasts.