
Who's Afraid of AI?

You can go from something being an idea on a whiteboard to, six months later, being in the hands of a billion people.

Toby Walsh

Asking ChatGPT to do your homework, or having an algorithm decide whether you get a job interview, is part of the new normal. The AI revolution has reached the point where we live and work with AI-enabled devices, and the line between AI and human can be hard to find. AI expert Toby Walsh (Machines Behaving Badly) and journalist Tracey Spicer (Man-Made) explored what this new world means and discussed some of the big questions around ethics, bias and ownership of AI with Erik Jensen.

This event was presented by the Sydney Writers' Festival and supported by UNSW Sydney. 

Transcript

UNSW Centre for Ideas: Welcome to the UNSW Centre for Ideas podcast – a place to hear ideas from the world's leading thinkers and UNSW Sydney's brightest minds. The panel discussion you are about to hear, Who's Afraid of AI?, features UNSW Sydney's AI expert Professor Toby Walsh, author Erik Jensen and journalist Tracey Spicer, and was recorded live at the 2023 Sydney Writers' Festival.

Erik Jensen: Hello, my name is Erik Jensen. I'm the Editor-In-Chief of Schwartz Media and the founding editor of The Saturday Paper. I want to start by acknowledging the traditional owners of this land, the Gadigal people of the Eora nation, and pay my respects to elders past and present and any First Nations people who might be in the room. I wanted to start with a note of reassurance for anyone who is afraid of artificial intelligence. And the reassurance is that I asked ChatGPT to plan this session for me. This is true, and it just wasn't very good. It was like a school essay where someone would skim the chapter titles, and I'd be embarrassed to ask the questions that it offered me. But it did write an introduction for the session, which was quite good. And I will read it now, partly because I also wanted, for the sake of representation, a machine involved in the panel.

So this is ChatGPT's introduction to the session of Who's Afraid of AI?: 

“Welcome, everyone, to this Writer's Festival panel on the topic of AI. Our panellists today are Toby Walsh, author of Machines Behaving Badly: A Threat to the Future of Humanity (which is not the title of his book), and Tracy Spicer, author of Manmade: The Essential Skewed Guide to Gender in the Workplace (which is also not the title of your book).

I'm your moderator, Erik Jensen, a journalist and author based in Australia (which is true, but not something I would say).

Toby Walsh (this is still ChatGPT) is a renowned AI researcher, scientist, and author who has spent his career working to ensure that AI is used for the benefit of humanity. He is currently a Professor of Artificial Intelligence at the University of New South Wales and has published over 400 papers on AI and robotics. In his book, Machines Behaving Badly, Toby explores the potential dangers of AI and offers insights into how we can manage this risk to create a better future.

Tracy Spicer is a well-known Australian journalist, author, and feminist who has been a leading voice in the fight for gender equality in the workplace. She is the founder of Outspoken Women Speakers Agency, which represents women and non-binary people, and is a regular commentator on issues of gender and diversity. In her book Manmade, Tracy sheds light on the pervasive gender biases that exist in the workplace and offers practical solutions for creating a more equal and inclusive work environment.

As for myself, I'm a journalist and author who has written extensively about politics, society, and culture, and the author of Acute Misfortune: The Life and Death of Adam Cullen (got that title right), a biography of the controversial Australian artist which was later adapted into a critically acclaimed film (their words, not mine).

I'm honoured to be moderating this panel and look forward to a stimulating discussion on the intersections of AI and gender in the workplace. With such esteemed panellists, we're sure to have a thought-provoking and insightful conversation. AI can be good for the future and for society.”

Thank you, ChatGPT. 

Toby Walsh: It says different things every time you ask it. I once asked it for a biography of myself, and it said I had actually given up AI, taken up professional poker, and won seven million dollars at the World Poker Championship, something I think I would have remembered.

Erik Jensen: Well, I should be clear. That's its second attempt. I did it the first time, and I said, "Could you make that better?". And it did.

Toby, I want to start with a question for you about why you became an AI researcher. 

Toby Walsh: Right. It's nice that you asked me that at a book festival, because it was books. As a young boy I was reading Arthur C. Clarke and Isaac Asimov, reading some more optimistic views of the future. It is hard not to be nostalgic when you look back at the past, but I think, back then in the '70s and '80s, we were more optimistic about the future. I remember watching the Apollo moon missions, and people did think that science and technology were going to build us a better future. I was very lucky that, as a very young boy, I got my hands on a computer when I was 13, and started writing and selling computer programs when I was 14, realising I could be someone who could help build that future, the sorts of intelligent robots and smart computers I was reading about in Isaac Asimov and Arthur C. Clarke.

AI was something that I could help bring into existence and that I thought at the time, naively maybe, was going to be a good thing. 

Erik Jensen: I was going to ask you if you maintained that optimism, but that would overshadow the rest of the panel. So I'm not going to, that's the kind of surprise we're going to hold for later.
Tracey, you write about this in your book, but what set you off on this seven-year journey of wanting to understand AI?

Tracey Spicer: A couple of different things happened in the household one morning while I was making breakfast. My then 11-year-old son turned around and said, "Mum, I want a robot slave". And I said, "Darling, what are you talking about?". He said, "Oh, well, I've been watching Cartman on South Park". We're obviously terrible, appalling parents. Cartman was ordering around his Amazon Alexa like some kind of colonial overlord, in the most shocking language, and, as a lifelong feminist, I had an epiphany: the bias of the past, this idea of women and girls being servile, was being built into the machines that will run our futures.

And over the next couple of years, I saw something very interesting happen to our son and our daughter. Our daughter, who was always stronger in math and science, started to become more interested in the humanities, and our son, who was always very big on English and reading books, started to get into building computers with his friends in class. And all of a sudden, I just saw what the future was going to be like, and the future was looking very, very male and the more I looked into this subject, the future was also looking very, very white. So I went right down the rabbit hole of algorithms being used, whether it's applying for a job, getting a ventilator in the hospital, or wanting to emigrate to another country. Basically, these machines are deciding our futures, and it's not a future that's fair or equal.

Erik Jensen: Both of you begin your books by looking into the people who were there at the very start, the kind of godfathers or founding fathers of AI. And I think, for the arguments made in your book, it's important to understand what they were thinking, the sort of ideological strains that were underpinning this group of people and who they actually were. Toby, I wonder if you could start by telling us where AI came from? 

Toby Walsh: Well, it didn't arrive in the last few months, as some might think because of the attention it has gained; it goes back at least 70 years. You can trace the intellectual origins back at least to the ancient Greeks and the invention of logic, but the field of artificial intelligence has a start date. It's very unusual for a scientific discipline to be able to say, "It started in this particular year", and it started in 1956, when the very first conference was held. John McCarthy, who went on to become an eminent computer scientist at Stanford, came up with the name 'artificial intelligence' to describe the conference. The term didn't exist before that; he coined it then, and he brought together quite an interesting collection of people.

You've got to realise the intellectual moment: computers were just becoming available, and people were asking, "What can we do with them?". He and a group of other people, mostly white males, came together and said, "Well, maybe we could get them to do the sorts of things that humans do": seeing the world, reasoning about the world, and, with robots, acting in the world.

And so they held the very first conference, and at the time, when they were applying for funding, they said, "We will make significant progress at solving this problem over the course of a summer", which was wildly optimistic. They didn't make significant progress.

The more I've studied it since then, the more respect I've had for the human brain and for human intelligence. My wife is German, I'm English, and to see my daughter learning to speak two languages simultaneously and not mix them up is just amazing; it shows how well the brain does, to see intelligence developing in front of your eyes.
But we are now, I think, starting to make real progress, getting AI out of the laboratory, where I've been working on it for many years, and into people's hands.

Erik Jensen: Tracey, as you started your book, you tried to understand who was there at the very beginning. Were you surprised by how few names there were, and who those people were?

Tracey Spicer: I was infuriated by the story of the founding fathers, for the simple fact that it seems to ignore the many women and people in marginalised communities who were involved in the development of the computing industry and, latterly, artificial intelligence. The wonderful Distinguished Professor Genevieve Bell from the Australian National University says that "AI was cruel from the start", because this Dartmouth conference that Toby just spoke about was all about money. It was all about corporations. It was all about capitalism. There was never any discussion about how AI could help humanity, equity and the path towards progress.

So, one of the great joys of writing the book and doing the research was delving further back into history and realising that Australia's Indigenous peoples are the world's first scientists, and that there's new scholarship suggesting Australia's Indigenous women are actually the world's first coders, because weaving is a rudimentary form of coding: it's binary. In weaving there's a warp and a weft; in knitting there's knit and purl. And there are all these women who have been forgotten: the world's first computer programmer, Ada Lovelace, the daughter of Lord Byron; Grace Hopper, who helped create COBOL, a wonderful computer language. So I suppose that's why I started my book like that, because AI was very narrowly defined and researched at the start, and it was very, very male-centric from the get-go.

Toby Walsh: It's worse than that, because for the first 20 years the largest funder of AI research was the US military, and it was designed for a purpose, which was to replace human soldiers with robot soldiers. That's where most of the funding came from for the first 20 years.

Erik Jensen: And even now, it's only the major tech companies that can afford to fund AI development, at the scale that they...

Toby Walsh: Yes, well, at least it's not the US military anymore, because they don't have pockets as deep as the tech companies, who are pouring literally billions of dollars into it. And that, again, is concerning: the industry is now being captured by companies, either in China or in California.

Erik Jensen: And there are multiple points in both your books at which they really are warnings about capitalism more than warnings about technology.

Toby Walsh: Yes, well, the modern-day corporation is really an invention of the Industrial Revolution. It was invented so that we could prosper from the invention of the steam engine and then the electrification of our lives.

It was designed to allow us to take risks, to allow us to exploit those technologies, but I think we're discovering that there are certain challenges. Climate change is just a classic example of that, right? It's 70 companies that are responsible for half of all global emissions. What matters is not reducing your own individual fuel consumption and carbon footprint; it's about persuading those 70 companies to do the right thing. And it's the same with persuading the large tech companies to do the right thing, and they're certainly behaving somewhat irresponsibly today.

Erik Jensen: Tracey, as you worked on your book, what became your biggest fear about AI and what it might do or be able to do?

Tracey Spicer: This was really interesting because every expert that I interviewed said the same thing. I started with this fear about it being an existential threat to humanity, and that still lingers around artificial intelligence in my mind. But I think a bigger fear is almost like the microaggressions, the biases, this discrimination that will widen the gap between rich and poor, and create more dislocation in society. 

Everyone in the audience here knows that we've all been going down silos of opinion and information in the last five to 10 years because of the rampant march of technology, and that creates social dislocation. So certainly, we're going through the fourth industrial revolution, in which the racist, misogynistic and colonial structures of the first revolution are being scaled up. So I fear greater division, more wars.
Everyone I interviewed said they fear there's going to be more warfare, more disagreement in society, less collaboration, more extreme opinions, and less trust in the institutions of society, which is what we've seen as journalists with less trust in the media and government. So yes, there are fears, but they're creeping up on us, because we're marinating in this sauce of bigotry, and we can hardly see it because we're surrounded by it every day.

Erik Jensen: Toby, we are going to talk about the good things from AI, I promise. I know that you've given your whole life to this. 

Toby Walsh: If we're not careful, people in the audience will wonder why I get up in the morning and continue to work on AI if it's just going to destroy humanity.

Erik Jensen: The fears that you have about AI, though, are slightly different from Tracey's.

Toby Walsh: No, I actually share Tracey's fears. You do hear some people stand up and talk about the existential risk that robots are going to take over. Quite honestly, robots only do what we tell them to do; the question is whether we're telling them to do the right things. And we've already had a foretaste of this with social media, and we got it wrong with social media. We discovered that social media wasn't bringing us together; it was driving us apart. And now we're about to do that again, but on steroids, right? We're going to be able to do that with huge personalisation, with huge amounts of data, with very persuasive tools that show us stuff that's not true. It used to be that if you saw a picture of the Pope in a white puffer jacket, you'd say, "Oh, the Pope's got a white puffer jacket". No, the Pope hasn't got a white puffer jacket. Somewhere, someone had the idea and literally typed "I want a picture of the Pope in a white puffer jacket" into one of these tools.

Erik Jensen: And those of us who saw the picture of the Pope in the white puffer jacket were all beneficiaries of that.

Toby Walsh: We were. The person who didn't actually win out of that, sadly, was the Pope. He should have gone out and bought a white puffer jacket. He owned it; he'd have won the youth vote on that one.
But that's just an example of what we're going to be able to do. We're going to see pictures of Trump being arrested by the NYPD, again completely fake. And now you start to realise how dangerous this is going to be, because it didn't take much more than that to start January the sixth. That is the potential to really corrupt our democracy, to pervert elections, to cause violence.

We're going to have to immunise ourselves to the idea that things we see are not necessarily true anymore, things that we hear are no longer true anymore. And where does that leave us as a society where you can't trust anything you read or anything you see?

Erik Jensen: Tracey, as you worked on the book, you started thinking through how we address these concerns and what needs to be put in place. As you say, these things weren't in place ahead of social media becoming as ubiquitous as it is now. What do you think needs to be there to properly regulate AI and make it safe?

Tracey Spicer: We need changes at both the micro and macro levels, and we can all play a part in this. Personally, I am a glass-half-full kind of person, despite all the doom and gloom and fear. At an individual level, we can do simple things like changing our Siri or Alexa to a male or gender-neutral voice instead of a female voice, because the chatbots in the business and finance sectors all have male voices; it's only the domestic chatbots that have the female voices. We can catch a Shebah instead of an Uber. We can talk to our kids, friends, and colleagues about bias in artificial intelligence, because every great social justice and civil rights movement around the world starts with education.

It continues with grassroots collective movements. And then it gets the ears of government and business, and they're forced to take action. That grassroots movement has been happening for the past 10 years and exploded into prominence in the last six months. I highly recommend following a wonderful woman called Dr Joy Buolamwini from the Algorithmic Justice League.

They sound like a bunch of superheroes because they are. There's a documentary called Coded Bias on Netflix that really helps lift the veil on all of these problems. But ultimately, we've got to trust our institutions to take action, to regulate and legislate. I know the European Union is close to finalising its suggested guidelines for governments around the world. But of course, the technology is moving so quickly that regulation is racing to keep pace with it, and regulation is incredibly difficult because a lot of these technologies are created in Silicon Valley. And of course, there's this free-market libertarian aspect of Silicon Valley that both Toby and I write about in our books; Ayn Rand is like their God. So we've got to get past that cultural barrier and explain that this is just like when cars came out and had no seatbelts. It's crazy when we look back on it, and that's where AI is now. We need the seatbelts.

Erik Jensen: Toby, what's your vision for how to regulate and what role the state has in AI and what kind of protections need to be in place?

Toby Walsh: Regulation has a really important role to play, and some of it is not necessarily about coming up with new regulation. There are a few new things that AI throws up that we haven't had in the past, but a lot of it comes back to basics, like, 'well, that was just bad behaviour', and we have existing laws for that. We have existing product liability laws; we have existing privacy laws. For a long time, I think, we believed you couldn't and shouldn't regulate the tech space, the digital space.

You couldn't, because somehow it was different: it wasn't physical, it was just bits, and the companies crossed international borders, so rules didn't seem to apply to these multinational companies. And you shouldn't, because it was going to stifle innovation. That might have been true at the beginning of the internet, in the '90s and the 2000s. But it's not true anymore, and I think it's becoming increasingly apparent that you can regulate, and you should regulate.

Indeed, I have friends who work at these companies, who are nice people, and they will actually say to you quietly over a beer, "We would welcome more regulation, because at the moment it's a race to the bottom. We're doing things not because we choose to do them, but because our competitors are going to do them". And you're seeing trillion-dollar industries being created in front of your eyes, and that's what's really distressing: in the last six months, to see this explosion of hype in AI take off, and to see companies that had started to behave a little more responsibly, that had spent the last couple of years being persuaded that they need ethicists and philosophers to help them think carefully through how to do this responsibly.

And then to see them throw caution to the wind. As an example, Google had their version of ChatGPT, called LaMDA, which came out nearly a year ago, and at the time they did the responsible thing. They said, "We are not giving this to the public because we're concerned it's going to say stuff that's untrue. It's going to cause harm. It's going to damage our reputation as purveyors of information and truth. So we're not going to release it to the public". And they sat on it until OpenAI got into bed with Microsoft, who said, "We're going to put this in Bing search". Google became nervous that their competitive advantage would be lost if we all switched from Google to Bing, and so they immediately announced, on the same day, that they were going to release this. As far as I know, they've done nothing to fix all the problems they had said made it too dangerous to release six months before.

And so they're just backpedalling on their responsible behaviour. You understand why: it's a trillion-dollar industry being created in front of our eyes, and whether Google exists in six years' time depends upon it. They could easily go out of business; if all of us switched to using Microsoft for our search, then Google's income would completely disappear.

Erik Jensen: Tracey, I'm keen also to ask you about those things in AI that you're excited about, the things where you see possibilities, or where you see that life can be made better by this technology.

Tracey Spicer: Particularly in medical technology, there's a wonderful app that can tell you whether you're likely to get breast cancer in the next five years. I mean, this is phenomenal tech. There are wonderful innovations being used in New York with older people with dementia and Alzheimer's – a robot that will tell the person a story, and the person will respond to the story. It's actually helping people with their diagnoses. Obviously, in a perfect world, we would like humans to do this, but we know that, particularly in a pandemic when people were locked down, this technological solution was valuable.

There are also tremendous advances in augmented reality, extended reality, all of this kind of stuff – the headsets. Honestly, this was a brave new world for me researching these chapters, but they're using them in diversity and inclusion training in the workplace to teach empathy, to put you in the shoes of someone in a wheelchair or a person of colour. So there are a lot of wonderful things happening. I read two days ago about antibiotics that had been created to stop the equivalent of golden staph tearing through hospitals, and that breakthrough was thanks to AI in med tech. So I don't want everybody to think it's all negative, all doom and gloom. But we need to keep the positives while being able to reduce the negatives.

For example, also in a hospital setting: I'm 55 now. If there's a ventilator there, and a Frankenstein data set has been used to create the algorithm that chooses who gets the ventilator, they will more likely give it to someone who's 30 than to me, who's over the age of 50. So all of these pieces of discrimination and bias that are being built in do pose a threat to humanity, and they reduce the possibility for us to take all of those positives out of AI. And there are tremendous advantages.

Toby Walsh: That ventilator example is a good example, though, of how it's not a new philosophical problem, right? This is a problem that the medical profession has faced and has not necessarily solved in an optimal way; it's been solved ad hoc. The technology hasn't actually changed the problem; it's just made it more concrete that we need to think carefully as a society: well, what are our priorities? Are we actually going to be saying to 55-year-olds that they're less important than 30-year-olds because they have less of their life to live? Maybe we've been doing that implicitly, by the way the medical profession has made those decisions somewhat informally. The challenge you get when you write computer code is that computers do only what you tell them to do, exactly what you tell them to do, so now we actually have to face up to the stark, challenging societal and philosophical choices as to what our values are here.
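[To make that point concrete, here is a deliberately explicit toy triage rule in Python. It is a sketch invented for illustration, not any hospital's actual policy and not anything proposed on the panel: the point is simply that once the decision is written as code, the value judgment becomes visible and auditable.]

```python
# A toy, purely hypothetical triage rule. Writing the choice as code
# forces the value judgment into the open: change the ranking key and
# you have coded a different set of values, and either way the policy
# is explicit and auditable.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    age: int
    survival_odds: float  # clinician-estimated probability of benefit

def allocate_ventilator(candidates: list[Patient]) -> Patient:
    # Here we rank purely by estimated benefit, NOT by age.
    return max(candidates, key=lambda p: p.survival_odds)

patients = [Patient("A", 30, 0.6), Patient("B", 55, 0.7)]
print(allocate_ventilator(patients).name)  # "B": age alone doesn't decide
```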

Erik Jensen: Yeah. And the issue is there's actually a lack of ventilators, just to be clear. The AI is not just choosing to not give elderly people ventilators. 

Toby Walsh: That's true. But when you're facing a pandemic of respiratory disease, maybe we will never have enough ventilators, and you're going to have to make difficult choices. But computers actually offer you the opportunity, perhaps, to make better choices.

I mean, at least you're being honest for once about what those choices are, as opposed to what we do at the moment, which is sort of leaving it to the doctors. The doctors have sadly been forced to make those really tough choices for us as a society. So I actually say in my book that this could be the golden age of philosophy, because we have to face up to some really tough choices about what sort of society we want to be as we try to code these solutions into computers. And now we can't just pretend and waffle away and say, "Oh, well, the doctor will decide".

Erik Jensen: And that's something that, I guess, you identify in your book as well, Tracey: that what we're confronting is the problem of the data sets we're using, or the pre-existing behaviours that we're feeding back into the machine.

Tracey Spicer: Yes, it's a combination of things. It's historical data sets which, by their very nature, if part of it's taken from the 1980s, every doctor will be a ‘he’ and every nurse will be a ‘she’. And then, of course, there's the bias that's built into the algorithm from the programmer because we all have unconscious bias and one of the people I interviewed, a wonderful expert Ivana Bartoletti, said, "An algorithm is an opinion expressed in code. Technology is never neutral". 

And then, of course, with machine learning, I describe it like a white supremacist going down the rabbit hole of conspiracy theory websites: everything gets exacerbated and exaggerated through machine learning. So it starts with the data sets, continues through the algorithms, and then through machine learning; that's where it goes crazy, particularly with generative AI. But I want to touch on something that we were talking about before with the ventilators, because another expert I interviewed said, "Look, there actually could be a benefit in the way that we can see the bias and discrimination more clearly when artificial intelligence is involved. It almost leaves a paper trail".

And I think that might be a light at the end of the tunnel. There's bias and discrimination in everyday life. We see it all the time. A lot of it's unconscious from each and every one of us. But if we can follow some kind of paper trail, if we can see that the ventilator has clearly discriminated, maybe there's a place for the law there to prove that discrimination.

Erik Jensen: Toby, I asked Tracey what excited her about AI; I feel I should ask you the same question.

Toby Walsh: What excites me is that you're using AI already, and you don't even realise it. Every time you open your smartphone and ask for directions to go somewhere, every time you sit in your car and the sat nav gets you to where you need to go, that was some artificial intelligence. It's actually one of the oldest AI algorithms, developed for a robot called Shakey back in the 1970s. Shakey was a fun little robot; as its name suggests, it sort of moved around a bit like this... it wasn't very solid as a robot. But it was the first time that we tried to build the Hollywood robot, the one with cameras on board, with a computer on board. It was autonomous; it would go off and make its own decisions. You'd say, "Go to the library and find me a book", and it would have to work out the shortest way to get to the library and back to where it was. That's an algorithm. It's got a name: it's called A* Search. That algorithm is now in your phone, in your car. The funny thing is, it's now not navigating robots, it's navigating humans.

But at least I find I'm lost a lot less than I used to be. When I was doing the navigating, I used to get terribly lost all the time, throwing the map over the backseat of the car in disgust at the fact that I'd managed to lose myself again. Now I get to where I need to be on time, because an AI algorithm is doing it. So our lives are full of these little AI algorithms that are actually making our lives better, easier and faster.
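[For readers curious about the algorithm Toby names, here is a minimal sketch of A* search on a toy grid in Python. The grid, the unit step costs and the Manhattan-distance heuristic are illustrative assumptions; a real sat nav runs A* over a road network with travel-time estimates.]

```python
import heapq

def a_star(grid, start, goal):
    """Find a shortest path on a 2D grid using A* search.
    grid: list of strings, '#' marks an obstacle.
    start, goal: (row, col) tuples.
    Returns the path as a list of cells, or None if unreachable."""
    def heuristic(cell):
        # Manhattan distance: an admissible estimate of the remaining cost,
        # which is what lets A* expand far fewer cells than blind search.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    # Each entry: (estimated total cost, cost so far, cell, path to cell).
    frontier = [(heuristic(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] != '#':
                nxt = (nr, nc)
                if nxt not in seen:
                    heapq.heappush(frontier, (cost + 1 + heuristic(nxt), cost + 1, nxt, path + [nxt]))
    return None

grid = ["....",
        ".##.",
        "...."]
print(a_star(grid, (0, 0), (2, 3)))  # a 5-step shortest path around the wall
```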

Erik Jensen: And those are the small things. You've also done work where you've developed AI minesweepers; you've done substantial things that artificial intelligence can help us with to make things better.

Toby Walsh: Yes, we've been working with some very large companies, looking at how they can optimise their supply chains. When I started that project, I remember going to the CEO and saying, "At the end of this job, I don't want to hear you tell me that you're firing 10% of your drivers. Just promise me that you're not going to do that". And we saved them 10% of their transport costs. This big multinational company has 800 trucks and spends a couple of hundred million dollars on transport, so it was something like 30 or 40 million dollars a year we were saving them. It's the only job I've ever worked on where the CEO said, "Those strange, geeky people: do whatever they say".

And so we saved them a lot of money, but what was actually much more rewarding was that all of that saving was fuel; they employed the same number of drivers. They did a survey of the drivers after the job: drivers were happier because they got home on time. They weren't getting home late; they got to the places they needed to be before they closed.
And they were spending 20 to 30 million dollars a year less on fuel, which is a lot of fuel, but also a lot of CO2, because all of that fuel becomes CO2.

Erik Jensen: Tracey, as we've touched on several times now, the last six months have seen a rapid acceleration in people's awareness of AI and its ubiquity. Given that you'd been working away at the book for seven years, were you surprised by the sudden rush towards the end?

Tracey Spicer: I was surprised and relieved. When I started writing, I thought, no one's going to read this, no one cares. Then Toby and other experts did a lot of work, and now, fortunately, it's a talking point. But yes, I was madly scrambling through the editing process, shoving anecdotes about ChatGPT in left, right, and centre.

The other thing that we did very quickly was create the cover using AI. Now, Simon & Schuster have assured me that they paid the cover designer; I have to make that clear. And it was his idea. He said, "Look, I want to create the cover using Midjourney, this app, to show people the impact on the creative sector, whether it's words or images".

Actually, incidentally, how many of you have used an image generator like Midjourney or Stable Diffusion? It's remarkable how effective it is. You put in prompts; we put in 12 different words, and it came up with this incredible image. Although the first image was not so great, because of the biases built into it. I wanted an image of a strong robot woman looking to the future with concern but hope. I was quite specific, but it came up with this image of a very sexualised robot woman with a tiny waist, huge breasts and enormous biceps. So obviously, the algorithm reads 'strong' as 'guns'. A few friends of mine have joked that Midjourney is very much like a 14-year-old boy in the way that it views the world, so we had to tweak the prompts to get a better, more stylised image. But it also taught us what a challenge this is, particularly for the creative industries, because there's a copyright issue. Billions of images were scraped to create this image, and they're associated with people's copyright. If people can do this easily and cheaply, or for free, in a lot of cases there are really no laws to protect the creators at this stage.

Toby Walsh: But there are laws. 

Erik Jensen: Copyright law.

Toby Walsh: Exactly, copyright law. That's where we've been sold a lie again by Silicon Valley. They've scraped copyrighted material, and it's not clear that that is actually an acceptable use of copyright. There are class action suits going on in the US on that very issue, right? This is like the Napster moment. Streaming other people's music was violating their copyright; scraping people's images is perhaps violating their copyright too, and that has to be decided by the courts.

Erik Jensen: What is it?

Toby Walsh: But I think it's also worth pointing out how quickly the field is advancing. That's a pretty good image, but if you go back two years, it was pathetic and grainy and fuzzy, nothing you would write home about. To see the technical advance, and then also to see how quickly it's been put into the hands of people; that's why it's got everyone's attention. It's not surprising that ChatGPT is the fastest-growing app ever, faster than Facebook, faster than Instagram, anything. Nothing else has captured users as quickly. It was a million people at the end of the first week, 100 million at the end of the second month, and now it's in the hands of over a billion people. And that is why we are, I think, in some sense at a unique moment in history. We've gone through other technological changes and transformations. When we invented the steam engine and the internal combustion engine, when we electrified our lives, we changed our world with technology quite radically. But all of those revolutions happened slowly. It took 50-odd years for the Industrial Revolution really to have its impact; it started in the northeast of England and slowly spread out around the world. Now you can go from something that was an idea on a whiteboard to, six months later, being in the hands of, and being used by, a billion people. We've never had technologies before where you've been able to have that footprint so quickly.

Erik Jensen: And does that mean we're at a kind of tipping point where the questions that are in your book and Tracy's need urgently to be addressed? 

Toby Walsh: They do need urgently to be addressed, because even small harms, when multiplied by billions, are significant. And we're already starting to see it. Two or three days ago, the stock market was moved by one of those fake images, a picture of the Pentagon with an explosion, completely fake. But stock prices were moved by it. We're going to see elections changed. I'm pretty fearful of what's going to happen in the US presidential election, because we saw what happened with the misuse of social media. Now we've got these tools, and everything you see, you have to question.

Erik Jensen: And that does, though, perhaps swing people back to more discerning engagement with the world, you know, greater use of newspapers. 

Toby Walsh: As an editor of a newspaper…

Erik Jensen: I'm not pushing for newspapers, but there is you know…

Toby Walsh: No, but I think it really is the moment. One of the consequences of this is we're going to suddenly realise that you can't trust anything on social media, essentially, unless you know where it comes from. We're going to have to ask, "Well, where do we get news that we trust? Where do we get information we trust?". We'll get it from the old-fashioned news sources and encyclopedias, the places that we used to trust.

Erik Jensen: And that goes to a lot of your thinking, Tracey; it revolves around education, needing to know what we're doing and being engaged with trying to change minds before people are manipulated by data.

Tracey Spicer: Yeah, what I hope happens from this era is that it ushers in a golden age of critical thinking, because we need it now more than ever before, particularly with our children or the young people in our lives. There's that expression 'digital natives', yes, but critical thinking goes above that; it's more in the realm of the social sciences. What I hope we do as a society is sit back a bit and realise we are living through late-stage capitalism, and start to question the really big structures around us. Towards the end of my book, I have a chapter on utopia and a chapter on dystopia, and a lot of the utopian thought of the last couple of hundred years presents socialist societies as a way forward. What I would say is that we will certainly need a large redistribution of wealth if it keeps going the way it is, because big tech is like big tobacco. Some of these companies have profits larger than the GDPs of nation-states, and something's got to give. It opens up a discussion about universal basic income and how we can reduce the gap between rich and poor in the future. So I think there are opportunities here as well as dangers.

Toby Walsh: I've got to disagree with the language you used there, 'redistribution of wealth'. Actually, I think we need to push back against the concentration of wealth that we're seeing. We were living in a much more equitable society 30 or 40 years ago, and it's the concentration we've seen over the last 20 or 30 years, a lot of it into the hands of people who have been developing digital technologies, that we need to push back against. It's not a redistribution; it's a return to the equity we used to have, when wealth was spread around a bit more equally. It's only in the last 50 years that we've seen wealth really start to concentrate and the equity in our society disappear. So even if we just returned to what we had, that would be a good thing.

Tracey Spicer: And even Bill Gates says that we need to go back to the way it was. So when you've got a tech billionaire saying, "We need wealth to be distributed more equitably", you know you're at a tipping point.

Toby Walsh: I mean, go back to one of the first technologists, Henry Ford. He was asked, "Why are you paying your workers on the production line so much?". He said, "Well, who's going to buy the next round of cars?", which was very astute. And similarly, our societies are going to implode if we don't actually spread the wealth a bit more equitably, like we used to.

And this isn't the first industrial transformation we've gone through; we can look back at the lessons we learned from those past changes. When we started the Industrial Revolution, we changed our lives dramatically, right? We used to go out and work in the fields; then we found jobs in factories and offices. But we changed our society to deal with those changes. Marx and others were proved wrong: the wealth was not concentrated into the hands of just the owners of the factories. We introduced unions to protect the rights of workers, we introduced the welfare state to support people, we introduced universal education to prepare people for jobs, we introduced universal pensions in many countries. So we made some pretty radical structural changes to society to spread the benefits around and to support the rights of workers through that transformation.

And I think we're going through an equally large transformation, possibly even quicker, and that's the problem; we need to think radically. And as you say, things like universal basic income: we forget we effectively had a universal basic income through the pandemic. It was called JobSeeker and JobKeeper; we paid people to stay home, and the economy didn't crash. It was a remarkable idea that people said we couldn't possibly afford. Well, we afforded it, and interestingly enough, despite all the concerns about the pandemic, people's anxiety levels and mental health improved dramatically through that period, when people didn't have to worry so much about their financial situation.

So I think we need to have these important conversations, and they need to be not just journalists and tech people like myself, they need to be broad conversations in all of our society about, well, what sort of society are we going to build with this technology?

Erik Jensen: I'm going to invite questions from the audience. If anyone has a question or wants to come forward, there are microphones at the front of the stage. While people are moving, I do just want to ask Toby, because you've worked in this field for so long, and this question might not be a fair one: what is the next breakthrough that you expect to see in AI, or that you're excited to see?

Toby Walsh: It's worth saying that the systems we've got today, things like ChatGPT, are still remarkably stupid. I've got an example where you can get ChatGPT to fail to count to two properly. They're very good at repeating back, reflecting back, the sorts of things that you read on the internet, because they're trained on the internet; they're literally a mirror of what you can find there. They're very good at saying things that are probable, but they're not very good at reasoning. And so the next breakthrough, and we don't know how to do this yet, is how we can get them to reason like humans, not just be able to…

They can convincingly say things; sometimes it's only a B plus. The introduction was only a B plus, and that will get better. But equally, they're terribly poor at reasoning at the moment.
So that's good news for humanity. We can still outthink them. But we will work on how we can build systems that can not only command language, like they do now, but also command reasoning.

Erik Jensen: Tracey, is there an AI breakthrough that you're waiting for or expecting before we take questions? 

Tracey Spicer: Nah, I’m happy to take questions, I know we’ve got limited time and I'd love people to have their say and ask what they want to ask. 

Erik Jensen: Let's start over here.

Audience Member: I want to ask about artificial intelligence and the military, and the development of killer robots. What I understand is that the robots will integrate what they see on the ground and make decisions about who's going to live and who's not going to live. Can you talk a bit more about where their development is at, and where the ethics and moral reasoning come in from the military?

Toby Walsh: Well, thank you for that question. That's something that does keep me awake at night, and I've had the privilege of speaking at the United Nations, warning about the dangers of this topic, half a dozen times now.

And unfortunately, you look at what's happening in Ukraine, and you see increasingly autonomous drones; you see warfare being transformed in front of our eyes, and you can see the military 'advantages', and I use scare quotes around that, of the idea. But equally, as the questioner rightly pointed out, machines are making decisions about who lives and who dies, and that's something they should never be allowed to do. Machines are not moral. Machines don't have consciousness. Machines cannot be held accountable for their decisions; only humans can be held accountable. And so we should avoid waking up in a world where machines have the right to make those sorts of decisions, because we know what that world looks like. It's the world that Hollywood has portrayed for us, and it's just going to look like a really bad Hollywood movie. But we don't have to end up in that world. We get to make a choice. There are lots of technologies where we've decided they're just inhumane, terrible ways to fight war, and we don't use them. We've decided that with chemical weapons, biological weapons, nuclear weapons to a certain extent, cluster munitions, blinding lasers; you can go on through the list. There's a whole raft of such technologies. And so there are discussions going on at the UN that I've helped contribute to, still ongoing, and I encourage you to talk to your political representatives and make your feelings clear on these topics. If they feel enough people are concerned about this, then they will eventually be forced to regulate it.

Tracey Spicer: One of the lesser-discussed aspects of killer robots was brought up by one of the Australian experts I interviewed, Dr Catriona Wallace. She said, with the bias embedded in this technology, does this mean that AI bots or drones in war zones will target women, children, and people in marginalised communities? And then there's the flip side of facial recognition technology, which really struggles with people of colour. So if it's a search and rescue mission, say, in Iraq or Afghanistan, and they're using this technology, does that mean perhaps some women will be left behind because they're wearing a face covering? So there's a whole bunch of really problematic issues here, and we know that AI is being used in warfare today with all of these problems embedded.

Toby Walsh: And sadly, they will be used against women and children, because machines are not going to question orders, right? Previously, if you wanted to do evil, you had to persuade people to do it for you. You had to train them, equip them, and persuade them to do your evil. Now you just need one programmer, and no matter how obscene the task, the machines will carry it out.

Erik Jensen: Question here.

Audience Member: Hi. I was wondering about something: you were talking about the intransigence of tech companies about being regulated, that they've kind of been resistant to it. Sam Altman, the CEO of OpenAI, gave testimony to the US Congress saying that he wanted ChatGPT and other kinds of generative AI to be regulated. I have trouble knowing how seriously to take him when he says that, and I wondered if you had an assessment of how sincere he was being, or whether that was a tactical thing to say?

Tracey Spicer: Well, Sam Altman turned around, I don't know whether you saw it, two days ago, and then followed up his previous statement by saying, "We will not do any business with the European Union if they go through with these regulations". So I just can't believe a word that they say, put simply.

Toby Walsh: Yeah, the advice I was told was that if you're called to testify in front of Congress, they're going to beat you around the head a few times, like they did with Mark Zuckerberg. So you can choose to push back against that, but you're just going to look bad, or you can choose to acquiesce. And at least you'll have an easier time when you're up there but it will be the same net consequence.

Erik Jensen: Question on this side?

Audience Member: In the '50s, a writer called Isaac Asimov set out laws of robotics that were supposed to prevent an uncontrolled algorithm. Does anybody talk about that anymore?

Toby Walsh: Does anyone talk about Isaac Asimov's laws? No, because his laws, even when he proposed them, were a fictional device to demonstrate their own inadequacy, right? All his stories were about how you could fall into the gaps between the three laws about not harming people, obeying orders, and protecting themselves.

But people do try to formalise the principles. It's very easy to come up with high-level principles. The famous one of I. J. Good, "Treat others as you would like to be treated yourself", is a pretty good modus operandi, but the challenge is how you turn that into practice.

Audience Member: Hi there. I was wondering if you could speak a little bit about how you perceive AI is affecting the hiring process. 

Erik Jensen: Tracey, do you want to take that one?

Tracey Spicer: Oh, yes, this is something that makes my head explode because the technology still hasn't been worked out properly and if companies like Amazon can't work it out with all of their billions, then we have huge problems. 

So what happens with these hiring algorithms is that, obviously, they use data from the past, and in the past it was predominantly men and white people who dominated the workplace. So even though Amazon, in this particular example from about seven or eight years ago, went back and tried to change the algorithm to remove the bias, it still kept throwing out the CVs of women, people of colour and people with disabilities, because it looked for clues in other parts of the CV. For example, say the CV didn't have your name and didn't state your gender, but it had in your interests, 'I've played in a women's basketball team'; the algorithm worked out that you were a woman and put your CV to the side. So this is a huge problem in the workplace. And I fear that with machine learning, we're going to end up with workplaces that look a lot like Mad Men, right back in the 1950s.

Toby Walsh: I agree. There are over 100 startups in Silicon Valley using AI to scan CVs and help that HR hiring process, and I'm just waiting for the class action suits to happen, because it's a disaster. Lawyers will make a lot of money in this space, that's all I can tell you. As usual.
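[As an illustration of the proxy problem Tracey describes, here is a small, invented Python sketch; it assumes NumPy and scikit-learn are available, and it is emphatically not Amazon's system. A screening model trained on biased historical hires learns to penalise a "women's basketball team"-style proxy feature even though gender itself is never shown to it.]

```python
# Toy illustration of proxy bias in CV screening: the data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)  # 1 = woman; hidden from the model
# A CV feature correlated with gender, e.g. "women's basketball team".
proxy = ((gender == 1) & (rng.random(n) < 0.7)).astype(float)
skill = rng.normal(0, 1, n)     # genuine qualification signal, independent of gender

# Historical hiring decisions were biased against women regardless of skill.
hired = (skill + rng.normal(0, 1, n) - 1.0 * gender) > 0

# The model only ever sees skill and the proxy, never gender itself...
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# ...yet the proxy weight comes out negative: the old bias leaks through.
print("weight on skill: %+.2f" % model.coef_[0][0])
print("weight on proxy: %+.2f" % model.coef_[0][1])
```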

Audience Member: Thanks for your presentation, and thanks for your time. My question is about large language models, just to confirm my understanding of them. Basically, when you input information into these models (say, for example, Alphabet's got LaMDA, Meta's got Llama, and Microsoft's got ChatGPT) and they produce an outcome, who fully owns it? Do you? Do we own it? Or do they own it?

Toby Walsh: Interesting question. Well, what people don't realise, and you should realise it's just the social media playbook again, is that you're improving the algorithm every time you type prompts. All of that information is feeding back into the algorithm and improving it, so every day it gets a bit better. And they typically own the data, which is why, if you work in a law firm or somewhere like that, you've been told, "Don't use these, because you will be giving up client-sensitive information to another company".

Tracey Spicer: Oh, this is probably a tangential point, but it reminds me that if you're not paying for the product, you are the product. A lot of us do an awful lot of unpaid work for the tech giants, every time we fill out a survey, every time we play with this technology. I know we need to play with it to keep up with it and to improve it, especially in the case of ChatGPT, but it infuriates me that it represents unpaid labour.

Toby Walsh: Although, on a positive note, there will very soon be enterprise-level products, so your company will have its own large language model that sits on your own server and that you can type your own data into, so that problem will be solved very quickly.

Audience Member: Thank you so much. It's been such a fascinating session, and I feel like a whole day could have been dedicated to it. But I was wondering, what's your advice to parents, principals and university deans about the use of ChatGPT? Because I know that some schools say, "Yes, you can use it", and others say, "Absolutely not". I mean, I don't know whether you can really ban it; that seems naive to me.

Tracey Spicer: I was at a school the other night, and one of the teachers said to me she'd been using it for six months for her lesson plans and hadn't realised the bias inherent in it.
So my advice is never to ban anything 100%, because we simply can't; it's inevitable, and we need to learn to live with it. Machines and humans need to learn to work together, and we need to master it rather than letting it master us. In the case of ChatGPT, I would highly recommend that people in schools, parents and students learn as much about it as possible, and do play with it, and try to teach it to be better, and keep up with it, because I just don't think we can put too many guardrails on things like that. Obviously we need to learn, particularly teachers, how to work out which students have used it to write an essay, and the only way to do that is to become more educated about it.

Toby Walsh: I'm reminded again, history rhyming, of the debate we had when calculators came out; it was very similar in many respects. And of course, calculators ultimately won, right? If you're in high school or university, you get to use a calculator. I've got a calculator on my watch, I've got a calculator on my phone; you've always got one with you. But the important thing is that in primary school you still don't get a calculator; you get taught how to do arithmetic, because understanding numbers and how arithmetic works is an important skill.

And similarly, I think we'll perhaps end up at the same point, which is that there are important skills you won't learn if you just use these tools: critical thinking, expressing yourself, being able to construct an argument. We may still insist people use pencil and paper, without access to these tools, so that they develop the muscle for those particular skills. And then, of course, once you've mastered that, these are great tools to amplify your productivity. They're going to be part of our lives.

I'm never writing another business letter. I'm never writing another annual performance plan; ChatGPT does that for me, and it was a total waste of my time to do it. Those things were very formulaic; ChatGPT's got the formula, and I can personalise it to what I want, so that's a great benefit. Similarly, they can write lots of code. There are lots of really useful things they can do, and they're going to be everywhere in our lives. They're going to be really useful amplifiers of us, but equally, we must remember, as with calculators and arithmetic, that we need the basic skills.

Audience Member: Hi, thank you very much. Great panel and awesome suit. 

Tracey Spicer: Thank you. 

Audience Member: Feminists rock. So, I want to say I teach critical thinking at a university, and already every essay submitted is run through a plagiarism test electronically, so this will happen with ChatGPT; the university system will be working through it now. But because I do teach critical thinking around gender, class and race, I want to suggest, Toby, about the 50 years ago you were talking about, that there was no equality then.

There certainly wasn't equality 50 years ago, and we might see a more explosive differential now, a kind of colonisation: already there's an explosion of people with disabilities who are still not able to access so many things, and women like me who come from less accessible backgrounds. So what will be really interesting with this is that there will be failures. We will watch failures, and there will be an education in this technology. Thank you.

Toby Walsh: I certainly wasn't suggesting that many of those challenging issues around equality were better 50 years ago. The only point I was trying to make was that back then, I remember us being more optimistic about the role that technology was going to play, and I think we're perhaps less optimistic now.

Erik Jensen: And Tracey, that's really the driving point of your book: that if we're not conscious of bias and inequality in technology, it'll only worsen.

Tracey Spicer: That's exactly right. We can't just sit back and be passive about this; we have to be active. And thank you so much for that comment about people in marginalised communities and those living with disabilities. One thing that really struck me researching the book was smart home technology. I don't know whether many of you have that at home; when I started researching the book, I thought, "Oh no, I just want a dumb home. I don't want to be hacked". But living with a dynamic disability last year, when even getting up to turn a light switch on or off was difficult, I realised there's such a role for smart home technology for people with disabilities, but only if they're included in the design process from the outset, because then it will help the people it's supposed to help the most.

Erik Jensen: On that note, Toby, Tracey. Thank you so much. Thank you everyone for coming. 

UNSW Centre for Ideas: Thanks for listening. This event was presented by the UNSW Centre for Ideas and Sydney Writers’ Festival. For more information, visit centreforideas.com and don’t forget to subscribe wherever you get your podcasts. 
 

Speakers
Tracey Spicer

Tracey Spicer AM is a multiple Walkley Award-winning author, journalist and broadcaster who has anchored national programs for ABC TV and radio, Network Ten and Sky News. The inaugural national convenor of Women in Media, Tracey was named the 2019 NSW Premier's Woman of the Year and accepted the Sydney Peace Prize alongside Tarana Burke for the Me Too movement. In 2018, Tracey was chosen as one of the Australian Financial Review's 100 Women of Influence, winning the Social Enterprise and Not-For-Profit category.

Toby Walsh

Toby Walsh is Chief Scientist of UNSW.AI, UNSW Sydney's new AI Institute. He is a strong advocate for limits to ensure AI is used to improve our lives, having spoken at the UN and to heads of state, parliamentary bodies, company boards and many others on this topic. This advocacy has led to him being "banned indefinitely" from Russia. He is a Fellow of the Australian Academy of Science and was named on the international "Who's Who in AI" list of influencers. He has written four books on AI for a general audience, the most recent of which is Faking It! Artificial Intelligence in a Human World.

Erik Jensen

Erik Jensen is the award-winning author of On Kate Jennings; Acute Misfortune, which was developed into a film; and the Quarterly Essay The Prosperity Gospel. He is the founding editor of The Saturday Paper and editor-in-chief of Schwartz Media. His latest book is his debut collection of poems, I said the sea was folded: Love poems.
