★★★ InterAct ★★★
with Sue Turner OBE
Emma:
I thought, this morning: if you were designing a suit (for example) - something that’s been done with machinery for years - then there’s no point pretending that people still pin it. Well, some people do, but not everyone still pins everything manually. But it’s more in the kind of music/words area where people do feel that they have created something that’s exchanged - from them to other people - and that is the main focus of these questions.
And the processes that they maybe have to go through, to get their expressive creation to a different state, make it a medium of a different quality. And that’s the thing that people are tussling to understand. So what I feel people would probably want to read is some sort of reassurance and some kind of clues as to how they might work around things, to avoid feeling like they’re surrendering themselves - I think those are the big worries.
My first question therefore relates to the fact that a substantial proportion of creatives do see AI as an existential threat to their already poorly supported and often crucially valuable work. In a press article just this week (March/April), someone described it as theft. How can governance slow the speed of such change in the creative industries and how are any measures to be devised and implemented?
Sue:
I think if you were talking to somebody who’s a tech bro, somebody who’s hoping to or actually already is making lots of money out of AI technology, then they would use analogies in this situation to, say, lace makers in the 15th. century. A sort of wonderful craft, absolutely amazing things that they could do with their hands that even today’s most advanced machines can find very difficult. And they would say, “But if we want most people to have access to lace, then we’ve got to use machines to do it.” And the original lace makers may have felt very disenfranchised by the machines. But hey, tough. “It’s for the greater good.” So that would be the tech bro response: if change happens, progress is good and therefore don’t stand in the way of it. “Don’t be the Luddite. Get with using the tools and just accept that life changes.” And so that’s very much looking at a sort of societal good picture and putting a particular spin on it, which we may not agree with.
Emma:
That’s helpful. What comes to mind straight away is the fact that in the creative process, some people are the entire supply chain - they are the originator of the design, the implementer of the tool and then the manufacturer. So I think for those people who perhaps make the pattern, they’re the ones who feel slightly more threatened than ones who perhaps just buy a pattern and follow it, if you like. They’re happy to use any machine that makes life easier. Of course, they’d want to do that.
Sue:
Yeah, absolutely. So again, your Honiton lace maker was somebody that created a particular way of making lace and they had their own very specific, very particular patterns. You know, I don’t agree with this, but I’m just putting that…
Emma:
No, it’s fine. It’s a good example.
Sue:
What we can say is that there is still lace being made by hand in different parts of the world and it’s still very beautiful. But there’s also the possibility of accessibility to the masses by creating things that perhaps aren’t as beautiful or are much more standard, much more formatted, much more processed. But it does mean things are accessible to more people. I don’t think they’re saying that there’s no rôle for creativity in the future. But there becomes that distinction between the very niche (and perhaps, therefore, needing to be very expensive), versus the mass market, very cheap. So you can say, putting their argument for them, that it’s democratizing access to all sorts of creativity and all sorts of creative outputs because things become cheaper, they become more accessible. We can all get our hands on it.
So, for example, I don’t compose music, but I can use tools like ‘Soundraw’ to create music using artificial intelligence. Now, the AI has been trained on lots and lots of music. So if I say I want a five minute piece to go behind a video that I’m creating and the video needs to be sort of dark and threatening, or uplifting and optimistic, or whatever moods that I choose, the AI has worked out the patterns from lots and lots of other music, so that I can then create things and I can put different instruments in and can have a more melodic passage and a more sort of spiky passage. And so I can, in a very amateurish sort of way, guided by the machine, become a composer. And then that product is potentially my copyright. I’ll come on to that one later, but potentially it’s my copyright, so that then I’ve got that music behind my video and so on. Does that make me a composer of high quality? No. Does it mean that I can get my hands on something that otherwise I wouldn’t be able to because I’d never be able to pay a professional to write the music and record the music for me? Yes. So it’s that element of democratizing creativity and letting more people like me get their hands on some tools and do something which is probably quite average.
Emma:
I completely agree with a lot of that. I think the democratization is a very, very key part. And I’m amused at the concept of niche and expensive, because at the end of this you’d imagine there’d be a whole line of people whose work others were prepared to pay an awful lot of money for. And what’s happening is that people are prepared to pay no money at all now, because “I can get it for free.”
And it’s interesting, actually, what you said about lace making. I’m sure this is the same argument. We don’t evolve much, it would appear, as humans at all. So the same conversations are had around different moments, at different points of societal evolution. It happens to be music now and it was lace then, I get that. What comes with the advancement is an omission of honouring or giving deference to the process that other people have previously gone through in developing their products or services. So at the end of this democratizing, hey, presto, we’d all have the same salary and have no worries, wouldn’t we? Or the remuneration we choose?
Sue:
Ah, well, that is what some of the tech bros want us to get to or think we’re going to get to. So they think we’re going to get to what they call artificial general intelligence. That’s a type of AI that we don’t have at the moment. It would be so good at reasoning and putting different information together that it could think on general topics, on any topic like our brains do. So, for example, if we have a self-driving car today, that’s a very complicated set of different algorithms all put together to try and drive a car like a human would. It’s using vision, it’s using all sorts of inputs, and it’s trying to guess what the next best thing to do is. So very, very complicated, but it can’t write poetry! So it’s still narrow artificial intelligence - it’s just set up to figure out how to drive a car. It can’t then go off and design rockets, or whatever else. It has its narrow purpose. So at the moment we have narrow artificial intelligence. We don’t have general artificial intelligence, but that is the aspiration of lots of people in this tech world that we get to something that is as good as or better than the human brain, which then potentially, according to some prominent speakers on these topics, means that we’d no longer need work because machines can do pretty much everything that humans currently do in the workplace.
Emma:
There’s a whole bunch of problems there.
Emma:
As with most genies, once out of the lamp, they seldom return. Obviously, helping people with life-limiting conditions, e.g. those with few or lost powers of communication, is a fabulous thing. AI here opens doors which can be wonderful. Can you talk a little about such advances?
Sue:
Sure.
Emma:
General human nature is that if something has a good outcome, we all go “Hurrah”. And if it doesn’t go well, we all go, “Who can we blame?”. So if you’re in a desperate situation, any intervention that helps is going to be wonderful. And lots of these medical innovations are amazing. For example, I saw a gentleman in the Midlands who’s just had a speech tool provided. And it’s mind-blowing. He can now enjoy his life in a completely different way. So that’s fabulous.
No-one would say we don’t want those sorts of things; that goes without saying, really. I mean, my friend with ataxia had very poor communication in her latter days. So for people like her, it’s amazing, I think. Generally, would you say people create these things and then sell them to health services, and then they’re used? Or do they have to pitch, for people to try them out? Or is it a whole range of how they get into circulation?
Sue:
Yeah, it’s a whole range. So the health area is really tricky. I’ve known lots of people who are innovators, trying to do really creative things using technology for the health sector. If you’re not already in the health sector, getting your product tested is incredibly difficult. The NHS is just this big jelly(!) and you sort of prod one area and they say, “oh, it’s not me”. And they send you somewhere else and you can spend years trying to get your product to be used.
But I have seen success. I used to be Chair of an organization called the Faculty of Clinical Informatics, which sounds very grand. Clinical informatics is the area where people in health and social care who care deeply about data and technology put the two together for good purposes. Clinical informaticians typically would be employed as chief information and security officers in a health service, or other rôles like that. So they’ve got a good standing in the organization already. And they can then be working with doctors, with nurses, with people in any aspect of health care, social services as well, in order to come up with ideas to test things because they’re already in the sector. In that rôle, I saw all sorts of technology where people were coming up with great ideas, testing things out, getting traction and moving them forward. And that was great.
Just one little example isn’t to do with voice so much, but it is sound. This was where the University of Bath and some health sector researchers did a pilot of putting just a simple ‘Alexa’ device in people’s homes. And this was the homes of elderly people, people with mobility problems. And the theory was that you could use the sound of people moving around their homes to train a model, to understand what was typical for that particular person. And then you could detect when the sound of them moving around their home changed, which might be an indicator that they were more likely to have a fall - which would trigger interventions to try to prevent them falling, like installing more grab rails or more frequent visits by carers. And it was really successful. So, you know, a simple little device, very low cost, training it to understand that sound changes, interventions happen, people are less likely to have a fall. That’s just a really lovely example. So there, it was coming from the point of view of saying “How can we do some real good with this technology?” For me, it’s always that mindset. Where is the person coming from? What is it they’re trying to achieve?
Compare and contrast that with some news that was just announced this week. A new AI model has been launched that tries to unify AI models that understand our speech with AI models that can generate speech i.e. putting it all into one model. Potentially, I guess, it will produce a video or other sound which sounds exactly like me, not just kind of like me, but really, really like me. Not just in the words, but the tone of voice, the kind of nuances of how I would say things, as well. That sounds massive, from the point of view of my being able very quickly to produce a training video, for example. I can do that already with some tools. I use ‘Synthesia’ quite a lot and a few others, which I can use very quickly. But this sounds like it’s going to give this area rocket fuel.
But I immediately go, hang on a minute, thinking about responsible AI, about the seven pillars of responsible AI, and one of them is privacy. Some tech giants have huge amounts of data on all of us. So are they protecting our data? Has this model been trained on a diverse group of people? Will the model work as well for people in the Global South, thinking about diversity and fairness, if it wasn’t trained on their voices so much? And what about languages other than English? Is it really biased towards English speakers? Because the whole of the world, with the way that we’re treating technology and the data at the moment, is really becoming very biased towards native English speakers.
Emma:
Absolutely.
Sue:
So you immediately go, wow, great, but hang on a minute, possibly terrible!
Emma:
How on earth can copyright be defined in settings where AI is used several times for different tasks?
Sue:
There’s a huge legal wrangling going on around the world to try and define where ownership lies, for example, if we are using AI tools. And in various different legal areas, they’re trying to create some sort of definition that distinguishes AI being used lots and lots of times under human direction - to write a poem, to create art, whatever - from output that is essentially the machine’s. So it’s not so much a measure of how much the tool is used, but how much the human inputs into the whole process.
So you might use a tool a huge amount, like a writer generating plot ideas, drafting dialogue, checking for continuity errors and so on. If the writer then hugely rewrites it, selects things, perhaps changes the order of things, then the law is tending towards saying, well, that’s human controlled and therefore can be copyrighted. But if they’re simply stitching together large chunks of output from AI, then maybe that’s not enough human input to say that we can have that as copyrighted material. And you see it all the time at the moment in books, for example. You go onto an online bookstore and you order something and you go, “oh, right, this is great; this is some new book on a topic that I’m really interested in”. And it comes through and you can clearly see that it’s self-published, for one thing (which isn’t necessarily a bad thing... and again, it represents democratization, letting people get their words and ideas out there), but then the second thing is it’s not their words and ideas, and you can clearly see that it’s been AI-generated e.g. there’s a load of repetition.
I’m not against things being generated, but they need to be humanly edited, in order to be actually coherent and interesting and stimulating. So, yeah, the law at the moment is really wrestling with what it is that can be copyrighted. Currently, they’re saying there’s got to be human authorship for something to be copyrighted.
Emma:
Can I just ask, when they’re doing the observational assessment of what has been created by a person and what by a machine, is it a machine or a person doing that assessment?
Sue:
Yeah. So at the moment, there are some tools which purport to be able to tell the difference between machine-generated and human-generated words. I’ve not seen it used with music or other formats, but certainly in words. And I’m just about to start doing some work with the University of Bristol. And there, it’s a really interesting challenge for educators because they’ve had this habit of saying, “oh, we’re going to ban our students from using tools like ‘ChatGPT’”. And they put it in the same box as plagiarism. Well, is it or isn’t it? And actually putting it in the same box isn’t very helpful because the plagiarism detectors that they’ve been used to using for years have been pretty good at detecting plagiarism. But research has shown they’re pretty rubbish at detecting when AI is being used. So they’ll look for certain keywords and phrases, but those keywords and phrases aren’t necessarily good indicators that AI has been used. They could indicate other things as well.
And my problem with all this trying to detect AI use in the education sphere is: why are we trying to detect it? Why are we trying to ban it? Actually, what we should be doing is, first of all, encouraging our students to understand what these tools do, so that they can use them wisely when they go out into the world. So they understand the flaws, they understand the problems, so they can use them with wisdom and integrity. And secondly, we need to change the way we do assessments, rather than just saying, “I’m a professor, I need to know whether my students have used this tool or not”, as I mark their essay. Well, let’s move away from having essays. Let’s have a different way of doing assessments.
There are some guys over in America who are doing this really well. They say to their students, “I want you to use a generative AI tool in analyzing the following information and give me your critique of how the AI tool worked.” We’re teaching them how to use it, we’re expecting them to use it, and we’re doing the assessment on a different basis. So for me, a lot of it is about laziness: we say we don’t want to change the way we do assessments in education, therefore we’re going to ban these tools because they mess up our assessments. No, change the way you do the assessments. So that’s my hobby horse when I get into the university.
Emma:
That’s really interesting. And I think also it throws up the whole democratization of your educational journey, whatever you choose that to be, in that you don’t really need another human, therefore, to have an opinion on what you think. I remember at school asking my Greek teacher “When do we get to write what we think, rather than reviewing other people’s critiques as secondary sources?” She said, “Oh, it’s about your third degree.” And so I just think, okay, that’s a really strange culture that we’ve evolved, that some person somewhere wrote a book that someone else thought was great, slapped them on the back, printed it for them, and it got put in schools. And then we all say, this is the accepted view of something. And now we challenge that. Or maybe.
So my big provocation is that this debate is probably more to serve the really organic creatives who can’t help ‘producing stuff’. They want the process. They want to experience the trials of the creative process, however that is. You know, people who perhaps devise on stage or people who, almost in a therapeutic way, create stuff that’s uncomfortable, but shows them producing it and they want their whole process to be seen. For them, the AI land is a complete horror show. And that’s what I’m hearing from people that write and colleagues that create slightly ‘out of the usual’ kinds of presentations. They’ve worked out its relevance and it’s largely not for them. Yeah, it can create a light show that they might dance in front of, or something. And they love that. But the concept of its dominating their creative thinking and occupying their mental space is just outside of what they can inhabit. And that’s, I think, what’s creating a lot of uncomfortable ripples, really, at the moment.
Sue:
Yeah, I get that. When I was the Chief Exec at the Community Foundation, some of our grants went to various mental health organizations. They might be charities, they might be community groups, it could be all sorts of different formats. And I remember being invited to go along to one group who had been supported by grants from the Foundation. And the leader of the group was just doing a gig on one of the boats floating in Bristol Harbour. I think it’s called the Grain Barge, so it’s all about beer and so on. And they have a bit of a music space. So he said, “Oh, come along, come along, see the latest gig.” And he said, “Oh, you might find the music a bit uncomfortable.” And so it was - it was really dealing with mental health issues and lots of the lyrics were pretty tough to listen to. But his therapy in creating that music was phenomenal. Now, did he make money out of doing it? No. He scratched a living really by running this mental health charity and helping people through music. So does AI take away from that? No, that could still carry on. And maybe in that future with artificial general intelligence, if we ever get there, he’ll have more time to do that. And he won’t have to worry about money and earning a living and paying the rent because society will have taken care of that.
Emma:
That’s interesting. Yes, I don’t know.
Sue:
So does it free people to be more experimental and to use whichever tools they want, if they no longer have to earn a living from doing it? So maybe that’s the flip side of it.
Emma:
There has been a huge reaction to the proposed changes to copyright law, and several creators’ unions have put up an excellent defence of the existing frameworks. Artists yet again feel as if law-shapers have little interest in, or understanding of, what they do. How can we encourage administrators to respect and understand creatives better?
Sue:
There is this sense of trying to convince large chunks of society that the creative process, as well as creative output, is useful to society and should be valued. Also, how do we get those in power, those who make decisions - whether that’s about giving out grant funding or tax breaks or whatever else it might be - to respect and understand the creatives? Having spent a large chunk of my career in and around government relations - the lobbying world - I can say that if we try to get the people who are ‘the target’, as it were - the legislators, perhaps the government and the civil servants - to come to our side, it’s very difficult. It very rarely works. What works in lobbying is when we go to their side. So at the moment, this government’s got a really strong focus on innovation and growth - and a thing like the purity of a creative process doesn’t resonate with them. It’s not on their agenda. But if we said to them, okay, let’s imagine that 50 years from now there’s been no creative process, nothing going on for the next 50 years, they would see how damaging that would be and they would say, oh, no, of course we value having a brilliant photographic sector in the UK and we value at the same time still having fine artists. So yes, there’s room for both.
Emma:
I don’t think that the creative sector is much convinced that it is valued at all by the decision-makers. From what I hear from colleagues, they are worried and they can’t see how AI relates to their lives. And as a result, they’re just going to step away and they’ll become even more excluded, which is a really sad thing to see.
Sue:
Yeah, I think with this government, the big push is all about growth, but they do still have that underlying sense of trying to create better social justice. So if somebody can tie in what they do or what a broader group of people do with making the world fairer, with some sort of elements of social justice, then that can help get it on the agenda.
Champions and networks are very good when we’re lobbying. That ‘Make It Fair’ campaign around copyright and AI has been really good at highlighting certain people who are very well known in the creative sector, showing that this is part of their mission. And then that helps people who aren’t well known as well.
But yeah, it is about us going to them and trying to make what we do appeal to them and seem relevant to them. And there is some hope there. There’s an outfit which is funded by the government called Innovate UK. And they have a programme called ‘Bridge AI’, which gives grants to help organizations figure out how to use AI, doing research and experiments and so on. They picked out four sectors that they’ll fund and one of those sectors is the creative sector. So there is some identification there that the creative sector is a really valuable asset to the UK. They’re probably thinking more about Pinewood Studios or something, but it’s in there. And therefore, there’s some funding to help that sector figure out how to get to grips with some of these tools and what they might use them for.
So, yeah, it’s very tricky. And I guess I always start when I’m lobbying, rather than saying “It’s them and us”, just saying, “Well, how can we have common cause and how can what I’m arguing for be part of the solution that they want?” But yeah, it’s not easy when you’re miles apart.
Emma:
And I think creating a road where you can both walk alongside each other is indeed a starting point, but that’s rarely done. I know that some invitations to MPs to create more awareness of the sector have not been followed up. So from that viewpoint, there’s a sense that things are happening and they’re going to happen anyway - and you can either jump on or jump off. Colleagues have expressed to me that they feel very affronted that it’s being imposed; they didn’t ask for it and they can’t see how to join the party.
In the lockdown, we were in an emergency where, left, right and centre, some colleagues were actually not able to buy food. And I observed people suffering with their mental health like I have never before witnessed, in professions like mine. In an emergency state, they have to wave a flag. And when they wave a flag and no one comes, they then lose faith. So I think that’s why the reaction possibly is even stronger now. It’s as if things haven’t yet been set straight after what was taken away - what was previously working really well - and now the rules are changing again. People are not in a place to be able to hear it, unfortunately.
Sue:
I guess on that one, I would just say: if people can take a step back and, first of all, provide the MP (or whoever it might be - local authority person, whatever) with some information, something that supports the position they already have, something they want to hear, then you can go in later with the hard stuff that they don’t want to hear. From my experience of lobbying and getting MPs in, whether at Bristol Port or at the Community Foundation, I would say that it is about building the network and the relationship first.
Emma:
Of course, of course. But for singers, the concern is that AI removes the need for the live interaction. And I think that’s another huge concern with people. From a small recital to a large venue, there is still a place for people to feel they’re at an event where the physical body and bones of someone were in the same space. Some people are already paying money to watch something that’s not there and, if that is going to be the future way of consuming creative content, then there is no place for live performance anymore.
Sue:
Yeah, yeah, yeah. I’m not convinced we’re going to be getting, in the next 20 years, to a place where there’s no place for the live performance. But I do worry that we’re going to make live events something that is for the elite - and that the masses get the thing where you just put on a headset and it’s all synthetic and you never even go in a room with other people. So that’s my worry.
And it’s the same with coaching, actually. There are various coaching tools that are out there now where the sales pitch from the companies is that this is democratizing coaching - because I can have a little pin on my shirt that is listening to me all the time and it’s giving me a bit of feedback to say, “Oh, when you were talking to your husband today, you weren’t very kind when you said something or other” and do better, basically. So it’s constantly nagging me to do better. So that’s one element of it. Another element is an executive coaching. There’s a big market for executive coaches. Some people get to a certain stage in their career and they decide they want to be an executive coach. So there are people making money out of that, out of training the coaches, I mean. And there are tools, supposedly, that can do that sort of rôle. And what I worry is that we’re going to have, on the one hand, democratizing it and making this sort of coaching available for everybody at a very low price but, on the other hand, saying “Oh, if you want the real human to human contact, that’s only for the elite. You’re going to have to have a lot of money if you want to have that.”
Emma:
That has to happen, for people to still have a living, doesn’t it? So for every area, this is the bit that you can buy and have a little part of. And everyone’s welcome to have that. And if you want the real deal, you’ve still got to go to this. So I don’t know if that economic status ever really changes.
Sue:
Yeah, yeah. And I guess it depends on the quality, doesn’t it? So if we think about coaching business leaders, is the quality of the output that they’re getting from the AI-backed system good? Or is it really substandard? They may be thinking they’re getting good coaching, which is only one step down from the expensive stuff you pay for. But actually, perhaps it’s pretty average and not really worth having. We don’t know. Time and testing hopefully will show us.
Emma:
Really interesting.
Emma:
Where does liability lie in, say, medical situations where a procedure is carried out by both human-led and AI-framed processes? Medical liability is quite tricky. But say a procedure is conducted and there is someone enacting it, but they’re guided by input from AI, or they use tools that have AI as part of how they operate. How is the liability then discerned?
Sue:
Right. The EU was going to do some really helpful work on this. They were going to put together a Liability Act. But with the advent of new arrangements there, they’ve stopped all work on it. So it’s just been canned this year, which is really interesting. We’ll see. So what we’ve got at the moment is just the general principles for doctors, healthcare professionals, hospitals - they have a duty of care towards their patients. If they rely on AI unreasonably, or perhaps incorrectly override the decision that an AI tool has offered, or just fail to understand what the limits are on an AI tool, those are all areas where the existing laws would cover it.
In other words, we don’t have to treat AI separately. It’s quite interesting; AI tools in the healthcare sphere are more often than not treated under the same product liability and medical regulation frameworks that we already have i.e. medical devices regulation governs them. So in fact, healthcare in some ways is much clearer than other areas of society, where it’s not clear where AI fits in.
Emma:
Okay, right. Brilliant. So in fact, that’d be a good place to start if they’re looking at other areas: to look at how the regulation’s been constructed and where the duty of care may fall down or where overreach would happen.
Sue:
Safety, quality standards, testing - all those things are really standard in healthcare. So you don’t just release a tool because, hey, this might be fun; you’ve got to go through a lot more testing first.
Emma:
If we lose the need to do things ourselves, is that progress or laziness?
Sue:
So you say, “Okay, how do you earn a living? What does society look like in a world without work?”
Emma:
Bored!
Sue:
Yeah, well, maybe or maybe not. There’s a short story by E.M. Forster called ‘The Machine Stops’.
Emma:
I haven’t read it, actually.
Sue:
It’s really short. And it’s a fascinating read for anybody who is interested in this area, because it was written, I think, in 1909, before the First World War. And it foresees the iPad. It foresees touchscreens. It foresees Zoom meetings. It’s just immensely perspicacious about technology and different ways of communicating. And it foresees so many different things in a world where there is no work, so people find other ways to fill their time. It’s very much dystopian, because it ends with the machine deciding to just stop giving the humans air and food and all the rest of it. So they die, basically, apart from a few who’ve escaped the machine. So it’s kind of foreseeing this world of control and domination.
But potentially, there is a world in which, whereas we’ve built our lives around employment for the last, whatever, 300, 400 years, we go back to a world where we’re actually spending our time not just trying to subsist and live, but to fulfil ourselves in other ways. So some modern day leaders have this kind of utopian vision, but they never answer the question about, okay, how do I actually get food? What transaction takes place so that I can get food? At the moment I’ve got this transaction of my labour giving me income; I then hand over some of my money to a shop and I get food. What is the world without work going to look like? They haven’t answered that question.
Emma:
That’s really interesting, especially what you said about where to get our food from. The subsistence thing, where craft came out of need, probably had some interesting moments. And I still feel that people go back to those things when they want to find something that they can’t quantify but that’s missing. It’s like: I need to go and make a thing, or touch some soil, or grow a veg, or whatever. So that is not just because I’m starving - I mean, it’s more important if you’re starving - but there’s something in us that does need to have a basic purpose, not even a fancy, philosophical purpose. And I think the balance between that and value and money and reward is very, very hard to quantify. I’m intrigued as to how it could change.
Sue:
Yeah. And so we may get into a new era. Some people say artificial general intelligence is two years away or five years away. Some say it’s 50 years away. Some say it’s never. There are lots of debates and discussions amongst the experts, the tech experts, about when this might happen, if it’ll ever happen. Supposing it was five years in the future, then we are potentially in a world where we can make those choices. If I decide that going and digging a vegetable patch is just what fulfils me, I’ll be able to do that because I want to, not because I have to for food. So potentially, then, that creativity is opened up in all of us because we have so much choice about how we spend our 24 hours a day; whereas at the moment, a lot of people are in a little box that says, you know, nine to five, got to go out, got to work, got to do the following things. And we don’t really have much choice. But, you know, there are so many questions then about what that would mean for children and young people.
Emma:
I would suggest that those who are already self-employed, or have been for a while, will fare best - that is, if they have managed to survive the last five years or so. As they live very differently from that nine-to-five box, they have already made that choice. Perhaps people will come to them first to understand how to cope with any new ‘unencumbered’ world.
Emma:
How can online communication remain meaningful and trustworthy in the shadow of manipulated images and words? What would be your main concern or fear?
Sue:
The fear is very real, and it potentially leads to all sorts of harm. And it’s here; it’s what’s happening now. There’s a woman called Rachel Coldicutt, who put it brilliantly. She said we need to make sure that AI benefits “8 billion people, not (just) 8 billionaires”. And that’s it in a nutshell for me: the fear that the tech bros and the leading politicians make all the decisions about what is good for us, and we just become takers of the technology - very passive, rather than having human agency. So that erosion of human agency is what really worries me. And that’s my motivation for what I do: to try and give people knowledge and understanding so that they can keep control of some agency.
Emma:
That’s really interesting. Progress or laziness, that’s an eternal debate.
Emma:
How much do you use AI personally and/or for your work? I mean, you obviously use it for work - your work is AI. And I’m sure you have many tools that you use to make life quicker and easier. You’ve mentioned some of them already. What percentage of your - what word should we use? - exchange, shall we say, in an average week, month, day or whatever, is interactively organic? And what percentage of it is processing through streams which then can present you to the outer world?
Sue:
Yeah, I guess I haven’t thought about percentages. I do it on the basis of what I enjoy doing, and I still do those things in the human way, because I’m enjoying myself. And the things that I don’t enjoy doing, I automate away as much as possible. So a classic example would be: I enjoy pretending that I can do things with graphics. So there’s a tool called ‘Canva’. I don’t know if you’ve experimented with it at all.
Emma:
I have heard of it, but I haven’t used it.
Sue:
Yeah, it’s really lovely if you want to create presentations, proposals, a LinkedIn carousel, anything like that. It just creates them really easily. And it’s fun. And I enjoy using it. So there is some AI sitting behind it, but actually lots of things that aren’t AI, they’re just simple use of graphics. So I play with ‘Canva’ a lot. If I’ve got to do a proposal for something, I make it bearable because I don’t like doing proposals. I make it bearable by using ‘Canva’ because it’s more fun. So I guess I’m using the tools to make life fun.
And then there are other things that, as I say, I automate away. So I hate booking appointments - that whole sort of, “Are you free at two o’clock next Tuesday? No, I’m only free at 1.15”. So I don’t do that. I have a tool which is AI-based, which basically organizes all the tasks I have to do. And then as soon as somebody puts in a webinar, a meeting, whatever it is, it’ll reshuffle all the tasks. So I still get the important stuff done, but I haven’t had to sit there and go, well, I’m going to have to do this on Tuesday, not Monday now. So I love tools like that.
Emma:
How much do you use your voice for your work and do you have any regimes to help look after it?
Sue:
Okay, I probably ought to. I do a lot of webinars, in-person presentations, workshops, keynote speeches, and then, of course, teaching. Some of the sessions are executive education sessions and the longest I do would be four hours online, which is quite intense.
Emma:
That’s a lot.
Sue:
Yeah. So one on Monday, for example - that was four hours, from two till six. There’s so much content to impart that the danger is that you fall into just talking all the time. But of course, this is about other people sharing their ideas as well. So that gives me some times when I can have a break and let other people go with the flow and talk, or make some notes, do a bit of case study analysis, that type of thing. So I try my best to give myself and my voice some breaks.
But I am very, very aware that whether it’s winter and it’s cold, or it’s spring and it’s hay fever and so on, if I get the post-nasal drip going and I start coughing, then I’m not going to stop for five minutes. And you can’t really just say, you know, I’m not going to speak for five minutes! I’m aware that it’s not a great regime or a great setup, and that I need to do more.
Emma:
Do you still sing?
Sue:
Only in the car.
Emma:
Doesn’t mean it’s not worth hearing. It might be great! I’ll have to listen sometime, get in your car and have a listen!
Emma:
Anything else you want to say?
Sue:
I guess the key phrase that I always use with any group I’m talking to about AI is “just because we can doesn’t mean we should”. There are huge things that we might do and there’s very little regulation to tell us what we should and shouldn’t do. So it’s up to all of us to have that phrase in our minds, to make ethical choices about whether and how to use AI. And behind all that, of course, is education. If we don’t understand these tools well enough, then we can’t make choices, wise or otherwise. So no matter where you might stand on the spectrum - thinking these tools are the devil’s work, or the best thing since sliced bread - we’ve got to become AI literate to make those judgments.
Emma:
Yeah, and I would say also that we need to become literate full stop, because whole pockets of the country still can’t write or add up. There’s a long way to go - there are gaps everywhere. It feels to me as if the speed at which the ball is rolling is exciting, and never really seen before, in the way that things are evolving and changing. And that’s really, really good. But as you say, it needs supervision, observation and some sort of context. And people need to understand what they’re doing. And I think at the moment, because of that speed, they feel very disenfranchised, as they don’t really know the extent of what’s happening. That’s crucial, I think. I don’t know if it can be brought in as a key subject in education - but I’m sure it could be.
Sue:
Yeah, I mean, it’s a frustration that whenever a UK government (of any era) puts out some sort of pronouncement on AI and technology and so on, there’s never much in there about AI literacy - just a general thing about putting some funding into PhDs, or telling teachers to do something in the classroom. There’s not enough funding behind it. Therefore, the vast majority of people still don’t understand what these tools are, or how they work. Going back to that human agency point, people can’t take control - they can’t opt out, for example, of how these tools are used - because they don’t understand them well enough. So yeah, that’s the big drum that I beat.
Emma:
Anything else you want to bang a drum for?
Sue:
No - but it’s been really, really great to have the opportunity to air grievances and to put different points of view as well.
Emma:
I think my readership is mostly people to do with voice, obviously, but voice from many, many angles. So the interaction aspect is quite key. The liability aspect is key. And agency and ownership are really, really key for people. And I think, as you say, with everything, when it’s not understood, that’s when people are fearful. And the fear is being expressed to me from all kinds of corners, from colleagues in all sorts of areas - perhaps people are already using bits of AI, not understanding that they are, and it’s actually helping them. And also the fact that they feel they have no voice to express questions about things they don’t know - that’s the biggest issue, I think. So this interview, I hope, will do quite interesting things. Thank you. I’ll go and maybe put some special machine onto it, to disseminate it as quickly as possible. Or maybe even hand-write and listen myself. That would be exciting for a dull afternoon.
Sue:
There is actually a website called ‘There’s an AI for that’.
Emma:
Of course there is!
Fabulous. Okay, well, that’s kind of covered everything that was on my list of questions. Thanks so much, Sue. It’s been brilliant. Really good.
Sue:
Yeah, and good fun. And we’ll keep talking.
Emma:
Absolutely. All right, take care. Lots of love. Bye.
Sue:
Bye-bye.
It has been wonderful to have Sue’s take on such a current and divisive topic and I am very grateful to her for exchanging her ideas and thoughts with mine, in contexts pertinent to each of us.
There is a great deal to consider in the developments which AI brings - and I hope this interview will spark ideas in its readers.
17th. June, 2025