Working Well with AI - Episode 5

In this episode, host Professor Rose Luckin (UCL Knowledge Lab, UCL IOE) speaks to Professor Jack Stilgoe (UCL Department of Science & Technology Studies, Faculty of Mathematical & Physical Sciences) and Dr Molly Morgan Jones (British Academy) to explore what kinds of research are most needed in the AI space, and how we should go about filling the gaps.

Episode 5 - Transcript

Rose Luckin
Hello, and welcome to the UCL and British Academy podcast series: Working Well with AI. I'm Rose Luckin, Professor of Learner Centred Design at the UCL Knowledge Lab. In this podcast series, we're exploring how artificial intelligence, AI, is changing the world of work. AI has long been predicted to reshape our working lives, and it has developed in leaps and bounds over the past decade. And as we emerge from a global pandemic, we're rethinking how we work, what sort of work we value, and what we need for the future.
In this episode, we will be reflecting on the AI and the Future of Work Project, which has looked at issues with AI in work, including its impact on good work across sectors and scales, and on who can be disenfranchised as AI technologies become more prevalent in workplaces. So, this is the last podcast episode, and it's only right that we do that reflection. We'll be discussing the kinds of research that are most needed in this space, and how we should go about filling those gaps. So it's very much about trying to look across what we've discussed in the previous episodes, pull it together, and then see what else needs to be done.
And I'm delighted to say that I have Dr. Jack Stilgoe from UCL and Dr. Molly Morgan Jones from the British Academy here with me today. So Molly, could you kick us off, please, by telling us what interests you most about AI in the workplace?

Molly Morgan Jones
Of course, and it's such a delight to be here today to talk about this issue. So I'm the Director of Policy at the British Academy. And I think my interest in AI and work goes back to one of my core interests in the area of AI overall. On the surface, it's an area that's easy to think about as just this sort of new technology, new innovation. Artificial intelligence sounds quite techie. But there's just so much interdisciplinary thinking and research which needs to feed in to make sure we maximize and harness the opportunities of AI in society, but do so responsibly, equitably, inclusively. So my own background in science and technology studies is quite interdisciplinary. And as we'll come to talk about over the course of the episode, the disciplines that the British Academy represents across the humanities and social sciences can feed in in so many unexpected, yet really wonderful ways to this topic.

Rose Luckin
I couldn't agree more. And I think it's something that often gets overlooked. We think of AI as being just about the technical aspects, and it's so… It's not! So I think it's a really important point that you make there, Molly.
Jack, what is it about AI in work that you've found most interesting?

Jack Stilgoe
So I guess the sort of gateway drug for debates about AI is often the sci-fi idea of, you know, a robot doing everything a human can do, and more besides. And I think, like many people, I share the sense of excitement and threat that comes with it. And when you think about that in the context of work, you think, you know, are humans under attack from their robot creations? There's a sort of Frankenstein aspect to it!
But the more that I've got involved in the AI debate, the more I've realised that actually, you know, the sci-fi story is a bit of a distraction. What we have here is a technology that, yes, is overhyped, but it's a technology that promises to deliver enormous power. And as a social scientist, my question is, well, who will have that power? How can we ensure that with that power comes responsibility? In the context of work and the labor force, which means a huge amount to people in ways that go far beyond just giving them money in exchange for their time and their labor, you know, how can we ensure that we keep a sense of what good work is? And how does that sense of good work play into the technologies that we create?
There's a much longer history of automation and work that goes back, you know, to the industrial revolution, and I think we should be aware of that. There's nothing particularly new about AI as it applies here. But maybe there's something about the speed with which AI products can potentially arrive in our lives that gives us particular challenges for policymaking and for our own understanding.

Rose Luckin
That's really interesting. I'm not sure I agree about there not being something particularly different about AI. But that's not an issue to pick up on at the moment. But it is very much that issue of power, isn't it? And people recognizing the responsibility that comes with that power and trying to make sure that people experience good work. Because it is about more than just the money that you earn at the end of the day.
Okay, so what kinds of things are researchers currently looking at? We've certainly explored the research that's been happening over the past few months. But as we're reflecting today on some of the things that we found out through the AI and the Future of Work Project, especially when it comes to this existing body of research, I'd love to ask you, Jack, if you could give me a brief tour of the AI and work research landscape as you see it, please.

Jack Stilgoe
Yeah, thanks Rose. So one starting point might be studies like… There was a very prominent study that came out a few years ago that aimed to assess which jobs were most at threat from the arrival of artificial intelligence, right? And I guess the headline there was, it wasn't just, you know, "the robots aren't just coming for working class jobs, they're also coming for middle class jobs". Right? The pop version of the headline is that if you're an accountant or a lawyer, you know, artificial intelligence is coming for you. And that sort of discussion maybe wakes a few people up. But I think we're realizing, with the full range of social research around artificial intelligence and work, that the debate about AI and work is far more complicated, right? The question of whether robots are coming for our jobs is a bit of a distraction. Actually, if we look at the ways in which artificial intelligence algorithms, you know, more simple algorithms or older forms of robotics, are getting involved in the workplace, you start to see not a replacement of human labor, but displacement in various ways.
So, humans end up doing different sorts of things, which asks different things of those humans and requires different skills from those humans. It's not just that we go from a position of employment to unemployment. And I guess one of the ways in which we can observe that in the politically sharpest terms is around the gig economy. Right? The application of algorithmic forms of management to types of work that have been around for a while, which might be types of work like, you know, delivery of people and things to particular places, or logistics in an Amazon warehouse. The addition of algorithms there provides real challenges for protecting workers' rights in those contexts, and protecting good work in those contexts, right? There's that sense that if your manager is an algorithm, you can be ground down quite quickly, and also exposed to various forms of precarity that may be worse than they would be if your manager was a human with all of the sort of empathy that we might expect of those human beings.
So, for me, it's not about the replacement of human labor as much as it's about the displacement and the reconfiguring of human labor in these places.

Rose Luckin
Yeah, I couldn't agree more. Displacement and reconfiguration is a really good way of putting it, and it comes down to that relationship between artificial and human intelligence, and how they can perhaps coexist and work together. But also the way in which AI can really have negative impacts on the nature of somebody's employment. And as you said, you know, relationships such as the one you have with your line manager: suddenly your line manager's an algorithm. And actually, that's really hard!
And I think that picks up nicely on what you said at the beginning, Molly, about the importance of humanities and social science approaches. I've been in numerous meetings about AI where the people who are building the next generation of AI, or whatever it is, are super enthusiastic, and they know they can build the technology to do X, Y and Z. And then it's about, "Well, yes, but how might that work for the people?" You know, that's the importance of bringing in humanities and social sciences. So I'd love to hear more from your perspective, and the perspective of the British Academy, with respect to those humanities and social science approaches, and where you think those disciplines can make the biggest contributions to these debates, which are extremely important for society.

Molly Morgan Jones
Yes, absolutely. And so I'll cover a few areas, and I'm sure I'll leave some out. But to start with, the humanities and social sciences are actually, as I'll do throughout the podcast, what we've started to call the SHAPE subjects, referring to Social Sciences, Humanities and the Arts for People and the Economy. It's a really nice complement, we think, to STEM; the two work really nicely together, hand in hand. And so I think they can contribute in a couple of different ways. SHAPE can help us understand the impacts of changes to where, when, and how people work. And again, I think one of the things that's really interesting, and Jack, I think you were touching on this, is that for me, AI is a term that covers a whole range of things in itself. But it really starts to spread out into other areas of data, and how data and the digital economy and the digital society are starting to create changes in the way we live and work. You know, we just have to look around over the last 18 months in a pandemic. Obviously, we're doing this podcast from three different places, when two years ago we might have been in the same room. We've seen an uptick in remote and online working, learning, and communities, and all of that is going to have implications for the rollout of AI in a whole range of different settings, as well as just more kind of virtual ways of going about our lives.
I think SHAPE can also help us understand intersecting inequalities that might be coming into play. And again, we're starting to touch on this a bit more, but providing insight into the groups that are most negatively impacted by some of the changes to work. And Jack, I liked what you said about displacement and replacement and the distinction there: which groups might benefit, but also, actually, where you might get individuals who face multiple different inequalities that we might need to be attuned to. If someone is disabled, and a woman, and a carer, and comes from an ethnic minority background, what do all of those different factors mean? And how do they come together? And that's again where you can bring in insights from a whole range of disciplines across the SHAPE subjects to try and understand what the individual factors are, but really importantly, how they all might come together.
I think the SHAPE subjects can also really be integrated into how we think about the ethical and legal aspects of AI and new technologies. So, the technical understanding of how AI works is important, and we're funding lots of researchers who look into these things. You know, we've got one project where the postdoc is looking at "AI and the right to reasonable algorithmic inferences".
Which I thought was really interesting; you've got to think about it a bit. But, you know, it's really trying to understand: what are the ways in which we think about the algorithms which feed into AI? And what does reasonable mean? And how do we make sure that the ways in which we're developing the technologies are taking into account a range of circumstances? You know, again, how people adopt and understand these technologies?

And then again, you know, the humanities have an important role to play here. I didn't know until recently that there's an Oxford Handbook of the Philosophy of AI. And that's just one way we could also look, you know, historically, at different changes, different sorts of inflection points in society, and then think about how that's impacted on work.
And I want to come back to this idea about good work, because I think there's a link here as well to thinking we've been doing recently in a completely different programme, about the future of business, the Future of the Corporation, and this concept of purpose. And I think there's a really interesting intersection here between AI and the purpose of business in society, which I might come back to in some of our later questions. But if we're thinking about businesses that are operating with a purpose, and we're moving towards a paradigm of a different way of thinking about the role of business in society, how do we think about AI both helping with the social purpose of business, but also not creating more problems by bringing that into the mix?

Rose Luckin
I was very struck by all of what you said there, Molly. But you mentioned the ethical and the legal, and I wanted to come back to Jack, because your example of gig economy workers has really pushed that debate about ethics and legal frameworks. How do we get this piece right?

Jack Stilgoe
I'm not sure we can get it right. Right? Because I think part of the issue with this is that these things sort of resist easy fixes. I think if you're in the AI industry at the moment, you would love there to be a way that you could fix some of those things, right? That you could find technological solutions to those problems. And as Molly says, those problems might be ethically really problematic ones, to do with, you know, inequality, bias, discrimination of various sorts that we already know AI systems engender in the world. But those things are ultimately political problems. This is why the social sciences are so important here, right? These things are not problems that can just be explained in engineering terms and then, you know, solve for X. These are political dilemmas, where we need to articulate new forms of language, new forms of political debate, in order to work out what is at stake, what the value conflicts are, and have those discussions.
So I mean, for me, I think there's been rather a lot of emphasis on AI ethics, and actually relatively little on AI politics. And I think it's the politics that stand to be so important. You already see it, you know, in the way that world leaders approach the question of AI and recognize that it's going to be a source of enormous power in the future. And how that power plays out is going to be hugely, hugely consequential. So I think, yeah, articulating those political dilemmas is a real task for social science. As is, you know, doing the fine-grained work on explaining what AI is actually doing in the world. And also what it is, right? What actually constitutes AI? So a wonderful new book that came out relatively recently is Kate Crawford's Atlas of AI. She's been involved in the AI debate for a long time, and, you know, she shares this frustration with a rather speculative conversation about AI. And she says, "No, let's actually look at what AI is. What makes it up? What materials go into developing AI systems? What's its carbon cost? You know, how can we make it more sustainable? And also, what are the people doing behind the scenes, right?" So often, we pretend that AI is just a computer in a building, right? It's not; it's got a huge range of what some have called ghost work, right? Hidden labor going on in order to make our AI appear sort of magical. So there's a slightly sort of Wizard of Oz quality. And I think the other thing that social science can do is really explain to the world the actual labor that goes into making AI. So this isn't AI's impact on work; it's the work that is required in order to make AI. It's the other side of the curtain, if you like.

Rose Luckin
Really important, and thanks for bringing that book up. I remember visiting Kate and seeing a wonderful poster illustrating some of that with respect to a particular AI device. And it's a really important point.
And I think the three of us understand the importance of interdisciplinarity when it comes to looking at AI and doing research. But just as a final question, I'd like to come back to Molly and say: I know from years and years of experience how hard it is to bring different disciplines together. You have lots of conversations, and you think you've understood each other, and you could have been talking for three years, and you get to the end and you think, "Oh, you were talking about something different!" Or you've been using the same words that actually mean different things… So I just wanted to come back and say, you know, how do you overcome those challenges of really trying to bring those disciplines together?

Molly Morgan Jones
It's such a great question. And I'm just smiling because I've been in the same situations. I mean, I think it's a constant process. Um, and there needs to be a real awareness by everyone joining in that there will be very different disciplinary backgrounds and framings that everyone is bringing to the table. And you've really got to keep an open mind and not be afraid to say, "I'm sorry, I don't understand that word." Or probably more importantly, "You've just used that word there", and a good example is escaping me, but, "that means this to me; what does it mean to you?" And that constant, I mean, really almost every five or ten minutes in the conversation, just checking in. Because I think it's so easy to make assumptions, or to completely forget about, you know, the very deep and, rightly so, meaningful disciplinary perspectives we each bring to this, and not appreciate that someone else also brings that, but it might be coming from a completely different point of view. I mean, one of the things that I come up against: I have done more qualitative work in my life than quantitative, and what is a meaningful result to me as a qualitative researcher is incredibly different from someone who comes from a statistical background. You know, I might look at five case studies and start to draw out conclusions, and a statistician might say, "What? That's totally invalid, you can't do that." And so we just need to constantly be checking in with each other.
I should also just say, I couldn't agree more, Jack, that AI politics is really important. And I think in a way this point about AI ethics, important as it is, has sort of dominated, and we need to be spending quite a lot of time thinking about the laws, the regulations, the scholarship from a range of perspectives that really needs to start to come into the fold here.

Rose Luckin
Perfect! A perfect lead into the questions that I want us to explore now about future research. So, we've done this project, and we've found out many of the things that are going on. But what should we be doing next? And I think this is likely to pick up on the discussion you've already started, Jack, around politics, and how we start exploring the implications, and how we deal with the issues around AI way beyond just the notions of ethics and legality.
So, Molly, coming to you first, from your perspective, and I think in particular when it comes to informing policy, what kinds of future research projects would you like to see?

Molly Morgan Jones
Um, gosh, I mean, there's just a huge list! I'm going to try to pull out a few to get things going, and then we could perhaps start to develop our collective brainstorm as we go. I think something that would be really interesting, and unsurprisingly it fits in with other work we've been thinking about within the Academy's policy team, is looking at the opportunities and risks of AI and digital innovation more broadly, and thinking about that in terms of governance at different scales.
So, sort of, questions about how citizens and policymakers, and of course business as well, can work together to harness these technologies to promote inclusivity, accountability and participation across a range of areas. In particular, I'm interested in thinking about that in the public policy and services delivery space. I think there's a really interesting question about the role of AI and work in relation to the levelling-up agenda, if we were going to think about different regions and levels of scale, and also across different sectors of society as well. We've talked about it a little bit, but I do think work that explores the intersections of inequalities: digital inequality, poverty, and the digital divide, and how that might impact, you know, the ways in which AI is being employed and rolled out in different sectors and different places. And then who has access to the opportunities that might provide, but also who is most likely to suffer from some of the displacement or replacement that might occur? I'll stop there. I could keep going, but I'll let Jack come in.

Rose Luckin
I'm sure we'll come back to you, Molly. Actually, one question that I think about a lot is this question of accessibility, and whether actually the people whose voices we need most are those who are least able to express them, because they understand so little about AI. I think it's really hard for many people whose lives are being significantly impacted to get a grasp of what this thing is. And as you said at the beginning, Jack, you know, if we start off with this sci-fi vision, how on earth can people understand what it's really about? I think that's a huge challenge. But Jack, I think we're likely to be following up on your point about power. I think that's such an important point.
But if you could grow a magic money tree? Of course, we know there are no real magic money trees! But say you could, you know, in the UCL quad? What sorts of projects would you want to see funded?

Jack Stilgoe
By my magic money tree? Yes. So let me just start with that point about participation, because I think it's so crucial, especially in the context of work. You know, understanding what workers' participation in the governance and development and deployment of AI systems looks like is so important. And I think it's a particular issue for AI, because so often, you know, AI is highly expertized. It's opaque. It's seen as delivered from the top down, right? And so it's something that is done to people rather than done with them, or done by them. And I think working out what a more participatory alternative would be is vital. And I don't think it's inevitable that AI has to be like that. At the moment, there are a lot of incentives for developers of AI systems to present their solution as sort of magical and opaque. And actually, you know, I don't think it necessarily has to be, right? I think we can make it more explainable, and we can imagine more democratic forms of the technology.
But for me, ultimately, the big research agenda is to do with understanding the drivers of artificial intelligence systems as they are, with a view to analyzing who might benefit from AI. Because at the moment, the risk is that, if deployed as it currently looks, it is a force multiplier for inequality. And I think that should worry a lot of us. So I think a lot of us in the social sciences and humanities should be asking the question of what an emancipatory version of the technology and its governance looks like, right? How can we ensure that systems are used in ways that benefit poor and marginalised communities, and that don't just concentrate resources, you know, financial capital but also social capital, within a very small number of organizations that own the data, the intellectual property, or even the human resources to develop the systems? I think that's a really hard question, right? For some of the reasons that we've outlined: things are developing really, really quickly, they are hard to understand, and the debate is currently being controlled by very few powerful people. There's a geopolitics to it, which means there's the presentation of this sort of race to develop AI, and that means that anything that offers a more responsible alternative is seen as somehow slowing down a country's innovative capacity, which governments don't like. But I think that's the debate that we have to insert ourselves into, right? That debate about what an emancipatory version of this technology might look like, and how we can get there.

Rose Luckin
Such a good question. I love it! An emancipatory version of AI. Molly, would you like to come in on that? I feel sure you will.

Molly Morgan Jones
Well, I mean, I do think it's a really, you know, critical agenda. That topic itself just cuts across so many areas of policy. How do we get there? What are the mechanisms? What are the ways, and what spaces do we want, to have more inclusive discussion and participation in the way we develop policies across a whole range of areas?
I mean, one of the things it was making me think about too, to come up a level and then come back down, is getting the balance right between some of the more short-term policy and research priorities, and the longer-term ones. So, as we come out of the pandemic, AI is starting to be used in more places, across more sectors, at different levels. How do we also think much more long-term, about the two, three, five, even ten-year challenges ahead, and make sure we're not at a point in ten years wishing we had been thinking about them now? And it's not so much about crystal ball gazing, but I think there are things we can be asking ourselves now that we might need a longer time to think through. I mean, one might be: are there some sectors of the economy or of society that we don't think AI has a place in, or where it has a different place than perhaps in other sectors? Arts and culture is quite an obvious one that might come to mind. But even in those spaces, you know, is there a balance to be struck? So it's about starting to have those discussions, which do have to happen, I think, openly, transparently, inclusively, across all different levels of society. And we sometimes use dimensions of place, scale and time to think about some of our work: thinking about things in different places, at different scales, and across different time periods, both backwards and forwards. And so I think that really speaks to what Jack was saying about inclusive participation.
And also that we cannot walk blindly into this. I love that phrase, a force multiplier of inequalities, because I think, you know, as probably many of us have felt over the past 18 months of this pandemic, and with a range of other things that have been going on in society, we've had to examine our own biases, our own prejudices, our own ways of looking at things, and realise there might be ways of thinking about the world that hadn't occurred to us, and look at them quite critically. I think we've got to start applying that across a whole range of areas, looking outwards as we go forward.

Rose Luckin
Absolutely. And that point that you make about time, I think, is also one that can be problematic in many different ways. But thinking about a more participatory approach, one of the objections that I certainly come up against a lot of the time is, "Oh, but it takes so long." And I often feel it's an excuse, because people don't want to engage with the transparency that would be required to have that truly participatory process. But it is also true: it takes longer. The end result is much better, but it takes longer. So how do we get around that? How do we persuade people that the extra time is really worth paying for, so to speak, so that you get a much better outcome?

Jack Stilgoe
Shall I come in on that one, Rose?

Rose Luckin
Sure.

Jack Stilgoe
Just because I introduced the race metaphor, right.

Rose Luckin
Yeah!

Jack Stilgoe
And as soon as you talk in terms of races, right, making the argument for slowness, or making an argument that might lead to forms of slowness, becomes a politically rather hard thing to do. I mean, for me, you know, you have to recognize first, I think, that AI has no inevitability about it. It is made by people for particular reasons. And once you get there, you can understand that AI might be pushed in certain directions and not others, right? So any claim that there is a race to a finish line is immediately falsified, because at the moment, you know, we don't know where the finish line is. Not everybody's racing in the same direction. And actually, we can probably presume that European, American and Chinese versions of AI are already going to look rather different.

So with that in mind, as soon as you can relax and step back a bit from the metaphor of the race, you can start to think about, well, you know, what does good innovation look like here? And then you can start making the case for responsibility and regulation, not as things that are trying to restrain innovation, but as things trying to steer it in better directions, right? And I think we are at the stage now where governments, companies, academics, we all sort of understand that AI, left to its own devices, is going to cause types of harms that really need to be understood and mitigated. So there is a role for regulation. And, you know, I think it's then just about understanding what sorts of outcomes we want to see, and how we can articulate those in technical and political terms.

Rose Luckin
That's a good way of countering that. Thank you, Jack. That's been a fascinating discussion. We've covered a lot of different areas. But before we close, I just want to come back to each of you, starting with Molly and say: “is there anything we haven't talked about that you feel we really should have covered today?”

Molly Morgan Jones
I guess if I, um, just go back to the very beginning and where we started… It's to say that, though I in particular, and Jack to some extent, have talked about the importance of the different SHAPE subjects and interdisciplinarity between them, I think this is also about the computer science, the technologies, you know, the STEM expertise that also needs to feed into this. And I think when we talk about the interdisciplinary thinking and research and collaborations that need to happen, it's across all of the disciplines together. And so I guess it's just to really drive that point home. Because as we develop new things, and, you know, we want to think about the path dependencies that we want to avoid, or perhaps those ones that we want to stick on, it's going to take everyone thinking about that together, and all disciplines feeding in.

Rose Luckin
Really good points. And I do love the idea of the SHAPE subjects, by the way. When I saw that one, I thought it was a really good way of describing how STEM and SHAPE fit together. Really helpful. Thanks, Molly. Jack, anything from you that you feel we really should have covered that somehow we haven't?

Jack Stilgoe
Well, I need to build on what Molly was just saying. I think the issue, and I think the frustration, that a lot of us feel in the social sciences and the humanities, when we're talking about something like AI, is that, yes, while it's nice to be part of that discussion, there is a sense that it's a discussion that's being framed by somebody else. And so, to the point about interdisciplinarity, right? The work of interdisciplinarity is also a debate about power. It's about which disciplines actually are the ones with the most power to set agendas within universities, you know, and to set policy agendas as well. There is a sense that the social sciences and the humanities are sort of playing catch-up a little bit, following the computer scientists and engineers around. And it would be nice to think about ways of reframing the debate that are sort of social science and humanities first, rather than just, you know, "what are the implications of technology X?", which is, I guess, the sort of common pattern that people like me find ourselves in. And maybe that's just about being clear that, rather than just talking about technological means, you start with the ends, and you think, "Well, you know, where do we want to get to? And how might technology get us there? And if AI isn't the thing, or if AI needs to be reconfigured, then what does that mean?" Right? So that's the sort of much more radical agenda, and it requires, I guess, you know, social scientists being given the keys to the castle, which Molly would probably agree isn't going to happen anytime soon!

Rose Luckin
It would be great, though, wouldn't it? I love the idea of SHAPE leading STEM. Is that where we're coming from? Yeah, I think we are.

Jack Stilgoe
I think we are, yeah.

Rose Luckin
Yeah, that would be great. And starting off with what you want it to be like in the end. That's a great place to finish, Jack, really interesting! I shall be thinking about that now… How to… Yeah, it's not going to happen anytime soon, but it doesn't mean we shouldn't be trying to make it happen!
So many thanks, Molly and Jack. I really, really enjoyed that discussion, and I'm sure the people listening to the podcast will too.

So, our guests today were Dr. Jack Stilgoe, Associate Professor in Science and Technology Studies at UCL, and Dr. Molly Morgan Jones, Director of Policy at the British Academy. Thank you, both of you, and thanks to everybody for listening.

Molly Morgan Jones
Thank you. I loved… that was great.

Jack Stilgoe
Thanks for having us.