From BBC Science Focus magazine, this is Instant Genius, a bite-sized masterclass in podcast form. I'm Alex Hughes, staff writer at BBC Science Focus magazine.
This week we're talking about ChatGPT and its newfound role in education. The artificial intelligence chatbot allows users to generate jokes, website code, essays about complex scientific topics, and everything in between.
With all of this available in seconds from a simple written prompt, there are growing concerns that it could promote plagiarism, misinformation, and cheating in the education system. I'm joined by Sam Illingworth to discuss this issue.
He's an associate professor in the Department of Learning Enhancement at Edinburgh Napier University. He tells me all about ChatGPT and its role in education, outlining why we should learn to embrace it and better understand how it works.
So as ChatGPT grows in popularity and interest, how much do you think it will embed itself into the education system? It's not just ChatGPT, but all of these new artificial intelligence and machine learning tools that are rapidly finding their way into education.
I think it very much depends on the level of education. So to some extent, it might depend on whether we're in secondary schools or primary schools or higher education. As tends to be the case with some of these things, we might see it proliferate first in higher education and then come down, and certainly I'd feel most qualified to talk about higher education.
I think that we're already seeing some, what I would say are knee-jerk, reactions to this: some examples of students maybe using ChatGPT in an unethical manner, but also academics thinking about how we can use it as an opportunity as well as a challenge.
And for me, having ruminated on this for quite a while and talked to colleagues about it, it very much feels like weak, and maybe later, much later, strong, AI is here to stay.
And so actually, why try to fight them? I mean, these are tools that our students will be using in the workforce. So it seems very strange to me to almost say, don't use them for three years, let's just pretend they're not there, and then you can go away and use them.
These are things that have the potential, I think, to reduce workload and to improve efficiency. And our responsibility as educators is to think about how we can utilise them to help ourselves work more effectively and smartly, but also to prepare our students for the workplace and to ensure that they get the most effective, engaging, and rounded education that they can.
So when we're looking at these kinds of tools, do you think it's less about trying to fight them or to tell people, we don't use this, this isn't something we use, and instead about sitting students down and explaining to them: okay, this is what this does, and this is how we're going to use it in an effective way?
I think when we work with our students with regards to what ChatGPT is, it's really interesting to pull apart what it's doing and why it's doing it. And I've heard lots of people say, well, only IT students or computer science students need to know what's going on under the hood, but I don't think that's right.
To some extent, the things that are happening with machine learning algorithms and with weak AI like ChatGPT involve a level of complexity that not all undergraduate students would necessarily have. But I think it's important that we understand what the founding principles are: talk about what it means to be trained on a data set, talk about what the limitations are, talk about what the ethical challenges are.
So for me, I'm not really that concerned about plagiarism. I mean, plagiarism is as old as the printing press; it's as old as the education system. More recently we've had the idea of contract cheating and other ways of plagiarising.
For me, the bigger thing that we need to discuss is the potential bias of weak AI. These things are ultimately trained on data sets. For ChatGPT, arguably the data set is the whole of the internet, but we know that the whole of the internet is not necessarily a nice, and certainly not an equitable, place.
So our students need to be aware, as do our colleagues, that any responses generated are biased by what goes in, and this will continue to proliferate. Take, for example, the dominance of Western science as opposed to other, indigenous forms of knowledge. If we continue to use ChatGPT or other weak AI to generate responses to questions, then because that area, i.e. Western science, has more written about it, has more headlines, more front pages, so to speak,
it's going to continue to do so. So for me, we don't necessarily need to teach our students not to use ChatGPT to plagiarise, because I think they know that. Rather, we need to think about what biases are implicit here, what that means, and how and why we should challenge them.
That's interesting. So it's not so much about issues of plagiarism and cheating as it is about the actual content it's producing. I think that's definitely true. And you know, there are huge drives, rightly so, for diversifying the curriculum, decolonising the curriculum, making the curriculum more equitable. We're just starting to get our heads around that, and there's a lot we need to do to improve it. But the problem is that if we just continue to use things that are reinforced by these old ways, that are colonial and that are systemically racist and misogynistic, then that's just going to proliferate.
With regards to plagiarism, I think it's a much deeper question. It saddens me when people assume that all students do this; they don't. If students are plagiarising, there's normally a reason for it, and as educators we need to understand that. There's been a lot in the past few years about students who have entered into contract cheating arrangements, where they pay somebody to do the work for them, and then those people blackmail them, threatening to contact their university or college unless they pay an extra amount of money. And that's horrific. So rather than throw the book at students, we need to find out why it's happening.
And similarly, there's a cultural aspect here as well. Some students come from different cultures and different countries, where plagiarism is understood differently and expressed differently. And there's a danger that, again, we can be very righteous in the UK and in the West, assuming that our approach is the best. Whereas actually what we should do is try to understand why some of our students, not all of them, might be using ChatGPT to plagiarise, and use that as an opportunity for opening up a dialogue, rather than saying, you've used ChatGPT, we're going to expel you from this course, or rather than assuming from the beginning that all students are going to do this, because that's just not the case.
And interestingly, there's been a lot of talk about ChatGPT being used more as a marking tool, or as a framework for students to work from. I think the same issue then applies on the other side: maybe the work a student does doesn't line up with what an AI sees as the correct answer. That's a really good way of looking at it. And having looked at several examples of work, and several examples of creative writing as well, I think it is sometimes obvious where ChatGPT has created something. I often get it to respond, just in testing, to certain essay questions, and it delivers what I would classify as a good A-level response: the response of a good 16-to-17-year-old student.
So: let's present one side of the argument, let's present the other side of the argument, and then let's have a neat conclusion at the end. Whereas we know that when we go into higher education, we're expecting our students to do more than that. We're expecting them to challenge, to analyse, to evaluate, and ultimately to create as well. And I think that if you are worried about the plagiarism aspect of ChatGPT, there are many different ways in which you can design assessments so that they can't be plagiarised.
You know, we talk a lot in education about authentic assessment: assessment that is useful for students and directly relatable both to their lived experiences and to the kind of work they're going to be doing in the workplace. So we might ask a student to write an essay about a subject which theoretically could be plagiarised with weak AI, but we could put a twist on it by asking them to contextualise it with their own experiences, or with an event that has happened to them in their lifetime.
Similarly, if we were on a module or a programme for which essay writing wasn't appropriate, we might instead think about doing live projects, or projects that involve community work or working as part of a team, for which you just couldn't generate or pre-generate an answer, because the question is so organic and evolving.
Expanding on what you were saying earlier, we mentioned issues around racism and differences between different parts of the world. But the model itself often makes mistakes more generally: if you give it too complex a problem it can misunderstand, and it doesn't have knowledge after a certain time period. There's also, I guess, the issue of misinformation spreading into education: if it's used to teach students, or if students use it to create work, or it's used to mark things, these kinds of errors could easily slip through.
That's a really good point and actually, I think that part of the role of education, and certainly higher education, is to equip our students with the confidence and the skills to challenge.
And I'm not talking about, you know, scattergun conspiracy theories here, but about the idea that you shouldn't take anything you read at face value, and that our students shouldn't just be equipped with facts when they leave university. They should be equipped with the analytical skill set they need to make sense of the world around them, to question it, and not just to question facts but also to question injustice.
And therefore, when our students see politicians, or sometimes journalists, making claims that lack logic or truth, we've equipped them with the skills to point that out. Similarly, when they see responses pre-generated by a machine that are grounded in falsity or bigotry or mistruth, I would hope that as educators we have equipped them, no matter what their discipline, with the skill set needed to question, challenge, and ultimately change.
Recently, the makers of ChatGPT announced that it's going to become a paid-for tool. It's been free for a while, but given the money they've put into it, it was obvious that at some point they'd start to charge. That then raises some questions around equality: if you're telling students that this is a tool they can use, but they now have to pay a monthly subscription for it, that separates out some students.
Definitely, and I think that, for want of a better phrase, the issue of digital poverty, or digital equity, is something we really need to be addressing. On the one hand, we have students who can access this. A little further down, we have students who can pay for Grammarly, for example, which is a fantastic online grammar-editing tool; then students who are able to access, let's say, the pro version of Google's online coding software, students who have the money to access software for which the university doesn't have a licence; all the way down to students who are unable to afford a reliable internet connection or their own laptop.
So you have the whole gamut, and this is exacerbated even further in secondary and primary school education, as we saw during the pandemic. It's all very well saying, "Oh well, students can learn in a virtual environment", but what happens if you've got one laptop between six siblings, with a prepaid internet connection that's not very good, or a prepaid electricity meter that's run out because of the escalating costs of electricity and power in the UK?
So you're absolutely right that this raises questions of digital equity, and I think it puts to the forefront again that, actually, internet access should be a right, not a privilege. People often scoff at politicians who say, "Why don't we make high-speed internet freely available to everybody?" But that's like scoffing at, "Why are you making books freely available to everybody?"
The entire history of human learning is online, and in my opinion it is a basic human right to be able to access it. And I think the debates we're having now around digital equity with regards to ChatGPT strike a much deeper chord. I have to say that the marketing team there has played an absolute blinder: they made it free, got this incredible publicity, and then started charging. But it raises questions that connect to digital equity more generally, and the role that that has in education.
We're talking here mostly about, I guess, younger students, and I think that's where a lot of the conversation about ChatGPT and education is: where students are at the stage of their life where they're learning creativity and learning important skills.
And I think this is also where, whether it's true or not, there's the most concern about plagiarism. If you were to, I guess, jump right up the educational tree and look at, let's say, a PhD essay, or someone doing their master's, where education is more of a choice and where they've already developed a lot of their core skills, that's where ChatGPT in its current format can be the most helpful, in a way, where it's simply doing a little bit of heavy lifting in the background.
It's helping people with the extra bits of work they might need to do on the side. Definitely. I think a very obvious way you can get ChatGPT to do work for you is referencing: you can give it a reference and say, please put this into this format for me. I know there's software that does that within document-editing tools as well, but it's a really neat way of doing it. Likewise, it's a really great way to fill a blank page, and I use it for this sometimes.
Definitely. You know, I'm a researcher, and I'm also a poet, and I find ChatGPT actually pretty terrible at writing poetry. But it's a great creative spark, because if you ask it to write a poem on a subject, most of it is absolutely junk, but there'll be one line in there where you think, ah, there's a phrase or an idea that I can use.
And similarly, if I want to write a literature review, an introduction, or an overview of something, it's a really great way of just getting something on the page. Then the role of the human is to go in and add that individual voice. You know, if someone were to look at my academic output, it would be very obvious if I had just used ChatGPT to write a paper for me, because I have a unique voice, as all of us do: we all have unique written and oral voices.
But exactly as you've highlighted, it's a really powerful tool for doing some of the, I guess, more administrative side of research and scholarly practice, providing of course that we make very clear what belongs to us.
I mean, there's also an ethical dimension here, in that ChatGPT is of course giving a response based on all of the input that's gone into it. So if you were to ask it a very bespoke question, there's a danger that it might only be able to draw on one or two pieces of research, and therefore it's plagiarising them. But if you were instead to use it as a tool to phrase broad questions or explore broad ideas, I think it is a really powerful prompt, and again, not just for scholarly pursuits but as a seed for creativity: not to replace creativity, but as a seed.
I don't think you see this as much with ChatGPT, but it's been an issue with other, I guess, popular AI programs: to produce the work that's coming out of them, they've had to take in ideas from somewhere else. Especially with images, there's been that problem where you see watermarks and people's styles coming through. I don't know what you think about this, but there's the idea that ChatGPT is in itself plagiarising other people. Do you think that's a problem in itself, a sort of self-perpetuating plagiarism, as it takes things and then someone else takes things from ChatGPT?
I think that's a really good point, and I think these are the reasons why, with any tech, there needs to be a serious ethics committee to talk about these things.
So, for example, if you were to make your work freely available, whether written or visual or creative, via certain Creative Commons licences, then it would be fair game, but you'd expect an acknowledgement as well. That's why I think the way in which this data is used, and the way in which the work is presented, is very, very important. And even though some responsibility, a large responsibility, should lie with the people, as in us, the users, a huge responsibility should also lie with the companies: to make sure that, as well as moving away from implicit bias and outright racism, xenophobia, Islamophobia, and misogyny, they take fair use into account and treat people's work in an ethical and appropriate manner.
This is, I guess, the first major iteration of ChatGPT, and of any chatbot that works to this level, but we're still fairly early on in the life of AI. Do you think there's a risk of the technology blending in more and more over the years? Right now, as you say, it writes to a certain level: its mistakes can be quite obvious, and its attempts at creativity can suffer.
At what point does it just start to blend in, so that it's hard to tell the difference? Well, I think it goes back to what we were talking about at the beginning: this is why you can't stick your head in the sand, and why you need to talk about these things and not just pretend that higher education exists in a bubble outside of society. It is society. If instead we talk about the pros and cons, if we use it, challenge it, investigate it, analyse it, and create with it, then that won't happen.
You know that analogy of a frog in boiling water: if you put it into boiling hot water, it'll jump out straight away, whereas if you put it in and just ever so slightly raise the temperature, it'll slowly cook without realising it. It's the same here: we need to be talking about these things; we need to understand what's going on. It's like the internet. Even though, again, some countries do have precedent for this, we can't just shut down the internet when there are exams on because we're worried about students cheating. It's about talking to our students, talking to our learners, engaging in open dialogue, and talking about what the limitations of ChatGPT are.
And about what the opportunities are, and then feeding that back to the creators of ChatGPT to say: look, these are some of the challenges that you need to address, but also, have you thought about using it in this way, as this is something that might have a deeply, profoundly positive impact on education and wider society. So let's zoom out and look at the future.
Let's say five to ten years from now, maybe even longer: what do you see as the relationship between AI and education? How do we address that future, and what should educators be doing?
That's a really good question, and I think it will depend very much on the educators. I mean, let's look at the internet: to what extent do educators sit down in the classroom and say, let's browse through the internet? That's not really what we do. It might have been what happened 30 or so years ago, when the internet was first coming to fruition. But what I see is this: some educators won't have changed, because that's what happens; and innovative educators will be using this as an opportunity to challenge the limitations, sometimes, of assessment in education. What is the purpose of assessment? Do we even need to have assessment in the first instance?
How can we make sure that our students are equipped with the skills they need to enter the workforce, and to be more rounded, effective, and happy citizens? I'd hope that AI, rather than being either a panacea or, you know, the devil in disguise, is just part of that discussion. If I'm thinking optimistically, I think this will enable us to have some difficult conversations about what the role of education, and in particular assessment, is. And in a dream-case scenario,
I'd like to think that in 10 years' time, even though not all of us will necessarily be explicitly using AI in our assessments, the opportunities and challenges that AI has presented us with will have led us to create new ways of learning that are more equitable, more engaging, and ultimately more authentic.
Thank you for listening to this episode of Instant Genius. That was Sam Illingworth, talking about how ChatGPT will affect the education system.
The Instant Genius podcast is brought to you by the team behind BBC Science Focus magazine, which you can find on sale now in supermarkets and newsagents, as well as on your preferred app store.
Alternatively, you can come and find us online at sciencefocus.com.