Man vs machine – how ChatGPT is changing the face of learning
Everyone is talking about ChatGPT – using it, feeling excited about its potential, or worrying about the impact it's going to have on our world.
This text-based artificial intelligence (AI) tool can answer questions and write essays, poems, or even film scripts. It’s a chatbot designed to interact with humans in a conversational way. Significantly more advanced and creative than its predecessors, ChatGPT has rapidly gone viral, igniting ethical debates, especially for educational institutions and the working world.
Having only been available since late November, ChatGPT has fast demonstrated just how powerful AI can be. Capable of generating text in a wide range of styles, ChatGPT was developed by OpenAI, a US-based AI research and deployment company headed by Jewish tech entrepreneur Sam Altman. Currently freely available, it opens a world of possibilities, but also raises concerns about educational integrity and AI replacing humans in the working world.
ChatGPT is built on large language models, computer programmes trained on masses of text to understand and generate language. Through this training, it has in a way developed an understanding of text and how to identify important points, says Professor Benjamin Rosman, so it can generally give you a coherent answer to a question. Rosman is a professor in the School of Computer Science and Applied Mathematics at the University of the Witwatersrand (Wits), where he runs the Robotics, Autonomous Intelligence, and Learning (Rail) Laboratory.
If you input a block of text, you could also ask ChatGPT to improve on it or write it in a different style, he says. “It’s very flexible, and what’s quite remarkable is what can be achieved by understanding language. This isn’t a robot, it doesn’t have any connection to motors or to images or sound, it’s purely working on text and as such, is able to communicate with people in many different and interesting ways. I think it’s transformative in what it’s capable of doing.”
Tech giant Microsoft views its potential as unlimited: even as it lays off 10 000 employees, it is pouring billions of dollars into OpenAI, excited by the possibilities of ChatGPT and future chatbots. Yet, as Mike Abel, the founding partner and chief executive of M&C Saatchi Abel, argued on social media this week, AI like ChatGPT could demolish jobs faster than alternative sources of employment can be found.
“There are going to be some huge ethical questions and humanitarian, employment challenges being asked of us soon,” he wrote. “It’s going to be a real people-over-profits conundrum. I’m all for progress, but I’m more for people.”
ChatGPT is also raising questions around plagiarism and cheating at universities and other educational institutions, especially with the rise of take-home exams during the COVID-19 pandemic. “ChatGPT can do pretty well on take-home exams on a lot of different topics and a lot of academics are freaking out about this,” says Rosman.
Yet, he argues, academics have faced similar issues for years, whether it be in the realm of exams or in academic essays. “There have long been different platforms online where students can do things like upload questions and other people online can answer them,” he says. However, that’s not to minimise the risk such AI poses.
“People are using ChatGPT to help refine their language which is a great use of the technology, but there’s certainly the potential for cheating or plagiarism,” says Rosman. Yet it also opens the door to new forms of education, especially when it comes to critical thinking around whether we accept everything we read. “I think that’s a culture that we’ve been moving towards anyway, but this perhaps accelerates that,” he says. “Perhaps being in invigilated settings is a better idea for certain projects, or we can set questions that might be harder for ChatGPT to answer.”
Though there’s been talk about building models that can determine whether text is AI generated, teachers would usually be familiar with a student’s style of writing, educators argue, even if these tools do add a dose of complexity. “Teachers know what their students are capable of,” says Rob Long, the director of academics at Yeshiva College. “It’s not difficult for staff to recognise that this is not this child’s essay, for example.”
Yet, Long argues, such tools can also be used positively, by for example, asking students to analyse and engage with AI-generated essays. “We have to get used to this fast-paced, changing world of technology,” he says. “We want to encourage kids to use their own thoughts, and it’s just about finding ways in which we can try and do that better in this context.”
Andries van Renssen, the executive director of United Herzlia Schools, agrees. “We’re investigating the dangers and the possibility of misuse of this new technology, but we’re approaching this with positivity because this is a gamechanger both for schools and individuals,” he says. “Our approach is not to control but harness this to our advantage.”
Professor Diane Grayson, the senior director of academic affairs at Wits, points out that ChatGPT has strengths and limitations. “There’s definitely a place for such technology, but as a tool, not as a substitute for people doing their own thinking and writing. Preventing misuse is directly related to taking a whole-institution approach to promoting academic integrity, including in the design of assessments.”
Could ChatGPT replace us all? “It is capable of being creative,” Rosman says, “which is something that many laypeople didn’t think that machines would be able to be. That’s quite scary for some people.”
However, human input is still central to ChatGPT’s efficacy. “You often need to do what’s known as prompt engineering,” says Rosman, “which means you have to put a lot of effort into what is the prompt or input that you give to the system to produce the desired output.” You need to tell it to write the essay in the style of Ernest Hemingway, for example, and if you’re not happy, you need to direct it where to expand on a concept.
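Rosman's point about prompt engineering can be sketched in a few lines of Python. The `build_prompt` function below is purely illustrative – it isn't part of ChatGPT or any OpenAI interface – but it shows the idea: a base task, a style directive, and follow-up refinements are layered into a single prompt that steers the system towards the desired output.

```python
def build_prompt(task, style=None, refinements=()):
    """Assemble a prompt from a base task, an optional style
    directive, and any follow-up refinement instructions."""
    parts = [task]
    if style:
        parts.append(f"Write it in the style of {style}.")
    for note in refinements:
        parts.append(f"Revision: {note}")
    return "\n".join(parts)

# Each round of dissatisfaction with the output becomes another refinement.
prompt = build_prompt(
    "Write a short essay about the sea.",
    style="Ernest Hemingway",
    refinements=["Expand on the fishing scene."],
)
print(prompt)
```

In practice, this iterative layering – specify, inspect, refine – is what Rosman means by putting effort into the input to get the desired output.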
“You can’t use its outputs blindly, you should know something about what it’s talking about to see if it’s garbage,” he says. “At some level, because it has learnt from texts online, these are likely to agree with one another, so it might not be paying attention to the right points.” This also raises concerns about reinforcing biases. “Ultimately, it isn’t human, and while it can give a lot of insights into what humans are interested in, it’s not perfect.”
This is just the beginning, though, as the technology ChatGPT uses is only in its infancy. “People are going to innovate, it’s going to be easier to build products, start companies, and build productivity and entertainment tools,” says Rosman. “Before, someone might have a crazy idea but they didn’t have the resources to make it happen. Now we’re starting to get to the point where these tools are available, and if you’ve some sort of exciting idea, you can actually try and build it.”
“We’ve just got to be careful to use it in the right way, as is the case with any powerful technology.”