Why We Should Embrace ChatGPT in Education - Not Dismiss It

I wanted to start this article with a response generated by ChatGPT to the following prompt:

“Should ChatGPT be allowed in schools and universities?”

Generated 24/09/23

---

Whether or not ChatGPT should be allowed in schools and universities is a matter of debate and depends on various factors. Here are some considerations:

Advantages of Using ChatGPT in Education:

  • Access to Information: ChatGPT can provide instant access to a vast amount of information, which can be beneficial for research and learning.
  • Personalised Learning: It can be used to create personalised learning experiences, adapting content and pace to individual students' needs.
  • Language Learning: It can assist in language learning by providing practice opportunities and instant translation.
  • Accessibility: It can make education more accessible to individuals with disabilities by providing text-to-speech and speech-to-text capabilities.
  • 24/7 Availability: ChatGPT is available around the clock, making it possible for students to get help and support outside regular school hours.
  • Cost-Efficiency: It can potentially reduce the cost of education by automating certain tasks, such as grading.

Concerns and Challenges:

  • Accuracy and Reliability: ChatGPT is not infallible and can provide inaccurate or biased information. Relying on it without critical thinking could lead to misinformation.
  • Over-reliance: Students may become overly reliant on ChatGPT, potentially hindering their critical thinking and problem-solving skills.
  • Privacy: There are privacy concerns when using AI systems in educational settings, as data may be collected and used for various purposes.
  • Equity: Not all students may have equal access to technology or the internet, potentially exacerbating educational inequalities.
  • Ethical Concerns: Using AI for grading and other tasks raises questions about fairness, bias, and the role of teachers in education.
  • Loss of Human Interaction: Excessive use of AI in education could diminish the importance of human-to-human interactions, which are crucial for social and emotional development.

Ultimately, the decision to use ChatGPT in schools and universities should be made with careful consideration of these factors. It can be a valuable tool when used appropriately, but it should not replace the essential role of teachers and critical thinking in education. Schools and universities should establish clear guidelines and ethical standards for its use and ensure that it complements, rather than replaces, traditional educational methods. Additionally, ongoing research and evaluation should be conducted to assess its impact on learning outcomes and address any potential issues that arise.

---

Earlier this year, I went on an overseas holiday to Japan, which involved a lot of trip planning. I had never been to Japan before, so I asked ChatGPT to make a two-week itinerary covering Kyoto, Osaka and Tokyo. Within 30 seconds, it generated a plan so good that, with a few tweaks, we ended up using it on our trip.

You have probably been well aware of the massive changes to the way you have been studying, completing assessments and learning at university this year. Of course, the cause has been the introduction of ChatGPT to the world by OpenAI in November 2022: a seemingly innocuous and magical chatbot that can do everything from planning an itinerary for an overseas holiday, to explaining plot points in movies, and, most usefully for university students, answering questions about course content and even “doing their work for them” by writing essays, answering homework questions, and even writing source code!

The user interface of ChatGPT. Source: NC State University

With the introduction of any new technological product, whether it be hardware, software, or a combination of both, we need to consider the impact it will have on society, most importantly the benefits and drawbacks it will bring. Specifically, we will look at how ChatGPT and similar tools built on GPT (Generative Pre-trained Transformer) models, such as GitHub Copilot, Google Bard and others, will change the way we study and work in computing.

So how does it work behind the scenes? ChatGPT is based on a large language model (LLM), which is trained on enormous amounts of text: web pages, books, journal articles, user inputs, and any other large source of text. Pre-training alone only teaches the model to imitate text, so it is then refined using human feedback, a process known as reinforcement learning from human feedback. When you ask it something, it uses a statistical model to write its response one word at a time, picking a word with a high probability of fitting what has been written so far. It also keeps track of what it has already output, in an attempt to keep the writing style consistent throughout. It works similarly with programming, as it was trained on billions of lines of source code in many languages. Finally, human testers rated its responses and gave feedback, which OpenAI continues to gather from public releases as well. Under the hood, Python is used for the bulk of the machine learning and statistical models. (MIT Technology Review)

Much of the machine learning behind ChatGPT is written in Python. Source: freeCodeCamp
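
To make the idea of “picking a probable next word” concrete, here is a minimal, purely illustrative Python sketch. It is not how GPT works internally: it replaces the neural network with simple bigram word counts over a made-up snippet of text, but it shows the same generate-one-word-at-a-time loop described above.

```python
# A toy illustration of next-word prediction: real GPT models use large
# neural networks over tokens, but the basic loop of "predict a likely
# next word, append it, repeat" is the same idea.

from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the web-scale text GPT is trained on.
corpus = (
    "the model predicts the next word "
    "the model learns patterns from text "
    "the next word is chosen by probability"
).split()

# Count how often each word is followed by each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start: str, length: int = 6) -> str:
    """Greedily append the most frequent continuation at each step."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no known continuation for this word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # prints a short, plausible-looking continuation
```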

This innovation unlocked a whole new world of potential, enabling it to do anything from generating initial ideas and structure for an essay, researching topics, summarizing long text, creating question banks, and reading and writing source code, to personal uses such as asking for advice, creating an itinerary, writing emails, generating recipes, and even learning foreign languages. There is seemingly no limit to what it can do.

Now, schools and universities have taken differing approaches to the use of generative AI technologies. Many schools have banned ChatGPT temporarily, but there is a view to allowing a school-friendly version of it in 2024, with pilot programs happening in South Australian schools in conjunction with Microsoft.

Universities have generally been friendlier towards these technologies. Some UNSW courses allow ChatGPT to be used to generate initial ideas for assignments, and at UNSW CSE, the beginner-focused dcc compiler (a friendlier way to compile C, where gcc or clang would typically be used) introduced a new “dcc-help” feature, which uses generative AI to help beginner programmers debug and troubleshoot their programs by generating explanations of errors and potential fixes; a rough sketch of how such a feature might work is shown below. Harvard is also trialing a similar program for its beginner CS courses. This can be especially beneficial when lecturers or tutors aren’t available after hours.
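
As an aside, here is a rough, hypothetical sketch of how an error-explanation feature along these lines might be wired up. This is not dcc’s actual implementation; the ask_language_model function is a placeholder for whatever generative AI service such a tool would call, and the prompt wording is invented purely for illustration.

```python
# Hypothetical sketch of a "dcc-help"-style error explainer.
# NOT dcc's real implementation; ask_language_model() is a placeholder
# standing in for whichever LLM API such a tool would actually call.

import subprocess
import sys

def ask_language_model(prompt: str) -> str:
    """Placeholder for a call to a generative AI service (hypothetical)."""
    raise NotImplementedError("plug in an LLM provider here")

def explain_compile_error(source_file: str) -> None:
    # Try to compile the student's C file and capture any error messages.
    result = subprocess.run(
        ["gcc", "-Wall", source_file, "-o", "a.out"],
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        print("Compiled successfully, nothing to explain.")
        return

    # Combine the source code and the compiler errors into one prompt,
    # asking for a beginner-friendly explanation rather than a full fix.
    with open(source_file) as f:
        source = f.read()
    prompt = (
        "You are helping a beginner C programmer. Explain this compiler "
        "error in plain English and suggest what to look at, without "
        f"writing the corrected code for them.\n\nCode:\n{source}\n\n"
        f"Compiler output:\n{result.stderr}"
    )
    print(ask_language_model(prompt))

if __name__ == "__main__":
    explain_compile_error(sys.argv[1])
```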

The main disadvantage of such systems, of course, is the accuracy and trustworthiness of the output they produce. Here’s a recent test where I asked ChatGPT:

“Did the Hunger Games movies ever get nominated for any Academy Awards?”

The correct answer is that they were never nominated for any Academy Awards, as shown on Wikipedia:

List of accolades received by The Hunger Games film series - Wikipedia

Now check out what ChatGPT generated:

---

Generated 04/09/23

Yes, the Hunger Games movies received several nominations for Academy Awards (commonly known as Oscars) throughout their film series. While they didn't win any Oscars, they were recognized in various technical and artistic categories. Here are some of the Academy Award nominations received by the Hunger Games movies: ...

---

This example is a perfect illustration of one of the weaknesses of generative AI: producing false information. And this was only the first error it made, with many more following it; you can see the full output, along with all the errors debunked, in the Appendix.

Another concern that has arisen from the use of ChatGPT, particularly in the IT industry, is the potential for it to introduce security vulnerabilities into code through the errors it makes. If it has trouble answering a simple question about a movie, who knows what else it might get wrong? Sure, we all know that ChatGPT is not infallible; it does make mistakes, and in a university computer science course no real harm is done. But what if someone uses it to write performance- or security-critical code? Now we have a big hole in major software used by millions or billions of people around the world. It has even gone as far as making up academic citations and references (BBC).

Many schools and universities have recognized the threat that ChatGPT could pose to education. The use of such tools is, of course, banned in many courses worldwide, because the learning experience becomes inauthentic and students risk becoming over-reliant on these tools, when the point of education is to help students become self-sufficient, independent and responsible learners.

We all know there are many rules students have to follow at any stage of their education, but many students, owing to their mischievous nature and desire to venture into the unknown, will still want to engage with these tools. If we make these tools “taboo” in education, this will simply fuel their desire to circumvent the policies that are put in place. As the NY Times phrases it, “the kids are going to figure something out”.

Now, as university students, we all have free and open access to all these tools. But we can actually learn from what school teachers have said about them: they save us time on repetitive tasks, like writing boilerplate code. In fact, I personally feel I have grown as a programmer since these tools were introduced, as I like to think critically about the code AI generates: what it did well, what could be improved, and how I would improve it. These discussions could be incredibly insightful to have in the lecture theater, and would improve the critical thinking skills of all students, from English, to math, to science, to programming, and any other field of study.

Now, I’m not asking all teachers to lift all restrictions on ChatGPT and allow it to be used in every educational scenario. What I think would be most beneficial is to introduce generative AI as part of any school curriculum or university program, demonstrate it in action, and, most importantly, emphasize three things:

  1. Generative AI is not perfect.
  2. Generative AI does not replace the need to do your own work.
  3. Generative AI must be used ethically.

These will be key to ensuring that future generations of students understand the ramifications generative AI has for their careers and lives.

Teachers and lecturers are in an excellent position to introduce and educate students about generative AI and the responsible, sustainable use of these technologies. Thinking critically can not only enhance the learning experience for students, but also keep our world safe from the dangers of AI. This is also a great opportunity to emphasize that students need to learn and think for themselves, and that AI is not a substitute for doing your own work and engaging with content. So far, tools like ChatGPT have been criticized for lacking critical thinking in their output; Kim Watts, a marketing professor, commented: “It doesn't show us any learning, it doesn't show any critical thinking.” (BBC) For now it struggles with open-ended questions, but in five years’ time it might be able to develop a sentient presence in our world and think for itself.

So far, AI is limited to running on servers, but who knows what it could become one day? Source: Adobe Stock

Classroom and lecture discussions could also incorporate the ethical issues of generative AI, along with AI in general. AI can be taught to learn implicit biases and to hold certain opinions about statements. OpenAI has set up ChatGPT to assert that, “As an AI language model”, it does not have any personal feelings or preferences, but what if it were exploited to push a particular political platform, or even to perpetuate misinformation? As we have seen, it is already quite vulnerable to user persuasion, and it is easy to give it “corrections”, regardless of whether they are correct or not. These weaknesses could even be weaponized, not just through AI in physical combat but also to spark a new era of polarization, where AI fights against humans and other AIs through cyberwarfare. There are so many parts to this conversation, and to make responsible decisions that will shape our future world, students need to be well educated and informed about these issues.

This quote sums up our predicament:

“Large language models aren’t going to get less capable in the next few years. We need to figure out a way to adjust to these tools, and not just ban them.”

Prof. Ethan Mollick, Wharton School, University of Pennsylvania.     

Being open to new things, and venturing beyond what we know, is so important for future generations and our world as a whole. We shouldn’t be keeping our students in the dark about ChatGPT, generative AI, AI more broadly, or computer ethics in general. What would happen if we made bad choices?

Think about what our world could become. Imagine it destroyed because of those choices; imagine a dystopia ruled by a tyrant; imagine AI ruling our world; imagine the fictional world of the “Hunger Games”, a Panem ruled by the Capitol, becoming reality. What would we become?

Katniss Everdeen standing in front of what was once District 12 of Panem in the Hunger Games. 

What would the world become if AI and technology are used without ethical consideration?

Once again, if we teach our children well, to think critically, question the information they are given, and use AI responsibly, we will have a strong, secure, and stable future. What is at stake is so much more than concerns over cheating and the “dumbing down” of education; this is on a whole different level. This is what we want for ourselves and our future generations. We need to nurture them positively, and we can only do so if we show them this technology in a safe and supportive environment.

After all, these generative AI tools are here to stay, and if we don’t teach our students and future generations how to use them effectively and responsibly, AI will use us.

ChatGPT is a work in progress, but who knows where AI will lead humanity in the future?

References

Appendix

“Did the Hunger Games movies ever get nominated for any Academy Awards?”

Generated 04/09/23

---

Yes, the Hunger Games movies received several nominations for Academy Awards (commonly known as Oscars) throughout their film series. While they didn't win any Oscars, they were recognized in various technical and artistic categories. Here are some of the Academy Award nominations received by the Hunger Games movies:

The Hunger Games (2012):

Nominated for Best Costume Design (Costume Designer: Judianna Makovsky)

The Hunger Games: Catching Fire (2013):

Nominated for Best Visual Effects (Visual Effects Supervisors: Janek Sirrs, Jeff Capogreco, Guy Williams, and Daniel Sudick)

The Hunger Games: Mockingjay - Part 1 (2014):

Nominated for Best Original Song ("Yellow Flicker Beat" by Lorde)

The Hunger Games: Mockingjay - Part 2 (2015):

Nominated for Best Makeup and Hairstyling (Makeup Artists: Love Larson and Eva von Bahr)

Nominated for Best Original Song ("The Hanging Tree" performed by Jennifer Lawrence, written by Suzanne Collins, Jeremiah Fraites, and Wesley Schultz)

While the Hunger Games movies didn't secure any Oscar wins, they received recognition for their contributions to costume design, visual effects, makeup, and music. These nominations highlight the technical and creative achievements of the films within the context of the Academy Awards.

---

Firstly, none of the Hunger Games films were ever nominated for a single Academy Award (Oscar), while ChatGPT claims multiple were. If you visit the Wikipedia page,

List of accolades received by The Hunger Games film series - Wikipedia

you will see that the series did receive many accolades and nominations, but none at the Academy Awards.

Secondly, Lorde’s “Yellow Flicker Beat” was indeed written and performed for Mockingjay - Part 1, but it was never nominated for Best Original Song at the Oscars; it received nominations at other film award ceremonies. (Similarly, “The Hanging Tree” featured in Part 1, not Part 2 as ChatGPT claims.)

There are many other errors in this output, and this is the key drawback of students and employees relying on ChatGPT.