The days of imagining a distant future in which artificial intelligence renders established industries extinct are behind us, as ChatGPT sweeps its influence across academia. In a world where a chatbot can imitate the written human voice and tone, our very notions of what we call “human” are challenged, along with the ways we perceive knowledge and the value we attach to it.
The conversation about artificial intelligence replacing humans in various domains of life is not a recent one. Traces of it can be mapped back to the earliest days of the Industrial Revolution, when human labor was slowly but steadily being replaced by machines. Almost two centuries later, however, the conversation is no longer restricted to physical labor. As machines come closer than ever to mimicking human ways of thinking, OpenAI’s ChatGPT propels us into a new world where humans and AI must coexist. It is no longer about competition, but interdependence.
ChatGPT is not going anywhere. Even its harshest critics must accept that its versatile functionality and instantaneous responses make it a highly effective tool of assistance. Students are already mobilizing it for everything from mundane emails to school essays and even job interview answers, signaling its growing role in the professional world, unsettling as that may be. Only by confronting and recognizing this multi-functional productivity can we begin to untangle the harmful ways in which ChatGPT is changing how we perceive education. If students use it as a shortcut around serious academic and practical training, the consequences will reach across every domain of academia in the coming years as those students become professionals.
By offering students a pathway that does not demand the same time and energy to consume and engage with knowledge, ChatGPT has been revolutionary. Many students argue that it makes them more productive, allowing them to keep up with the pace and intensity of today’s academic world. Instead of surfing the internet for hours and sifting through research papers and articles themselves, they let ChatGPT do that exhaustive labor for them. By drawing on the publicly available information gathered into its training data, ChatGPT helps students streamline how they collect, aggregate and produce knowledge. Students can therefore filter out irrelevant information and retain only what is necessary for the academic deliverable at hand, be it an essay or a response paper.
However, despite its many functions, one must ask: what is the price of such convenience? By outsourcing this academic labor to a chatbot, students are essentially transforming the ways in which knowledge is produced and circulated within the professional world. Admittedly, the established modes of knowledge production in the contemporary world often reinforce oppressive power structures of white, Western and Eurocentric knowledge. Yet even within such marginalizing realities, the foundational value of any effective mode of knowledge production is academic integrity. A student who outsources an assignment to a chatbot also outsources the opportunity for critical thinking that the assignment offers. Even if that path yields short-term academic success, it amounts to a complete failure to critically evaluate one’s available resources.
This widespread use of ChatGPT is also a symptom of how academic institutions have treated education as an exercise in regurgitating conventional learning material rather than prioritizing innovative strategies that value critical thinking. If, in this day and age, an artificial intelligence model can produce effective and sufficient responses to a prompt designed to test human intelligence, then the learning strategies those institutions rely on need to be investigated and revised in the first place.
Beyond academic integrity, which is an essential foundation for any institution and individual student, ChatGPT also poses a threat to individuality and creativity. Using a chatbot to write an academic essay defeats the purpose of the task, not only because of the limitations of the software and its inability to be comprehensive or even cite sources correctly, but also because it considerably hinders the growth of critical thinking and logical reasoning skills.
While ChatGPT is increasingly relevant to the education and personal growth of individuals, it is also a significant subject of political discourse. “Prompting ChatGPT with 630 political statements from two leading voting advice applications and the nation-agnostic political compass test in three pre-registered experiments, we uncover ChatGPT’s pro-environmental, left-libertarian ideology,” reads a recent research paper by a group of German researchers. A technology with such a pronounced bias sweeping the globe bears striking resemblance to past political data disasters. In the 2010s, the Cambridge Analytica and Facebook scandal showed how a data-driven platform could sway political outcomes, and ChatGPT carries the same capacity to influence them. An AI system presenting politically biased output as factual information is no less dangerous.
On a similar note, a recent study by BlackBerry reveals that “51 percent of IT decision-makers believe there will be a successful cyberattack credited to ChatGPT within the year.” The tool makes it far easier for users to craft legitimate-looking phishing emails and malware, which could drive an increase in cyberattacks such as data and identity theft through phishing and denial of service, potentially reshaping the cyber threat landscape.
ChatGPT, a hotly debated subject since its release, has revolutionized the field of artificial intelligence. What makes it especially intriguing is that it was built to interact with users through sustained conversation and dialogue. The defects and inconsistencies that academics and researchers have been pointing out will likely be addressed in upcoming iterations of the language model. Whether or not we need to be alarmed by the realities of artificial intelligence, only time will tell.
Shanzae Ashar Siddiqi is Senior Features Editor. Ibad Hasan is Senior Opinion Editor. Email them at feedback@thegazelle.org