Majd Alsado

_

in defence of plagiarism using LLMs

24 Jan 2023

Luddites destroy industrial machines, 1812

💡 intro to the problem

Plagiarism in academia is a long-standing problem, but with the rise of advanced language models, the issue has taken on a new form. With ChatGPT’s ability to generate comprehensive reports, essays, and other written assignments, students can now easily pass off AI-generated work as their own. Needless to say, this is a cause for concern for educational institutions that want to ensure their students are actually learning and honing the skills of their field of study.

🤔 is it plagiarism?

plagiarize, verb, pla·​gia·​rize
to steal and pass off (the ideas or words of another) as one’s own

merriam-webster

I’d argue it’s not.

LLM-generated text can be viewed as the modified output of a prompt you put in, not a direct copy of someone else’s work. It’s a net-new piece of writing generated by a model trained on billions of words of existing text. In that sense, it’s similar to using a thesaurus or a grammar checker: tools that are commonly used to improve writing and are not considered plagiarism. Is Grammarly considered plagiarism?

❕ we’ve seen this before

It may seem like we’ve never encountered anything like this before, but it’s not a new phenomenon. The introduction of early calculators, and later of mathematics engines like Wolfram Alpha, posed a similar challenge for educational institutions. After all, students could suddenly pass off complex mathematical computations as their own! Yet as time went on, these tools were embraced, and they are now an integral part of the education process.

In the same way, using language models to generate written assignments can be seen as a tool to improve writing skills and creativity. For example, it can help students to generate new ideas and perspectives, or to express their thoughts in a clearer and more concise way.

📚 what can educational institutions actually do

Not much. And they probably shouldn’t try.

As technology advances, AI-generated text will become increasingly indistinguishable from human writing, making it incredibly hard to tell plagiarism apart from original work. And heads-up… we’re pretty much already there!

Instead of trying to fight the waves of technology, it’s important that we ride them. This means embracing the fact that students no longer need to worry about semantics and language when writing reports, and instead get to focus on the content and ideas. This shift in focus can actually benefit our society as a whole. For example, with the help of LLMs students can now focus more on the research, analysis, and critical thinking aspects of their assignments, which are the key elements of academic writing.

🌀 in conclusion

In the future, we’ll look back and wonder why we ever cared about this problem. Advancements in technology have now allowed us to move beyond the limitations of language and focus on the important aspects of ideation, creativity, and output. The ability to generate written assignments quickly and efficiently will lead to large increases in productivity, and we’ll be able to focus on more important tasks and projects.

The risk LLMs present today for plagiarism is scary, and educational institutions are in an understandably difficult position. While a knee-jerk ban on this technology is a natural reaction, I don’t believe it is appropriate or practical.

Educational institutions instead need to find the right balance of guidelines and procedures for the use of LLMs, treating them not as plagiarism of other people’s work but rather as tools to improve writing skills and creativity, advancing humanity to our next evolution.

🔭 risks and the future

One risk I see on the horizon: once the amount of AI-generated text overtakes original human text, will we find ourselves in a silo or echo chamber of the same kind of text being rewritten over and over? A silo of creativity?

Or, is this just us as humans egotistically believing that an AI cannot be more creative than us?

I don’t know the answer, but I’d love to hear what you think.