Note: These ideas are worth what you pay for them.
GPT-3-Enabled Plagiarism
Every few months there’s a new round of handwringing over plagiarism enabled by GPT-3. It’s been two years, so that handwringing has even reached philosophy departments. Sarcastically, a voice inside me said, “EZ, just use GPT-3 to detect plagiarism.” Then another voice inside me, one I hadn’t heard from since my time working on neural network optimal control, piped up and went, “I mean, we can,” and now here I am writing a blog post about it.
A Rough Sketch of a GPT-3 Detecting Neural Network
Our goal is a squishy one: take an essay from a student and estimate how likely it is that GPT-3 wrote it. GPT-3 works by breaking whatever prompt we give it into a list of tokens that it feeds into a transformer, so let’s take some inspiration from that and convert the essay into a list of tokens. We then feed n sequential tokens into some hypothetical neural network, which spits out the probability that GPT-3 produced that sequence of tokens. This is a classification task, and in my opinion one well covered by supervised learning. That leaves two questions: (1) what is the structure of this neural network, and (2) where can we get training data?

For (1), I don’t know, but my guess is that some flavor of long short-term memory residual neural network would do the trick.

For (2), that’s the fun part. We need access to a database of “real” student essays along with their essay prompts. We feed those prompts into GPT-3 to generate new “artificial” essays. We tokenize the “real” essays and pair each token sequence with a 0, then tokenize the “artificial” essays and pair each token sequence with a 1. Because every “real” essay yields exactly one “artificial” counterpart, the resulting training and testing dataset stays well balanced between “real” and “artificial” inputs, and its size is limited only by the number of “real” student essays with prompts we can obtain.

I’m glossing over a lot of fine implementation details, but now you have a neural network that you can feed essays into and get back a probability that the essay was written by GPT-3 (you can obviously repeat this procedure for GPT-3’s competitors). Now that you have that probability, how should an instructor use it, given that it will likely never be exactly 0% or 100%? That I leave to the instructor’s discretion.
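To make the dataset-construction step concrete, here’s a minimal sketch in Python. It’s not the implementation, just an illustration of the labeling scheme: the `tokenize` and `make_examples` functions are hypothetical stand-ins (a toy whitespace tokenizer instead of GPT-3’s actual byte-pair encoding), and the essays are placeholder strings.

```python
def tokenize(essay):
    """Toy tokenizer: split on whitespace.
    A real pipeline would use byte-pair encoding like GPT-3 does."""
    return essay.lower().split()

def make_examples(essays, label, n=5):
    """Slide a window of n sequential tokens over each essay and
    pair every window with a label (0 = "real", 1 = "artificial")."""
    examples = []
    for essay in essays:
        tokens = tokenize(essay)
        for i in range(len(tokens) - n + 1):
            examples.append((tokens[i:i + n], label))
    return examples

# "Real" student essays get label 0; GPT-3 outputs get label 1.
real = make_examples(["the student wrote this essay by hand last night"], label=0)
fake = make_examples(["the model wrote this essay in half a second flat"], label=1)
dataset = real + fake  # roughly balanced if essay counts match
```

Each `(window, label)` pair would then be one supervised training example for the hypothetical classifier.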
Now you might say this approach is terrible and carries with it all the same criticisms leveled at Turnitin. And to that I would say, “Yes, you’re correct, Turnitin (acquired in 2019 for $1.75 billion) does draw a lot of criticism.”*
*I realize that my writing lacks the inflection and tone that are present in speech, so I’d recommend you read this twice: the first time dripping with sarcasm, and the second time seriously, dripping with greed. Once you’ve done that, pick the version you find more amusing.**
**Unless you’re a venture capitalist with some extra money to burn, in which case only read it seriously. I can be contacted here and will even throw in a blockchain reference about how we can mint their scores into NFTs, so that there is an open record showing that either they have a lot of essays that are plagiarized, or a lot of good essays. This even opens up the possibility of a secondary market where students can purchase tokens representing “low probability of plagiarism” to improve their reputation, while simultaneously providing a financial incentive for students to write novel essays. This would allow profits to be made both from the instructors checking for plagiarism and from the students. As there are, on average, 25 students for every instructor, and Turnitin only serves instructors, this provides a rough valuation of ~$45 billion.
Want More Gereshes?
If you want to receive new Gereshes blog posts directly in your email when they come out, you can sign up for that here!
Don’t want another email? That’s OK, Gereshes also has a Twitter account and a subreddit!