Those shitbot LLMs don’t produce work comparable to humans’, imo, but there are methods instructors can use to mess with them, so if a student does submit AI papers it’s more a skill issue for the instructor.
I hope everyone who is proven to have submitted such papers is immediately kicked out of school with no refund. And I hope any federal aid they received is clawed back, forcing them to repay it.
EDIT: Looks like I struck a nerve because this comment’s score fluctuates from positive to negative. Those shitbot enthusiasts must be in here shilling.
I agree with the sentiment except for the last bit, which is a bit harsh. LLMs didn’t exist when I was in uni, but academic dishonesty did happen from time to time. It’d usually result in that assignment getting a 0 (sometimes enough to fail the whole module), though repeated offences brought more drastic consequences. LLM use is just a new form of academic dishonesty IMHO, albeit one that is harder to detect definitively.
There are a lot of idiots in university who do dumb things but then learn from the experience. Being too punitive would likely be a net negative, especially when talking about pulling their funding…
Really the broader point is nobody should have to take out life-ruining loans for a chance at education in the first place.
In the USA, “contract cheating” (outsourcing your work to someone else) is, and has long been, punishable by expulsion.
Outsourcing work is bad enough, but if students learn to do academics by relying on lying chatbots, then eventually they’re going to dump that slop into the open world, and it will be a net negative for the entire human race. It’s a whole new type of harm they’re committing, incomparable to anything in the past.
Based on that description of an essay, I don’t think you’ve been to college.
Writing an essay isn’t supposed to be a test of what the student has learned; it’s a test of what the student has found out.
Essay writing is (theoretically) an exercise in research. Maybe you’ll do primary research via the scientific method, conducting and reporting on experiments, but vastly more likely you’re supposed to find essays written by other researchers and cite their work, drawing conclusions based on their data.
Doing this properly requires critical thinking and logic skills…which many courses don’t bother to teach. Instead, they dwell mainly on the clerical aspects of the assignment, caring more about font selection and document formatting than the actual content of the ideas.
What is a take-home open-book test other than busywork? I’m a flight instructor; I teach people how to fly airplanes. I don’t care how much or how little time a student studies, they’ve got to show me they can fly the airplane before I send them to the examiner. College professors seem to approach their job as “How can I make this hard on my students?” I’ve always seen my job as “How can I make this easier for my students?” Because my job is to teach a skill, not gatekeep a decent living.
With respect, it sounds like you have no idea about the range of nonsense human students are capable of submitting even without AI.
I used to teach Software Dev at a university, and even at MSc level some of the submissions would have paled in comparison to even GPT-3 output. That said, I didn’t have to deal with the AI problem myself. I taught just before LLMs came into their own - Textsynth had just come out, and I used it as an example of how unintentional bias in training data shapes the outputs.
While I no longer teach, I do still work in that space. Ironically, the best way to catch AI papers these days is with another AI. That capability is built into the plagiarism-checking software, which breaks down where it detects suspicious passages and why it thinks they’re suspicious.
Human students, and non-students, were the training data set. The LLMs will never reach even 94% accuracy against that, no matter how many resources you throw at them. The AI is always, always, always going to be worse.
If LLMs are going to improve society in any way, it’ll be by killing the delusion that writing MLA-formatted essays has anything to do with education.
Woe is you, being asked to prove you learned anything, open book and at a time and place you decide on. Skill issue.