It is and will always be a strange, somewhat archaic element of academia that students are required to arrange the English language in such a way that it is both "unique" and "meeting the assignment requirements." AI writing bridges this gap. I don't like people cheating, but it is inarguable that submitting prompts to a word generator will meet this requirement nine times out of ten.
Plagiarism is a no-no regardless of where the material came from. HOWEVER, using AI as a proof-reader is an excellent use of technology! Also, AI can be an excellent aid in research because it can access humongous amounts of literature and journals at blistering speed, which makes identifying, and often accessing, the most useful references very quick. AI's ability to summarise large amounts of text is again a useful tool, as is its ability to locate the source of a known quote, or to explain professional vocabulary and/or methods and principles. Unlike a real-life tutor, it never tires of re-explaining when one is trying hard to 'get' something that one's neurons are just not finding intuitive! ;-D
1:39 Ironically, grammatical errors are more a sign of a human than a machine. ChatGPT is far less likely to make simple grammar or spelling mistakes.
But you'd never just ctrl+c, ctrl+v an entire answer to a prompt onto the assignment, that's idiotic. I mean, yeah, you would at the start, but then review it, tweak it, read it through, test it, etc. You just use GPT for the annoying raw material that you then tweak accordingly. Ofc you might not get everything, but it's not that hard to get the "AI" out of the sentences with a few tweaks. Maybe even go for weaker expressions, just try your best to turn it into something readable.
"I don't even think an incoming student would make that mistake" I beg to differ. I got bleeding duodenal ulcers during finals and lost 63% of the blood in my body. I then wrote a paper while suffering from that level of blood loss. Turns out that blood loss REALLY destroys your grammar. I skipped multiple grades and aced 400 level writing classes by the time I was 18. Blood loss made me type the sentence, "It's like you couldn't've or wouldn't've not done that if it was you, y'know?" and think it was appropriate to turn in on a paper.
I would rather you'd given them, like, three essays and not given them any hints about their composition... With this it's a coin flip and doesn't say much.
I mean most of them did, but more importantly AI detectors have gotten better now. If you use AI you pretty much have to reword it using your own words or else it will be flagged.
@@pawelpow When considering this monstrous being called ChatGPT, we need to remember one vitally important thing: just because it can write assignments and other texts FOR you, it doesn't mean you UNDERSTAND subjects any better! Quite the contrary, actually. As with any technology humanity uses, it makes us lazier and more confused. Using ChatGPT is like driving a car: with the latter, you don't exercise your body, and with the former you don't exercise your brain. AI still can NOT replace the knowledge you gain from engaging in lively discussions with your fellow students, during study group work, and in class with the professor. It will take decades before AI even comes close to replacing this valuable educational process, if it is even possible. A whole other thing is that technology in general has been alienating humans for centuries, and with LLMs that trend is drastically accelerated. Read Hartmut Rosa's theories to understand more of this. They are based on many of the most important thinkers of the Enlightenment and (Post)modernity, including Charles Taylor, Karl Marx, Axel Honneth, Arthur Schopenhauer and many others.
@@pawelpow AI detectors are goofy and not accurate at all. Anyone who actually trusts a score given by an AI detector should not be grading essays.
If they couldn't, that'd be damn sad. ChatGPT is terrible at writing. Decent to get ideas started... somewhat? Not even that, imo. Good for learning basic code syntax, and good for producing LaTeX files sometimes; other times it's just like, damn, I'll do it myself, bro.
How was the prompt generated? Did someone knowledgeable about how to interact with ChatGPT create it? Did anyone review the text to correct clear errors? Was the prompt optimized to get better results? Was the prompt divided into paragraphs for clarity? Simply comparing "ChatGPT" vs. human cannot show full results if the prompts and responses weren't tested. For example, you can ask ChatGPT to include some grammar errors to trick a professor. Otherwise this video has no validity in showing good results.
As a professor, are you okay with students who utilize AI as a tool for writing? Imagine a student prompting AI to devise a research plan or act as a grammatical advisor. How would you view that? Is that infringing upon dangerous territory when it comes to academic integrity? And do you think incorporating AI into your work as a student is harming or improving its overall quality? Also, I'm sorry to bombard you with questions! :') Please don't feel compelled to answer, I was just curious as to what an educator might think of how students, like myself, are employing AI in ways that aren't an absolute cop-out.
I talked to a lot of my high school teachers. They all knew who used Cheat-GPT, but they didn't have proof, and they couldn't force the student to prove it wasn't AI. It was infuriating to everyone, and I just couldn't understand why the district couldn't just give more power to the teachers on this.
I bet so many high school students are using AI now that there's basically not much to be done at this point. I wouldn't be surprised if a majority of kids were using it. Not saying all of them are copy-pasting, but there's definitely more investigation that needs to be done about how to approach it. Most likely, teachers and schools need to advocate that students use AI as another tool for research and building a framework, but not as the work itself.
@@ThahnG413 agreed. I want AI to be shown as a more sentient but less reliable Wikipedia, but it's also up to the administrators to actually set and enforce rules to outlaw AI cheating, though I understand that this is hard to do.
@zigzag321go Well, in my opinion it's more important than ever that classes do their best to teach important subjects, explain why they are important to learn, and let the students get hands-on, because without these three things, students will use AI to no end. If they aren't learning anything in class and just letting the AI do the work for them, in the most extreme cases, it's pretty much the same as them not attending at all. Not all cases should be that bad, but it certainly could be.
The 2 professors who don't specialise in English: Yep I'm pretty confident I know which one is Chat GPT. [Gets it right]. 1 unfortunate "professor" of English: That one is Chat GPT! [Immediately starts backpedaling when it turns out he's wrong]. 3 is not a meaningful sample size, but this is a rather unfortunate data point in the current discussion climate around DEI hires.
I'm a doctoral student who's just started doing essay marking, and I can tell you, EVERYONE in the faculty is on-edge about AI. There's no one method of picking it out, it's basically what the professors in the video said, you go with your gut and look for things like a lack of direct quotations, lack of in-text citations, unnecessary wordiness and nonspecificity - AI text often uses broad descriptive terms rather than the subject-specific jargon (as in the "sophisticated mathematical techniques" example). The whole thing raises real questions about whether research essays, those types of essays where you basically summarise the existing literature, will even be a thing in the future - I foresee a lot more project-based tasks and oral presentations with spontaneous Q&A taking the place of essays in the future.
But perhaps even more than plagiarism concerns, there's also a lot of concern about using AI for marking. As a casual tutor, most of my salary comes from marking papers; if AI takes that responsibility over then okay, I have more time for research, awesome, but also my claimable hours are suddenly drastically reduced. Yes, we'd still have to 'supervise' the AI and do quality assurance, but we'd be getting paid for 2 hours rather than 20. I foresee many more white-knuckled faculty meetings about how to integrate AI into the teaching and learning sphere to come...
Also, essays that you have to write as a test on-site will become more common. Instead of being assigned an essay where you know the topic and have to hand it in, you have to sit down for a timed test where you don't know what topic is going to be thrown at you. It's already part of many exams for many subjects, and it'll just have to become more common.
@@Parrot5884 What is the goal of the test? Testing whether they can write without AI makes no sense.
I mark as a TA and you ignore the obvious - feed the essay prompt into an AI yourself and if an essay resembles what the AI spit out then you can subject the essay and the writer to additional scrutiny.
No, you just supervise 10x the number of students. Same number of hours, significantly more things to do. Increased productivity is not always the easiest to handle, so you'd likely also get a raise
Once AI can accurately mark work, we won't really have a need for teachers apart from supervision. Currently, half of my classes are taught via YouTube tutorials and the other half through websites like Education Perfect. While the teachers still assign homework, run assessments, and choose which tutorials we use, apart from that they are just supervising us. I have no doubt that AI could take the place of my teachers in a teaching capacity if given a course structure. The only problem would be actually keeping students on task. I see university as less and less valuable when I can get the same education from YouTube, education websites, and AI at almost no cost. I am a high schooler so it probably varies at university, but from a high school perspective, teachers are becoming obsolete.
Every student should watch this before submitting something directly from chatGPT.
classic
The problem with this "experiment" is that they know beforehand that there's one essay written by ChatGPT and one written by a student, so they just have to tell them apart. They wouldn't be able to do the same thing in a natural setting, where they have to grade papers from multiple students with no previous knowledge.
Yeah, a better experiment would be them not knowing whether there even is one written by ChatGPT, and giving multiple rounds of, say, 3 essays each, where it might be 0/3 AI or even 3/3.
Five essays with one being AI would be a better test imo
They need multiple lecturers, with some being given papers that have all been written by humans and some having papers that have all been written by AI.
@@ChaoticKrisis The funniest one would be if they gave essays that were all AI and asked them to guess which one is written by a human.
@@ChaoticKrisis they'd all fail lmao
Fun video and nice idea. Surprised at the low views, hopefully it blows up!
It's rare that I ever watch these sorts of videos with low view counts, but the thumbnail looked good and this was amazing! Super well edited and fun 😁
I use ChatGPT frequently, as a tool not a replacement. I like to get bullet points on what I should research for a particular assignment. And then I go find sources for those bullets and research the topic myself.
Exactly!
I am doing it this way too. Am now writing my master thesis. When it is over I plan to continue using AI for something else.
ChatGPT is also amazing at answering questions. How does X work? How do I do Y? What is the purpose of Z? Whenever I don't understand something in a class I will typically ask ChatGPT to explain it to me.
Honestly, I resent Chat GPT. Because of it, my college is making us write essays on paper during class and it sucks. I'd like it much more if we could just write patiently on a computer in the comfort of our homes but the paranoia of AI ruined that.
Agreed. Since AI, my essay writing has only regressed because we didn't get to do essays, so the entire class was just yapping.
I love ChatGPT. It single-handedly got me A's in multiple classes.
Your college is the problem. No one else does that lol
You must be an English major. There is no way my research papers can be written in person. Sucks to be you, I guess, but ChatGPT's been an enormous help to me. My grades have significantly improved.
@@primekrunkergamer188 but you don't understand anything. Good for your parents who pay for your worthless education. Nice job there 😊
Chat love afar with Ava goes crazy
I found it interesting that the professor noted that ChatGPT often uses a formulaic structure, such as starting the first sentence with a direct readout of the prompt (2:43). This is something to be aware of when using ChatGPT and other language models.
is this an AI comment?
@streetBMX62 yeah lol
Tested the same prompt with a Llama 3 finetuned for storywriting and it started in a way different fashion; some models have their default prose way more varied: "Alex Garland's Ex Machina (2015) is a psychological sci-fi thriller that delves into the world of artificial intelligence, surveillance, and the blurring of lines between human and machine. Through a surveillance studies lens, this film critique will examine how Ex Machina critiques the pervasive nature of surveillance in modern society, the ways in which it reinforces existing power structures, and the implications of creating autonomous beings that can monitor and manipulate human behavior."
But the professor is wrong. Like not even close to being right
As a science student, I am very similar to the engineering professor, in that I can only understand the engineering articles (English is my second language).
I am thoroughly entertained and informed
hah I get it
If you submit stuff directly from AI, you are being lazy²: generate it, read it, and always tweak it.
I made a joke SRP proposal with the intention of using the word delve as much as possible, I still tweaked it.
or compromise. Type it out yourself, simply using the AI as the information.
ChatGPT is probably good for an outline. Make sure not to put in stuff that you weren't taught in class, which would make it obvious.
That's a fascinating take. My classes contained very little information; I had to rely _heavily_ on YouTube creators in order to fill in the missing gaps. All of my papers included information not taught in class.
what if you read articles about it outside of school?
When you get to an upper level, you are expected to add content not taught in class.
I like to use it during the revision process. Sometimes it gives good ideas when you want to rephrase a few sentences and just can't find the right words. I also give it the assignment rubric and then ask it to grade my paper based on the requirements.
As a student myself, I try to make my papers look as if they were written by ChatGPT, while they're not.
I've graded papers before. Trust me when I say that you're not... unless you're trying to go off topic. lol
@@connorjohnmark strategies my man..... strategies
@@connorjohnmark yeah, but ChatGPT doesn't go off topic if you tell it not to
@@primekrunkergamer188 You need to manually ensure that it knows what the topic is in the first place. Many students don't know the topic. lol
so happy this came up in my feed! need this video to blow up
A lot of papers are like 30% chatgpt and 70% human. When students use ChatGPT to write a paper, that doesn’t mean that they literally only use ChatGPT to write that paper.
Yeah, I've done this for all my assignments this year and gotten A's :)
To be honest, the 70/30 is a good split. In the real world you aren't graded on your ability to brain-dump and regurgitate facts. You have access to the internet and reference material at all times; it's up to you to be able to find the relevant information and use it in the proper context. College should be teaching you to do that. As an IT professional, do I know how to code and make my own apps and product solutions? Yes. Do I? Most times no, because why would I spend months recreating something someone else already has (and has done better than I could) when I could take their product AND documentation, customize it to my needs, and put it out into my business at the same or cheaper cost than something I would have had to develop from scratch and then eventually hire staff to support and continue developing? ChatGPT is just a tool like any other.
I love ChatGPT. It has helped me so much: it helps me study for exams, do assignments, save time, and focus on extracurriculars. CHATGPT IS THE BEST
I think 30% is an acceptable amount. A lot of classes have a no tolerance policy for AI but really using it a little bit is just like using google or having your friend help you.
I am a TA. I can usually tell if a student submitted Ai generated work, but school policy requires concrete proof. So unless we can prove that it was copied from somewhere, we have to let it pass.
As a student, I feel like I have a pretty good grasp of it as well. I see a lot of people use it for discussion posts, in particular. That being said, yes, there's no concrete proof, and AI detectors are good but not great. I remember using an AI detector on an essay I researched myself, where I used a lot of direct quotations, and Zero marked it as 50% AI and highlighted a lot of the quotations.
I think the professors are caught in between students who just want to escape 'education' & get into their actual lives with as little debt as possible & administrations who just want to rake in as many student-loan bucks as possible. Neither student, admin, nor student loan financier are particularly concerned with 'learning'
What I do to bypass all AI detectors: first, use all the AI tools to brainstorm ideas and make a solid outline (ChatGPT, Gemini, Claude, googleaistudio…). Then I ask each of these tools to write each part of the essay. Read all of them, then combine them in a PDF. Now input that very same PDF into all those tools again. With a well-written prompt, all those AIs will point out whether or not your paper is likely written by AI and how to make it read naturally again. Now read through all versions to take the best part from each.
At that rate just do an hour of research, cite some sources, write a paper. Idk what classes ur taking, but as an AP student I probably write about 3-5 in-class essays a week and it's a good skill to have. AI can only make an average paper as it's the average of everything we've put out on the internet.
Signed in just to like this video, y'all are putting out good content!
I also really love how realistic the professors are about the use of ChatGPT, as they say its here to stay and students can learn a great deal from it
Although I use ChatGPT on my assignments, I DON'T ask ChatGPT to do the overall assignment. I use ChatGPT for word ideas: if I don't know the one word for "very angry," I'll ask ChatGPT for good words that mean "very angry" that I can use in my assignments. Sometimes, I also use ChatGPT for revising ideas if my sentences don't sound good. But all the ideas for the assignment come from me, not from ChatGPT. I've done this through high school for book analysis and graduated successfully.
A thesaurus is really really cheap nowadays.
@@EvilThunderB0lt yeaaa, maybe using ChatGPT for a simple synonym is a bit excessive LOL
@EvilThunderB0lt sometimes there are specific phrasings that are hard to find an alternative for via a thesaurus or a quick Google search. ChatGPT is really helpful in those situations.
The ignorance and presumptions from teachers about this technology will lead to so many students being mislabelled as AI. The misconception that ChatGPT always makes spelling mistakes, is overly verbose, and can't speak in a natural tone is concerning. It's still an emerging technology, but educators should be better than this.
AI _might_ be good at general writing, but often with a certain sanitized result; it lacks emotion. With technical writing, however, AI gets it wrong--gloriously wrong--yet plows on with unearned confidence. I asked AI to explain the number of combinations possible on a Rubik's cube. It got the *right answer* (because it saw the correct number repeated thousands of times in its training data). But when it attempted to walk through the math to support that answer, it was pathetically wrong and without any self-awareness (as one would expect) of its errors. As my friend pointed out, "it's a Large Language Model, not a *Math* Model". I've seen this pattern repeated dozens of times: well-written reports, well-constructed sentences, full of factual nonsense.
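For anyone who wants to check the number the commenter is referring to, the standard counting argument fits in a few lines of Python. This is just a sketch of the well-known formula, not whatever derivation the AI attempted:

```python
from math import factorial

# Reachable Rubik's cube states:
# - 8 corners can be permuted (8!) and twisted (3^7: the last
#   corner's orientation is forced by the others)
# - 12 edges can be permuted (12!) and flipped (2^11: the last
#   edge's flip is forced)
# - only half of the corner/edge permutation parities are
#   reachable by legal turns, hence the final division by 2
corners = factorial(8) * 3**7
edges = factorial(12) * 2**11
total = corners * edges // 2
print(total)  # 43252003274489856000, i.e. about 4.3 * 10^19
```

That ~4.3 × 10¹⁹ figure is the number a model will have seen quoted everywhere, even if it can't reproduce the derivation.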
Similar to what happens when you ask it to write code
All of this ChatGPT paper-writing is going to make people return to the oral tradition to pass literature & English composition classes.
That or there's gotta be a lot more in person writing of papers.
or just a phasing out of lengthy papers as an academic concept. Things like reports will be replaced with presentations with live questions.
As a student, I can tell when my fellow students are using ChatGPT better than my professors can. I've been in group projects where I knew, without asking, that certain members were using ChatGPT for their portion just by reading it, and I came prepared to defend myself and my part of the project against any allegations, but the professor ended up totally oblivious
I LOVE CHATGPT, LOVE IT!!!!!!!
I love how whoever wrote these ChatGPT prompts clearly wasn't trying. I've used ChatGPT on a history essay once and got 98% for it. Simple: take notes online in a big google doc, paste your notes into GPT-4, paste in the marking criteria and the guidelines of what you want it to do, set the word count, and you have your essay. Now read through it and change as much as you can to not get picked up by AI detectors
Not really something to brag about 😬
@@cheeseball5030 there is no shame in cheating dude whatever gets you ahead
@@cheeseball5030 Cope
@@cheeseball5030 What's wrong with bragging about making an excellent essay?
@@mujtabaalam5907 "making"
This was an awesome video 😊 Keep up the great work!!!
I use it all the time but even just for school
Future is here grandpa deal with it
It is and will always be a strange, somewhat archaic element of academia that students are required to arrange the English language in such a way for it to be both "unique" and "meeting the assignment requirements." AI writing bridges this gap. I don't like people cheating, but it is inarguable that submitting prompts to a word-generator will meet this requirement nine times out of ten.
Plagiarism is a no-no regardless of where the material came from. HOWEVER, using AI as a proof-reader is an excellent use of technology! AI can also be an excellent aide in research, because it can scan humongous amounts of literature and journals at blistering speed, which makes identifying, and often accessing, the most useful references very quick. Its ability to summarise large amounts of text is again useful, as is its ability to locate the source of a known quote, or to explain professional vocabulary and/or methods and principles. Unlike a real-life tutor, it never tires of re-explaining when one is trying hard to 'get' something that one's neurons are just not finding intuitive! ;-D
1:39 Ironically having grammatical errors is more of a sign of it being human than machine. ChatGPT is way less likely to have simple grammar or spelling mistakes.
Wow this video's sick😎
clown grammar
@@FemboyEngineer video’s is “video is.” Nothing wrong there
@@FemboyEngineer wheres ur capital letter and full stop mr genius
The engineering professor seemed like a good professor tbh.
Do it again using o1
But you'd never just ctrl+c, ctrl+v an entire answer to a prompt onto the assignment, that's idiotic. I mean, yeah, you would at the start, but then review it, tweak it, read it through, test it etc. You just use GPT for the annoying raw material that you then tweak accordingly. Of course you might not get everything, but it's not that hard to get the "AI" out of the sentences with a few tweaks. Maybe even go for weaker expressions, just try your best to turn it into something readable.
4:00 LMFAO
"I don't even think an incoming student would make that mistake" I beg to differ. I got bleeding duodenal ulcers during finals and lost 63% of the blood in my body. I then wrote a paper while suffering from that level of blood loss. Turns out that blood loss REALLY destroys your grammar. I skipped multiple grades and aced 400 level writing classes by the time I was 18. Blood loss made me type the sentence, "It's like you couldn't've or wouldn't've not done that if it was you, y'know?" and think it was appropriate to turn in on a paper.
"Losing 40% or more of your blood volume will usually lead to death without immediate and aggressive life-saving measures."
this is why you use chat gpt as a rough draft
I'd rather you had given them like three essays and not given them any hints on the composition of them...
With this it's a coinflip and doesn't say much
You can ask ChatGPT to make your essay sound like a high school or freshmen college student wrote it.
tldr... no they can't 😂
I mean most of them did, but more importantly AI detectors have gotten better now. If you use AI you pretty much have to reword it using your own words or else it will be flagged.
@@pawelpow When considering this monstrous being called ChatGPT, we need to remember one vitally important thing: just because it can write assignments and other texts FOR you doesn't mean you UNDERSTAND the subjects any better! Quite the contrary, actually. As with any technology humanity uses, it makes us lazier and more confused. Using ChatGPT is like driving a car: with the latter you don't exercise your body, and with the former you don't exercise your brain.
AI still can NOT replace the knowledge you gain from engaging in lively discussions with your fellow students during study group work and in class with the professor. It will take decades before AI even comes close to replacing this valuable educational process, if it is even possible.
A whole other thing is that technology in general has been alienating humans for centuries and with LLMs that trend is drastically accelerated. Read Hartmut Rosa's theories to understand more of this. They are based on many of the most important thinkers of the Enlightenment and (Post)modernity, including Charles Taylor, Karl Marx, Axel Honneth, Arthur Schopenhauer and many others.
@@pawelpow AI detectors are goofy and not accurate at all. Anyone who actually trusts a score given by an AI detector should not be grading essays.
7:02 😭😭😭 I need him to be my teacher
study advanced math, engineering, physics and get on his undergrad mechanical engineering course.
would've been better if ALL the papers were AI and you didn't tell them until the very end
The lack of grammatical attention to speech coming from professors is what was much more disconcerting for me.
They all failed massively. Humans are repetitive, AI isn't. Do they not speak to people?
Very cool and funny video. WashU is so cool
he called it ex WHAT
THAT’S WHAT I’M THINKING
Do professors even care any more? Do they need the extra work of checking? For what? For peanuts? And all the evaluation and auto-evaluation nonsense hanging over them?
lol you are doomed in life if you don't see an issue with copy and pasting from ChatGPT
@@Duffyyy94 no actual reason just screaming at the clouds lol
Business students are the furthest thing from "America's finest"
if they couldn't, that'd be damn sad. chatgpt is terrible at writing. decent to get ideas started... somewhat? not even that imo. good for learning basic code syntax, and good for producing latex files sometimes--other times it's just like damn, i'll do it myself bro
haha love this video!
How was the prompt generated? Did someone knowledgeable about how to interact with ChatGPT create it? Did anyone review the text to correct clear errors? Was the prompt optimized to get better results? Was the prompt divided into paragraphs for clarity?
Simply pitting "ChatGPT" vs. human cannot show full results if the prompts and responses weren't tested. For example, you can ask ChatGPT to include some grammar errors to trick a professor.
Otherwise this video has little validity in showing meaningful results.
Anyone notice which professor didn't get it correct? The so-called English teacher lol!
To be honest, they pick up on whether the students stroke the professors' egos by using the jargon they teach in their class.
I wouldn't say that appropriately using relevant technical terms is "stroking the ego" of the instructor.
I'm a professor of psychology and I can tell you it is very easy to discern the Chat GPT papers from the true student papers.
As a professor, are you okay with students who utilize AI as a tool for writing? Imagine a student prompting AI to devise a research plan or act as a grammatical advisor. How would you view that? Is that infringing upon dangerous territory when it comes to academic integrity? And do you think incorporating AI into your work as a student is harming or improving its overall quality? Also, I'm sorry to bombard you with questions! :') Please don't feel compelled to answer, I was just curious as to what an educator might think of how students, like myself, are employing AI in ways that aren't an absolute cop-out.
I talked to a lot of my high school teachers. They all knew who used Cheat-GPT, but they didn't have proof, and they couldn't force the student to prove it wasn't AI. It was infuriating to everyone, and I just couldn't understand why the district couldn't just give more power to the teachers on this.
I bet so many high school students are using AI now that there's basically not much to be done at this point. I wouldn't be surprised if a majority of kids were using it. Not saying all of them are copy-pasting, but there's definitely more investigation that needs to be done about how to approach it. Most likely, teachers and schools need to advocate that students use AI as another tool for research and building a framework, but not as the work itself.
@@ThahnG413 agreed. I want AI to be shown as a more sentient but less reliable Wikipedia, but it's also up to the administrators to actually set and enforce rules to outlaw AI cheating, though I understand that this is hard to do.
@zigzag321go Well, in my opinion it's more important than ever that classes do their best to teach important subjects, explain why they are important to learn, and let the students get hands-on, because without these three things, students will use AI to no end. If they aren't learning anything in class and just letting the AI do the work for them, in the most extreme cases, it's pretty much the same as them not attending at all. Not all cases should be that bad, but it certainly could be.
normal people can tell the difference between ai and human writing though ehhhh
Typical - some wrong, some correct.
In either case a woke NPC is writing it.
The 2 professors who don't specialise in English: Yep I'm pretty confident I know which one is Chat GPT. [Gets it right].
1 unfortunate "professor" of English: That one is Chat GPT! [Immediately starts backpedaling when it turns out he's wrong].
3 is not a meaningful sample size, but this is a rather unfortunate data point in the current discussion climate around DEI hires.