"Dear hiring manager, Hi!" Definitely what a human would write 😄
Reminds me of a story I heard a few years back about a chatbot that beat the Turing test. They built the bot to act like a teenager whose first language wasn't English and who used a lot of slang and profanity.
Also since your last upload I got a job where I leverage an LLM API so imma need you to upload a lot more, ty 👏🙏
Congratulations! And yes, there are more videos in the pipeline 😅
I've just come from the Nebula version of this video after about 2:00 to comment on the self-censorship. I understand that censoring is necessary for the algorithm here on YouTube (and producing a second uncensored version for Nebula for such a minor thing might not be worth it), but the frequency used in this video was really quite jarring. I had to go back and replay that section to make sure it was actually in the video and not something wrong with my PC/audio setup, and that I hadn't missed something.
Not affecting the content, but it's rare that I notice this kind of thing, so there must be fairly significant microphone/sound issues in the video (the room echo comes and goes).
Moved to a different filming location in my apartment since my last upload and there’s a lot more street noise, so I think the studio sound correction is having trouble with a few of the clips? I’ll have to play around with the audio setup.
AI text detectors do not work. Period. Full stop.
This should come as no surprise to anyone who understands how LLMs work. Every model will have a different general writing style based on its overall training data, so you'd really need a different detection model specific to every text generation model in the wild, and you'd also have to keep your detection models up-to-date as those generation models are updated -- often without notice, as in the case of OpenAI. You would also need to account for varying parameters like temperature, presence/frequency penalty, nucleus sampling, style instructions, etc. that can dramatically change output.
On top of that, models often have different writing patterns based on what they're talking about, or the previous context in the conversation. If you use a very uncommon word in an impactful way, you can often notice an immediate shift in tone and style.
This doesn't even address the fact that people who interact with a specific LLM (like ChatGPT) a lot will unconsciously absorb that writing style, which can greatly increase the rate of false positives.
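To illustrate the point about sampling parameters: here is a minimal sketch of temperature scaling plus nucleus (top-p) sampling, the two knobs mentioned above. The function names and toy logits are my own for illustration; real inference stacks implement this over full vocabulary-sized tensors, but the mechanism is the same, which is why the same model can produce wildly different-looking text.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_p=1.0, rng=None):
    """Sample a token index from raw logits using temperature scaling
    and nucleus (top-p) filtering.

    Toy sketch: real implementations operate on vocabulary-sized
    tensors, but the mechanics are identical."""
    rng = rng or random.Random(0)
    # Temperature: divide logits before the softmax.
    # Low temperature sharpens the distribution; high temperature flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus (top-p) sampling: keep only the smallest set of tokens
    # whose cumulative probability reaches top_p, then renormalize.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    kept_mass = sum(probs[i] for i in kept)
    # Draw from the renormalized nucleus.
    r = rng.random() * kept_mass
    acc = 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]
```

With a very low temperature or a small top-p, the function collapses toward always picking the most likely token; at temperature 1.0 and top-p 1.0 it samples from the full distribution. Any detector trained on text generated with one setting sees a different statistical fingerprint when the setting changes.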
very interesting video! small criticism is that the audio was not great for some of the clips
"I hope this comment finds you well, let's delve into the arguments presented..."
Back when ChatGPT was in the news, I remember hearing stories about a college student (Edward Tian?) making an “AI writing detector” tool. This tool itself probably used an LLM too, but I’m not sure.
Not long after, I heard that it was possible to trick detector tools by taking the flagged output and asking the same LLM to rewrite it.
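For context on how such a detector scores text: the early tools were reported to rely on perplexity, i.e. how "surprising" the text is to a language model, with low perplexity flagged as machine-like. Here is a toy sketch of that scoring idea using an add-one-smoothed unigram model; the function and corpus are illustrative stand-ins, since real detectors use a full neural LM, and this simplification is exactly why a rewrite by the same model can shift the score enough to evade detection.

```python
import math
from collections import Counter

def perplexity(text, reference_counts, vocab_size):
    """Toy perplexity score: exponentiated average negative
    log-probability of each word under an add-one-smoothed unigram
    model built from reference_counts.

    Lower perplexity = less surprising = more likely to be flagged
    as machine-generated by a perplexity-based detector."""
    total = sum(reference_counts.values())
    words = text.lower().split()
    nll = 0.0
    for w in words:
        # Laplace (add-one) smoothing so unseen words get nonzero probability.
        p = (reference_counts[w] + 1) / (total + vocab_size)
        nll -= math.log(p)
    return math.exp(nll / len(words))
```

Text built from common, predictable words scores a lower perplexity than text full of words the reference model has never seen, which is also why distinctive human writing (or a clever rewrite prompt) can swing the score in either direction.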
The AI humanizers suck. The detectors suck. Quite often, my self-written paragraphs are detected as AI. So, I no longer care. I try to produce the best content with the help of AI when necessary; that's the future!
Interesting, thank you!
I cannot imagine using AI to help me write anything. In part this is generational. I am over 60, so I grew up before the concept of AI was anything more than a gimmick in science fiction. Aside from that, I am highly educated (two M.A.s and a Ph.D.), so I am just way too smart and accomplished to need such "training wheels" to help me with writing. (Yes, call me snooty if you want.) I feel sorry for the human race as more and more of its members lose the ability to do basic thinking for themselves and become addicted to electronic assistance. AI has some useful applications in science, but it is shameful to see it being used for basic human functions.
I'm so thankful I finished college over a decade before these plagiarism machines existed, so I never had to fight the uphill battle to prove my innocence.
Did you stop posting videos for a while? I could not see any of your content for the past year
@@skanderbegvictor6487 there hasn't been a video for about 9 months
@@benjola2 I see, well I am happy to see new content now
Yep! Took some time to focus on PhD stuff (and also had an existential crisis about the content I was making lol) but now I'm back!
@@JordanHarrod Glad to hear it! Love your content.