AI is DESTROYING history

  • Published 6 Jan 2025

COMMENTS • 69

  • @stuBHV · 4 months ago +29

    I've had Gemini and ChatGPT outright deny specific historical events I was asking about. No clue why. I'm not a historian, but I feel like if I were, I would not trust AI with anything I was trying to do.

    • @kitefan1 · 4 months ago

      Isn't AI "programmed" using the internet? Much of the content available is post-1990 or so. There is not as much history as current stuff, and there is less non-commercial info than there was at the beginning. There is also the multi-name issue. In the US we have "The Civil War", "The War of Northern Aggression", "War of the Rebellion", "Great Rebellion", and the "War for Southern Independence". They are all the same war, which lasted about 4 years. There are also other terms, such as Antebellum for before the war. I wouldn't trust Wikipedia for anything serious except as a starting point, or for an overview. Yes, in general the consensus is correct. When I was a child the cartoon character Popeye ate spinach for strength because spinach was the vegetable highest in iron. That idea, a tenfold overestimate based on a decimal-point error, was a commonly known "fact" for 70 years.

    • @stuBHV · 4 months ago

      @@kitefan1 The short answer is, no, it's not 'programmed' by the internet. I cannot post links in comments, but for reference, Google "AI hallucinations." Generative AI fabricates academic references. Or look up "Dr. OpenAI Lied to Me" by Jeremy Faust, MD, where he details his strange experiences. There are some very significant "gremlins" in AI.
      An attitude of caution towards Wikipedia is absolutely correct. The AI does not know everything and cannot always sift through all the data; but also, programmers have imposed boundaries on how AI can respond. This has all kinds of butterfly-effects.
      It's one thing for the AI to reply "I can't find an answer" and a very different thing for the AI to create a completely false reply.

  • @Rhombohedral · 4 months ago +34

    I block every channel suggested by youtube showing cheap AI thumbnails

    • @Realhuman-w8m · 4 months ago +5

      Especially if it's an art-related channel or a BIG channel; the latter have the money to pay someone. Come on!

    • @Rhombohedral · 4 months ago +2

      @@Realhuman-w8m and as this video was about, there are historical drawings, paintings, etc... and then you get that AI trash instead

    • @francisco444 · 4 months ago +1

      Such hate... It's sad to have people come on so strong and discriminate against something so harmless 😢

    • @Rhombohedral · 4 months ago +5

      @@francisco444 WUT? I just don't want to watch trash content spoken by an AI bot
      get a life

    • @acacaczawoodle · 4 months ago +2

      Same

  • @cocoquake · 4 months ago +7

    That one dislike was from angry ChatGPT

  • @TealCheetah · 4 months ago +12

    In a craft group, a person didn't understand that ChatGPT probably didn't give her an accurate candle-making recipe. Face palm.

  • @kry9342 · 4 months ago +12

    I really enjoyed your commentary on this, thank you for the great content!

  • @kerriemckinstry-jett8625 · 4 months ago +12

    People aren't very discerning when it comes to any form of medium. Some years ago, one of my students (in college) thought the movie "The Martian", the one based on the work of *fiction* by Andy Weir, was a real historic event.
    AI has a lot of amazing uses, like projects where artifacts are too fragile for conventional research methods... so cool. Or projects where there's just so much data that they're searching for anomalies in the graphs of millions of separate objects... so cool. Its abuses are pretty bad, though.
    That being said, if someone did train a historical figure chatbot on all the known writings and reputable research on the figure, it might be a cool experience to "chat" with it. As it is, they're trained on extremely limited & not always reliable resources... No.
    Edit: slight typo fix

  • @Blackdiamondprod. · 4 months ago +2

    If you think AI is destroying history, just wait until you hear about state funded public schools.

  • @Pazliacci · 4 months ago +12

    Like, an experience I had that was tangential to my hatred of the use of "AI" in the field of history was when I got an advert for CHRISTIANITY with an AI-generated image of the Sermon on the Mount. YOU HAD THE ENTIRETY OF ART HISTORY, from mosaics, fresco, Renaissance, Rococo, Baroque, muralism, modern- bloody CHILDREN'S BOOK ILLUSTRATORS
    yet you had to generate a weird AI image where Jesus' hand is detached from his wrist, and the people in the background are blurry messes, with heads and limbs fused together, entirely devoid of any religious or spiritual iconography; instead you now just have a pulpy Jesus making a hand gesture to indicate that he is speaking, but not on what, or who he is-
    and I am a pagan infidel 😭
    Ultimately, though, all of the above are not flawless or without their historical criticisms. One thing I often see is AI art of Charlemagne based off the 15th-century portrait, which, as any historian would be able to tell you, is an early-modern depiction of an early-medieval ruler who in fact almost certainly looked nothing like that. But that one is venerated and holds much more nationalistic power and legitimizing iconography, which I ultimately also see in a lot of weird crypto-fascist AI nerd circles: AI gives them the power to create hyper-masculine, white, normative visual interpretations of the past.

  • @LedgerAndLace · 4 months ago +8

    I, too, prefer Queen Elizabeth to be less vampire-y! I feel about AI like I do artificial sweeteners: it's ARTIFICIAL. I get a visceral response when I look at AI-generated "nature" images or animals. Rick Beato did a video about AI-generated music. Nick Cave responded to a question about an AI-generated "Nick Cave" song that was very thoughtful. AI has infiltrated so many aspects of creativity. I'm concerned that younger generations growing up with this won't be able to discern what's real and what's not--especially when it comes to history and facts.

  • @darnedgosh2274 · 4 months ago +1

    So glad you were recommended! Great video- great take. Subbed for more.

  • @paulhiggins5165 · 4 months ago +8

    I saw a guy using AI to 'enhance' some photographs of the American Civil War period- which in reality involved the AI inventing detail to increase the apparent fidelity of the images. It seemed to entirely escape him that the end result was not a clearer view of an actual historical scene but a fictionalised image that was based on the original photograph but now contained entirely made-up stuff.
    So what happens when these pseudo-historical images begin to circulate online? At some point even knowing whether the image you are seeing is actual history or an AI-generated fake will become problematic.
    I think the problem arises from the fact that we have been trained to see computers as a source of accurate and precise information- but generative AI does not work this way, because it defines a 'correct' response as one that is plausible and apposite, with zero concern for the accuracy or truth of that response. Thus, if asked a question to which it does not have a ready answer from its training data, it will simply manufacture an answer that seems to make sense but in reality may have no factual basis.
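That distinction between a plausible continuation and a true statement can be illustrated with a toy next-word model. This is a deliberately minimal sketch: the corpus and the sentences in it are invented for illustration, and no production LLM works at this scale or with this algorithm, but the absence of any truth check is the same in spirit:

```python
import random
from collections import defaultdict

# Invented "training corpus": three plausible-sounding sentences about history.
corpus = (
    "the battle was fought in 1863 . "
    "the battle was fought in 1864 . "
    "the treaty was signed in 1863 ."
).split()

# Record which words follow which word: pure pattern statistics, no facts.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=6, seed=0):
    """Emit a fluent continuation; nothing in this loop checks truth."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))  # sample a statistically plausible next word
    return " ".join(out)

print(generate("the"))  # fluent and plausible, not necessarily true
```

Because the model only asks "what usually follows this word?", it can splice its source sentences into a fluent claim such as "the treaty was fought in ..." that appears nowhere in the data; every step is locally plausible, and no step consults a fact.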

  • @memegazer · 4 months ago +1

    As long as the text is legible, a cell phone camera is all you need to digitize it.

  • @Ria-mf1eu · 4 months ago +1

    We have a saying in statistics: garbage in = garbage out. There is a place for AI in research (in most fields) when closely supervised and trained by humans to do things that are practically beyond the scope of human abilities (the burned scroll is a great example). But I'm extremely skeptical of anything that purports to be general purpose.
    Also: thank you so much for your content, you have prompted me to engage much more critically and actively when I visit a museum.

  • @Blackdiamondprod. · 4 months ago

    14:50 why?

  • @galaxisinfernalis · 4 months ago +3

    Great video!😺

  • @Charlie-Em · 4 months ago +1

    Damn professor, I need to sign up for your class😍

  • @tritonjay9871 · 4 months ago

    Being able to pump out decent quality copyright-free images can be very tempting. We shouldn't underestimate that. Right now, people are kinda paranoid about the correct legal usage of the elements they can use in their videos. If you need, say, a high-resolution picture of a Greek ruin it's much easier to just have some AI image generator make one than digging around for what you need and much cheaper than paying stock media brokers like Adobe. Thank god for Pixabay and Wikimedia Commons.

  • @Mecharnie_Dobbs · 4 months ago +2

    12:57 I think most people would lie about that. Including Osama bin Laden.

  • @ioanaburlacu3069 · 4 months ago +1

    It is a shame AI is regarded as the end all be all, especially in recreational use and academics. While it is useful to an extent, it destroys the human experience and drive to create, preserve, and engage in the beloved artifacts humanity has left for centuries. Excellent video!

  • @Sunnydionysus · 4 months ago +2

    Fun video!

  • @bethliebman8169 · 4 months ago +4

    I've been saying AI is not ready for prime time. My main interface with AI is the BING search engine--it can do rather well answering questions. I was reading Robert Penn Warren's All the King's Men, and there was vocabulary I was unfamiliar with. It was able to give me the information I needed when I put in the book and page number. I felt this was most likely accurate. However, most other uses of AI are hilarious (like Bernadette Banner asking AI to create figures wearing historical garb, which came out very distorted, with extra fingers) or dangerous (self-driving vehicles).

    • @bethliebman8169 · 4 months ago

      We need always be skeptical and alert, not just with AI, but with life in general. Always question!

    • @francisco444 · 4 months ago

      If Tyler Cowen uses AI for research, I would too

  • @BlackHattie · 4 months ago

    The water issue. The freon and anvil-to-anvil issue, the separation of thoughts and systems. Good to be dead...

  • @thespacecowboy420 · 4 months ago +4

    There is no such thing as AI.

    • @carultch · 4 months ago +4

      AI doesn't exist, but all it takes is a slick salesman who can convince your boss it does, and it will ruin everything anyway.

  • @etunimenisukunimeni1302 · 4 months ago +1

    I'm a hopeful AI nerd, so my opinions on this are obviously biased. I find it incredibly insightful and helpful to see what people more interested in other things are saying about AI. This video did _not_ disappoint on that front, thanks! You have so many good points I find it difficult to write a concise comment.
    The apps you talk about in the video indeed seem like they just don't need to exist right now. There is no purpose other than to try to tap into the massive money stream flowing all around AI right now. Hopeful as I am, I'd like to think that some of the hypothesised use cases and benefits could materialise some day, but it's not going to happen tomorrow. It needs massive advancements in both data digitisation (reducing the woefully underfunded human labour) and data analysis in order to work, but I see that as a possibility. Feel free to disagree, it's just my gut feeling.
    I really loved your remark on how carbon copying recently deceased people can be worse than useless, and actually cause harm and grief beyond being just misinformative. While I think it's probably not a huge problem, it's a problem that doesn't come to mind if you're just effing around with these imagined characters.
    That said, I personally find LLMs extremely useful when starting to learn about a new subject. They have approximate knowledge about almost everything, and my attitude toward that information is like I've heard it from a friend who sounds like they know what they're talking about. Just like with casual chats between friends, you can learn something, but have to be wary about what you hear. Often it's correct, but you only really learn things by actually doing and reading yourself!
    Thanks again for the video, it was really interesting!

  • @matheussanthiago9685 · 4 months ago

    At this point WHAT isn't AI ruining?

  • @chocolatecookie8571 · 4 months ago

    Artificial Insanity.

  • @BeenanPeenan · 4 months ago +3

    Just a little algorithm comment

  • @realAustenFreeze · 4 months ago

    Loosen up just a smidge, but I'm in. Nice to meet you; I'm rootin' for ya, girl

  • @shatteredprism · 4 months ago +1

    I like this video. /g

  • @pollywops9242 · 4 months ago

    Destruction is required so as not to stagnate or revert

    • @lauryburr7044 · 4 months ago

      Aha, a fan of Nietzsche perhaps! Artificial Intelligence in the role of Nietzsche's (or Diogenes') madman, telling us about the death of God? But here's the problem - Nietzsche, via his madman and also directly, was telling us that things have changed, so we need to change. AI, by contrast, is basically reflecting/averaging/regurgitating what we've already said, thereby actually accelerating the process of stultification under the guise of being bright, shiny and new! Welcome to AI, the new opium of the masses, replacing religion and Marxism!!

    • @lauryburr7044 · 4 months ago

      Aha, do I detect another Nietzsche fan? Artificial Intelligence as the Mark III madman (Nietzsche's was not the original, but Mark II - the original was that of Diogenes, well over 2000 years previously). But it's not really a good analogy I'm drawing - because Nietzsche's madman, proclaiming the "death of God", was saying in essence "Hey, guys, wake up - the world has changed and so we need to change!". AI, by contrast - at least the AI that we talk to and ask to write essays for us (no, I don't, and I advise others not to - it makes mistakes) and so on - is merely regurgitating a sort of "average" of what's already been written on the internet. It's actually stultifying - it's a rear-view mirror (as said by Shannon Vallor in "The AI Mirror"). So it's not really like Nietzsche at all - it's more like Pangloss in Voltaire's Candide, claiming we live in "the best of all possible worlds". It's as though AI is in danger of becoming the Mark III opium of the masses, replacing religion and Marxism!

  • @NirvanaFan5000 · 4 months ago +1

    I really enjoyed this video. I wanted to offer a bit of a counter-perspective. Mostly, that I think your complaints are generally valid but probably limited to the here and now.
    1. Helping historians: I can imagine AI robots that can carefully handle and scan historical books and artifacts, greatly increasing the pace at which these items are digitized. Once digitized, they can be available on the web. In other words, this will *reduce* how often historians need to go to the library or other host centers. In addition to scanning the material, AI will be able to analyze and categorize materials. That will be extremely helpful. Imagine asking an AI to scan an entire collection for all references to a concept (not just specific words), or to details in artifact design. That could increase the pace and ease of research tremendously. So while these articles haven't really spelled out how this might happen, or *when* this might happen, I think there's clearly a role for AI to help with actual research.
    2. Poorly executed AI historical personalities:
    a. bad pics - agreed. but easily fixable
    b. AI trained on stolen data - I agree this is an issue, but it'll likely be resolved in the next few years, either by researchers finding ways to train AIs on smaller (legally owned) data sets or by working out a copyright solution (e.g. clearer laws; development of payment systems; etc). So while the copyright issues ARE issues, they are unlikely to be long-term issues
    c. The AIs are dumb - again, short-term issue.
    d. (Mis)Representing dead, actual people - I definitely see the issue you raised in terms of right to one's own life, image, reputation, etc, but I think these issues are solvable in much the same way they are addressed for non-AI purposes, such as books and museum exhibits. I think that once the AI personalities are smarter and more character consistent, that they're not necessarily that different from many books or exhibits about the same people - who may still have living family. So again, I see this as a valid issue, but not a serious obstacle.
    (That said, I do think there's an interesting question of "why portray these people in a first person way when we don't have first person documents of their thoughts etc". However, I personally don't find it terribly persuasive. That is, I think the educational gains from the format would outweigh those concerns. [And honestly, 'speaking' to historical people this way feels like a genuinely fun and engaging way to learn. Reminds me of Star Trek.] At least, that's my first impression of the issue.)
    anyways, once again, thank you for the great video, and thank you for letting me share my perspective

    • @lauryburr7044 · 4 months ago

      I'm currently looking at AI as part of my philosophy degree course. To be honest I don't share your theory that the current problems are short-term issues. The core issue is that, in my opinion, AI does not *understand* - it proceeds essentially by "pattern matching", and when that process results in wrong information, it is often unable to realise/identify its error, and I don't see any way of that issue being resolved. Yes, if a specific error (or, possibly, type of error) is found to recur, some sort of programming "tweak" might be applied - such as altering a few of the billions of parameters in current LLMs, or maybe adding some very specific "if..then..else" logic - but one can never be too sure whether those 'improvements' might result in the generation of other errors, as no one can understand how every parameter interacts with all the others. Incidentally, I feel that this same problem is inherent in graphical apps as well as text apps such as chatbots because, again, the software does not *understand* the content of the image - it's merely a collection of pixels where certain pixels 'relate' to certain other pixels, for unknown reasons. Thus, a system devised to distinguish between photos of wolves and those of huskies got some wrong because it studied the whole photo, not just the animal, thus taking into account its environment (which was irrelevant: photographing a husky in a snowfield does not, in fact, make it a wolf!). In other words, I believe that the LLM concept is fundamentally flawed.
      Regarding (mis)representing humans, I suspect that the more powerful and convincing such bots become, the more dangerous they become. Simply, they WILL be used for illegal and/or immoral purposes (in the context of living characters) while recreations purporting to portray long-dead people, while acceptable (maybe/probably) in fictional contexts, would be of very dubious value as pseudo-educational/pseudo-intellectual tools. As next-next-generation Star Trek yes - education, given the risk of (accidental or externally manipulated) false information, absolutely no. Fun does not trump (weak political pun maybe intended...) accuracy. It's one thing to say "we believe that Mr X thought...", quite another to show a video purporting to be Mr X saying "I believe that..." - with or without footnotes pointing out that the portrayed information might be in error, many will believe it. I'm pretty sure that psychologists would back that up.
      Using more selective data sets sounds like a good plan, but the development of LLMs has shown that size really does matter, at least for the "training data" - and that already opens the door wide to the sorts of errors that we see. Using a restricted dataset as its information base (as opposed to its language base) raises one enormous question: who chooses which documents to include or exclude? What about the problem that a lot of the most academically reliable text is in the form of academic papers behind paywalls? It'd need a vast number of man-hours to make those choices manually - or would one rely on algorithms to select them, and who would define those algorithms, and how?

    • @NirvanaFan5000 · 4 months ago

      @@lauryburr7044
      I appreciate the well thought out response. Here's my reaction:
      1. Yes, current AI algorithms have deep flaws and limitations. However, they will still have useful roles in research, even if it's just pattern recognition. I also believe that we'll find new algorithms and have other breakthroughs in the future.
      2. There are definitely dangers in the existence of the AI persona tech, but regarding its use for education, I don't see it as that problematic compared to what already exists, and I think the benefits of it will cause society to use it, even if you and I agree that it has flaws.
      3. I've seen reports about AIs making much better use of smaller but more detailed training data. I'm not sure that I understood your other criticisms here. You seem to be asking who decides which information the program uses. As for which information is included in its base, I assume it would be similar to books now: different people write books about history and include different information. Some books are endorsed by trustworthy institutions and generally have more respect and popularity; some... not so much.
      As for paywalls - again, that may go to humans to work out individually (though I envision new approaches to copyright over the next decade that provide broad access to information bases). That's not to say humans won't be involved in research. They will still need to do things like define parameters or set a specific goal. But the AI can do the grunt work of examining texts and artifacts for user-defined features.
      anyways, thanks again for thought provoking comment

    • @lauryburr7044 · 3 months ago

      @@NirvanaFan5000
      Hi - thanks in return for your thoughtful reply. sorry it's taken me so long to respond! Taking your paragraphs in order here are my thoughts:
      1 There are, I believe, many areas where AI is already doing great things, and I guess that they also have some form of algorithmic base - for example, detecting early warnings of cancer before either medics or patients are aware of problems. My concern is with those approaches that use the LLM-type logic (and in a sense pattern recognition, I believe, follows a logic that at the conceptual level is similar - LLMs look for relationships between each word and those words 'near' it in the text, while for pattern recognition replace "word" with "pixel"). The fundamental problem, as I see it, is that once one connection, for whatever reason, takes one "off track" there can often be no way back - a bit like chaos theory, or analogous to falling through a wormhole into another universe. I wonder how one could build in checks and balances to detect such "quantum leaps" and get back on track (LOL, I wonder whether ChatGPT has an answer to that?)
      2 Dare I ask which nation's education system you're using as a baseline for comparison? I think there's a fundamental, conceptual problem with having an AI-driven educational system if its baseline advice is to use AI for learning! This leads to a phenomenon which I call the oozlum bird phenomenon - if you aren't familiar with this term, it's an old English one (I've no idea how old - older than me, certainly (75)) - the oozlum bird is posited as a bird that flies around in ever-decreasing circles until it disappears into its own, let's say, "nether regions"! And yes, trials of LLM-driven experiments have emulated this problematic avian: output from such a model (I've no idea of the volumes involved) was fed back in as a new set of training data and the process iterated; as I recall, the (expert) speaker said that by the fifth (or sixth?) iteration the output was total gibberish - presumably a build-up of the random "quantum leaps" leading down too many wormholes, a sort of multiplier effect. And there's a danger of just this happening as more and more internet "information" is based on AI regurgitations of itself. Also, I assume that in an AI-run system, students will by default get their essays written by AI - worryingly, there have already been studies suggesting a downturn in IQ among those using AI "brains" rather than their own. Again I have visions akin to the E. M. Forster short story "The Machine Stops" (ca. 1905, iirc). By contrast, what would AI's reaction be to an essay containing a genuine new idea, something not in its dataset? Something akin to Dirac's quantum science formulae - would AI's response be a simple rejection, akin to Einstein's response to the uncertainty implications of Dirac ("God doesn't play dice")?
      3 My understanding of small-dataset approaches is that they still need "base-level" training on as much data as possible, but use the smaller, more relevant datasets for specific purposes. If one assumes (as many people do, though I'm a bit of a sceptic/skeptic) that we can only "think" because we have language, relying ONLY on small datasets would be like going into a science lab without having language to frame our decisions on what to do - and AI needs those LLMs (not SLMs - small language models!) to learn how to communicate reasonably well.
      Hmm, on paywalls at least two of the large academic paper archival sites have hit the headlines recently because they decided - without reference to the writers of the articles etc in their care - to allow builders of LLMs to access their databases. Potentially a solution to the issue of data reliability etc, but wearing my philosophy-of-ethics hat I have to question the morality of such a move. Yes, I agree that copyright law will need to be reframed but NOT to favour LLM builders at the expense of the knowledge creators: recall that the original copyright laws were introduced to protect creative talent from having their work "stolen" and mass-produced thanks to the economies of the (then) relatively new printing industry.
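The iterated-retraining collapse described above (model output fed back in as new training data until the output degrades) can be mimicked at toy scale. This is an illustrative sketch with an invented corpus and a toy next-word model, not a reproduction of the experiment the commenter recalls:

```python
import random
from collections import defaultdict

def train(tokens):
    """Build a toy next-word table from a token list."""
    follows = defaultdict(list)
    for a, b in zip(tokens, tokens[1:]):
        follows[a].append(b)
    return follows

def sample(follows, start, n, rng):
    """Generate up to n continuation tokens from the table."""
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return out

rng = random.Random(42)
# Invented seed corpus; any small text shows the same effect.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat saw the dog .").split()

vocab_sizes = []
for generation in range(6):
    vocab_sizes.append(len(set(corpus)))
    model = train(corpus)
    # The next "generation" is trained purely on this generation's own output.
    corpus = sample(model, "the", len(corpus), rng)

# Vocabulary size per generation; later values never exceed the first,
# and rare words tend to drop out as the model eats its own output.
print(vocab_sizes)
```

Each pass can only re-emit words the previous pass produced, so diversity is non-increasing and the text drifts toward repetitive loops; the gibberish reported for real LLM experiments is a far richer failure, but the feedback mechanism is the same shape.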

    • @NirvanaFan5000 · 3 months ago

      @@lauryburr7044 Thoughtful reply. I only have time for a quick response:
      1. Logical concern. I think though that future AI will solve this. (like, in the next 2 years.)
      2. Similarly, I think we'll solve the AI gibberish issues. And what I envision future education looking like is 1 teacher who sometimes addresses the whole class but who often acts more as a facilitator when students get stuck working with their AI tutor. The teacher will monitor progress etc.
      3. I think companies will either pay for data or that copyright law will evolve. I agree that we don't want to harm creators, but I think there's a lot of room to figure out a negotiation.
      In short, as I wrote before, I think these are all very valid issues but I also think they'll be solved soon.

    • @lauryburr7044 · 2 months ago

      @@NirvanaFan5000 Thanks. Re: 1, I'm coming to the conclusion that the issues are indeed solvable, but I'm not so optimistic as you re: the timescale. My main concern is still our lack of knowledge of what's happening "under the hood" and what I perceive as a lack of willingness to address that. (And yes, I am a bit of a cynic sometimes!)
      Re: 2. Again, once people look "under the hood" I agree there's a fighting chance of resolving the gibberish issues. Yes, the classes would need staff present as facilitators, but my core concern is over the online content and how much of a role the staff will have in challenging what "the computer says" (even some AI apps - e.g. ChatGPT - admit that they can get things wrong: we must never lose sight of that.) I'm not sure whether I've raised this issue before, but I'm worried about the potential acceptability of exams permitting AI-generated answers: this leads to a situation where we are only training people to ask the right questions - AI can only answer questions on the basis of what other people have said; in other words, if we want NEW answers, NEW theories, that'll need humans (Galileo, Newton, Einstein, Dirac ...) and if we don't train future generations to think, then I think we're screwed!
      Re: 3. Agreed.

  • @Avaloran · 4 months ago

    You might want to refrain from click bait titles

    • @lauryburr7044 · 3 months ago +1

      This, in turn, looks to be a cliché response. There's some serious, meaningful discussion here, as this is a development that has enormous implications for humanity's future.

    • @Avaloran · 3 months ago

      @@lauryburr7044 How does my comment relate to the content of the video to you?

    • @lauryburr7044 · 2 months ago

      @@Avaloran Let me explain why I responded as I did to your first comment ("You might want to refrain from click-bait titles"). This is how I "got there":
      1 As I'm seeing this page, it appears that your comment was the very first.
      2 Therefore, your comment is addressed to Christeah, who posted the video (and is presumably the speaker).
      3 I see comments labelling posts on various sites as 'clickbait' and, very often, they appear valid. But the structure of a youtube page such as this doesn't lend itself to the tedious, frustrating chain of "let's click just once more and maybe I'll see what the opening post suggested..." and "click" and "oh. Let's click just once more..." and repeated ad infinitum.
      4 Had you wanted to be helpful, suggesting a better title might have helped Christeah. To be honest, though, I found the title appropriate, as she does then go on to explain why she used it. And therein is the core of my reason for calling your comment a cliché: "clickbait" posts don't deliver what they make you expect (but lead you to make lots of clicks, see [or ignore!] lots of ads, and get nowhere), whereas this does deliver what the title encourages us to expect. A clickbait title has to promise something, but not all titles that promise are clickbait.

    • @Avaloran · 2 months ago

      @@lauryburr7044 I respectfully disagree

    • @lauryburr7044 · 2 months ago

      @@Avaloran Fair enough, but could you be a bit more specific? Were you suggesting that Christeah could have chosen a better title, and if so, what would you suggest?