Artificial Intelligence vs. Mathematics

  • Published Sep 27, 2024

COMMENTS • 114

  • @pallasproserpina4118 · 9 months ago · +27

    7:56 yes, numbers that are both prime and perfect squares *are* pretty hard to find. i wonder why that might be
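And indeed there is a one-line reason behind the joke: a perfect square k² with k > 1 always has k as a proper divisor, so it can never be prime. A quick brute-force check (illustrative only):

```python
def is_prime(n):
    """Trial-division primality check."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Search for numbers that are both prime and perfect squares.
hits = [k * k for k in range(1, 1000) if is_prime(k * k)]
print(hits)  # → [] — a square k*k with k > 1 always has k as a divisor
```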

  • @UlfFormynder · 9 months ago · +22

    The ChatGPT snippets remind me of the fact core from Portal 2. They become significantly more amusing if you read them in its voice.

    • @ennayanne · 9 months ago · +4

      Oh my god, I've never noticed before but you're totally right

  • @emilyrln · 9 months ago · +19

    Sharing this with my dad, who is a teacher's aide and had a student who was interested in the 4-color map theorem!

    • @JNCressey · 9 months ago · +3

      Note that it only applies if all the regions are like simple connected blobs. If you want to colour a country's exclaves with the same colour, then you can make maps that require arbitrarily many colours.
      Water is also a region that isn't simple if you want all lakes to be the same colour.

    • @ComboClass · 9 months ago · +6

      In the episode I clarified “neighboring regions” so exclaves would be counted as separate things, and yeah lakes would also have to be counted as the same type of region (like if you are coloring “cities”, each disconnected exclave or lake would be considered its own city) which is why I think of it as “neighboring regions” and not “regions with the same name”
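The "neighboring regions get different colors" constraint being discussed can be sketched with a simple greedy coloring. Greedy coloring is not guaranteed to use the minimum number of colors (the four-color theorem's guarantee for planar maps needs much more machinery); this toy example just shows the constraint itself:

```python
def greedy_coloring(adjacency):
    """Assign each region the smallest color index not already used by a
    colored neighbor. Illustrative only; not guaranteed optimal."""
    colors = {}
    for node in adjacency:
        used = {colors[nbr] for nbr in adjacency[node] if nbr in colors}
        c = 0
        while c in used:
            c += 1
        colors[node] = c
    return colors

# Toy "map": four mutually adjacent regions force four distinct colors.
adjacency = {
    "A": ["B", "C", "D"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["A", "B", "C"],
}
coloring = greedy_coloring(adjacency)
print(coloring)
```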

  • @docopoper · 9 months ago · +45

    Sometimes, as an AI researcher, I worry about the way the world is going with AI. It makes me happy to be reminded how small-minded it is to assume the digital world is more important than it is. It's good to see that there are people like you who are great at maths and much prefer to be out in nature.

  • @organMike · 9 months ago · +18

    You asked it for interesting facts about the number one, and it didn't even mention that it's the loneliest number you could ever do?

    • @emilyrln · 9 months ago · +10

      Two can be as bad as one; it's the loneliest number since the number one.

  • @Nachiebree · 9 months ago · +8

    I don't know if it's the first paper published with use of a computer but this is one I like:
    COUNTEREXAMPLE TO EULER'S CONJECTURE
    ON SUMS OF LIKE POWERS
    BY L. J. LANDER AND T. R. PARKIN
    Communicated by J. D. Swift, June 27, 1966
    A direct search on the CDC 6600 yielded
    27^5 + 84^5 + 110^5 + 133^5 = 144^5
    as the smallest instance in which four fifth powers sum to a fifth
    power. This is a counterexample to a conjecture by Euler [1] that at
    least n nth powers are required to sum to an nth power, n>2.
    (Yes, this is the whole paper.)
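The identity in that one-page paper is easy to verify directly today, no CDC 6600 required:

```python
# Lander & Parkin's 1966 counterexample to Euler's conjecture, checked directly:
lhs = 27**5 + 84**5 + 110**5 + 133**5
rhs = 144**5
print(lhs, rhs, lhs == rhs)  # → 61917364224 61917364224 True
```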

    • @ComboClass · 9 months ago · +6

      Yeah I mentioned that paper in a previous episode, I love it haha.

  • @rlstine4982 · 9 months ago · +10

    This channel is totally underrated.

  • @josephrissler9847 · 9 months ago · +79

    I don't think the ChatGPT responses are a result of minimizing computation time. ChatGPT is built to provide correct language, and doesn't build much in the way of a mathematical model for itself. For what ChatGPT is designed to do, everything it produced was (grammatically) correct. Any mathematical processing that resulted from this was just ChatGPT parroting its training data, and perhaps recognizing some "grammatical" relations between numbers.

    • @landsgevaer · 9 months ago

      Just like humans tend to do, you mean...? 🤔

    • @ComboClass · 9 months ago · +16

      While it's decent at grammar, it's very bad at logic. There were times (I don't think I showed these in the episode but they are saved on livestreams on my bonus Domotro channel) when it would say stuff along the lines of "...therefore, [so-and-so] is true" and then later in the same response say "...therefore, [that exact same so-and-so] is not true".

    • @duffdl · 9 months ago · +8

      @@ComboClass The underlying mathematics of generative transformers is largely probabilistic, so it is simply computing the most likely token to follow the previous ones, given some additional context plus preprocessing such as part-of-speech tagging, lemmatization, word stemming, removing stop words, etc.
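As a rough illustration of that "most likely continuation" idea, here is a toy bigram model: it just picks the most frequent next word seen in its tiny training text. This is a stand-in for the probabilistic-continuation concept only, nothing like a transformer's actual mechanics:

```python
from collections import Counter, defaultdict

# Toy training text for the bigram counts.
corpus = "one plus one is two . two plus two is four .".split()

# Count which word follows which.
follows = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    follows[w][nxt] += 1

def continue_text(word, steps=3):
    """Greedily extend the text with the most frequent observed successor."""
    out = [word]
    for _ in range(steps):
        nxt = follows[out[-1]].most_common(1)
        if not nxt:
            break
        out.append(nxt[0][0])
    return " ".join(out)

print(continue_text("one"))
```

The output is grammatical-looking continuation with no arithmetic model behind it, which is the point being made above.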

    • @ashheilborn · 9 months ago · +3

      The trouble with ChatGPT is that it's not even good at grammar. It's literally just spouting nonsense that would probabilistically follow from the question asked, using its training data, which is basically the entire Internet, used without permission.
      So, it's not using logic at all. It doesn't have grammar programmed in either. It doesn't even have real words: it has tokens, which are word parts, usually a few letters at a time, sometimes parts of two words.

    • @landsgevaer · 9 months ago · +1

      @@ashheilborn Not sure what ChatGPT you use, but it is at least very decent at grammar.
      Concerning it being a statistical model that spouts tokens: if I posit that that is also what your brain does, how would you argue against that?

  • @CrypticConsole · 9 months ago · +3

    The reason GPT is bad at math is that the integer tokenization you get when using BPE is incredibly bad. The model is basically doing the entire computation and then reversing the result to show you, while also seeing the numbers like
    |546|78|+|234|5=|
    where each token boundary is a pipe symbol in that expression. This makes it virtually impossible to learn math. If you take a model like LLaMA, with much better tokenization of numbers, you get a massively improved result on math benchmarks. There has been research in this space, such as WizardMath.
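To make the pipe notation above concrete, here is a toy greedy longest-match segmenter over a hypothetical subword vocabulary. The vocabulary entries are invented for illustration; real BPE vocabularies are learned from data, but the awkward digit grouping is the same phenomenon:

```python
def greedy_tokenize(text, vocab):
    """Greedy longest-match segmentation — a toy stand-in for BPE."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest candidate piece first; fall back to one character.
        for size in range(min(3, len(text) - i), 0, -1):
            piece = text[i:i + size]
            if piece in vocab or size == 1:
                tokens.append(piece)
                i += size
                break
    return tokens

# Hypothetical merged chunks a subword vocabulary might contain:
vocab = {"546", "78", "234", "5=", "+"}
print(greedy_tokenize("54678+2345=", vocab))
# → ['546', '78', '+', '234', '5=']
```

The digits of each number end up split across unrelated chunks, so "carrying the one" has no clean representation at the token level.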

  • @whatsthisidonteven · 9 months ago · +6

    7:14 made me audibly laugh, well done

  • @algorithminc.8850 · 9 months ago · +2

    Very important topic - fundamental to education. Thanks. I've been in machine learning and control systems since the '80s ... and view these machine learning bits and digital computing as tools - to augment what goes on in the mind - to help solve problems and further ideas from the mind. All the "old school" machine learning types I know ... electrical engineers (like myself) or mathematicians - have strong mathematics skills and a true love of mathematics (and simulation). So all that "AI" being created is based on the work of mathematicians who enjoyed using their minds. In any case, great channel with lots of great topics. All the best to you and cheers ...

  • @spacechemsol4288 · 9 months ago · +1

    8:31 here is the misconception (I think AI companies use misleading terms deliberately). Prompts are neither questions nor instructions for the LLM. That's how we might formulate them, but the prompt is simply the starting point for a token-guessing game. The AI extrapolates language patterns (which often results in question/instruction-followed-by-answer style text) but has no actual concept of questions or instructions and is therefore incapable of "following" instructions.

  • @BeardedMan8319 · 7 months ago

    23:10 Wait, did you eat that pastry off the floor?

  • @josephrissler9847 · 9 months ago · +2

    Glad you're feeling better!

  • @louierichardson4750 · 9 months ago · +2

    The answer to the question for me, about computational proofs such as the map/node ones, is that it is no longer a mathematical proof but scientific evidence, since it is falsifiable, and scientific evidence must be falsifiable. But we are so sure of certain scientific evidence that it doesn't falter in our daily lives, so we take it as absolute proof.

    • @landsgevaer · 9 months ago · +3

      Is it falsifiable?
      As in: you might find an error/bug in it if you went through it?
      How is that different from a proof by humans...?

    • @peglor · 9 months ago

      Once the process for generating the graphs and evaluating their colorability are both freely available people can write their own versions and see whether they can reproduce the result. That's how most physical science works anyway, but it was a bit of a departure for mathematical proofs.

  • @hijackjoe · 2 months ago

    It couldn't tell me a correct character count in a paragraph.

  • @DanaTheLateBloomingFruitLoop · 9 months ago · +8

    So what I take away is that just because some computer, whether meat-based or silicon-based, can be right about some things, one shouldn't assume it's always right, no matter how confident it sounds.

  • @УэстернСпай · 4 months ago

    6:10 Terrence Howard 🤣😂

  • @valleferrer8508 · 9 months ago · +3

    i love you man

  • @ennayanne · 9 months ago · +9

    i have no interest in mathematics but i can't stop watching you, such unique energy

    • @ComboClass · 9 months ago · +4

      Don’t worry, there will be plenty of non-math episodes too. Although I love numbers, I also love other things (music, nature, games, philosophy, comedy, etc.) so there will be a variety of topics over time :)

    • @ennayanne · 9 months ago

      @@ComboClass I've noticed! The random squirrel feeding segments are adorable, love seeing them little guys and the relationship you have with them! And as much as maths makes my brain hurt, it is fascinating to hear you talk about. I especially loved the connect 4 video and the one you made about how our clocks don't make any sense 😂 You've got a unique perspective on the world and it's great ❤️

    • @ComboClass · 9 months ago · +2

      Thanks! Glad you appreciate :)

    • @peglor · 9 months ago

      The Ace Ventura of backyard mathematics 😀.

  • @josephrissler9847 · 9 months ago · +3

    Dunning-Kruger effect in chatGPT

    • @emilyrln · 9 months ago · +1

      Dumbing-'Puter effect? 😛

  • @cubicinfinity2 · 9 months ago

    I'm not sure why, but 7:54 is very funny to me.

  • @stickmcskunky4345 · 9 months ago · +5

    Another excellent video Domotro. I really enjoyed the throwing of objects at you.. not just because it's fun to see random everyday objects as projectiles and your physical reaction to them being thrown at you, but because it feels like another layer of symbolic meaning to add to the pile, both in terms of the subject of the video as well as the whole channel, and just existence really.
    Life throws stuff at us every day. Sometimes it feels incessant, other times there are gaps and so you don't see it coming. Sometimes it's only one thing.. easy-peasy. Other times there are simultaneous instances.. harder to identify and track two or more things at once.
    Sometimes it's new things and we don't know how to react.
    The explosion of computer learning models and eventually full AI onto the world scene is bound to create a situation where the rate of novel things being thrown at us simultaneously increases vastly.
    We will look back and miss the days when it was just bagels. ❤

  • @ZephyrysBaum · 9 months ago · +6

    Only 40 views? What is the algorithm doing?

    • @litigioussociety4249 · 9 months ago · +3

      Maybe YouTube's A.I. wasn't happy with the video.

    • @Somebodyherefornow · 9 months ago · +3

      155 now!

    • @omatic_opulis9876 · 9 months ago

      @@Somebodyherefornow 159

    • @ComboClass · 9 months ago · +12

      It will probably grow over time (the algorithm doesn’t promote my videos instantly yet and they take time to spread). If anyone wants to help, remember that extra comments and watch time will help the algorithm like this channel more. In any case, I’m happy that all of you are watching/appreciating :)

    • @owenpawling3956 · 9 months ago · +4

      @@ComboClass Commenting for support! I'd love an episode on fractional calculus!

  • @robertmauck4975 · 2 months ago

    7:58 I'm sorry, both prime and composite?

  • @infectedrainbow · a month ago · +1

    Anyone else unreasonably amused by his desperate attempts to increase the chaotic insanity? Like, "Shit! Shit! I set this on fire and it didn't even fall over!". The madman groans as he gingerly grasps the unburnt side of the conflagration. Flinging it into the air, he refuses to look at it. As if this would somehow betray his meddling to the camera.

  • @scragar · 9 months ago · +1

    7:48
    I think for the statement about 1, it's possible it's mixed up because people need to keep reiterating that 1 isn't prime, while there's a lot less effort put into explaining that 4, 6, 8, and 9 aren't prime.
    It's just taken the logic the wrong way around: 1 not being prime doesn't mean 1 is the only non-prime. That's a pretty major logical error though.

    • @landsgevaer · 9 months ago · +1

      1 is a unit. That is different from 4, 6, ... which are composite (or from 2, 3, ... being prime).
      The fact that 1 doesn't belong to the primes doesn't imply it belongs with the composites.
      1 is an entirely different little animal. So, no surprise it is treated differently.
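That three-way split (unit vs. prime vs. composite) can be written out directly as a small trial-division check:

```python
def classify(n):
    """Classify a positive integer as 'unit', 'prime', or 'composite'."""
    if n == 1:
        return "unit"
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return "composite"
    return "prime"

print([(n, classify(n)) for n in range(1, 10)])
# 1 is a unit; 2, 3, 5, 7 are prime; 4, 6, 8, 9 are composite
```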

    • @scragar · 9 months ago · +1

      @landsgevaer
      I get that, but the AI doesn't; it's trained on large volumes of text (most of which is written by teenagers, given how most internet demographics look), most of which is aimed at non-knowledgeable people.
      So it doesn't see the reasoning very often, and it plugs in some reasonable-sounding nonsense based on what it does see very often (which, in most places explaining primes, is the very explicit assertion that 1 isn't prime, listed alongside the definition).

  • @CaedmonOS · 9 months ago

    Wouldn't be a combo class video if something didn't catch on fire😂

  • @stnhls · 3 months ago

    GPT-3 was pretty bad at maths; GPT-4o can solve all your problems. It can solve the questions from my university textbook straight from a picture, and it also solved the maths needed for my master's thesis.

  • @FinalEyes777 · 9 months ago · +1

    Shift your bright mind towards the metacrisis solutions. You have already become legendary, you should now become immortal. lol And I'm kidding but I am curious of your opinion on things outside the scope of mathematics as well. BUT I understand the reasoning perfectly in remaining within a certain zone of service to knowledge that for the most part isn't like giving away monkeys paw wishes. ie Some knowledge and some tech is best left undiscovered. Mathematics seems to always serve the good in some way.

  • @lizardude9114 · 9 months ago

    Bing's AI copilot seems to be actually really good at math. I haven't tested it extensively, but from what I did, it was able to do basic arithmetic, algebra, and at least up to what I know in pre-calculus 😂

  • @kerrynewman1221 · 9 months ago · +3

    Seems AI gives a word salad response like politicians.

    • @emilyrln · 9 months ago · +1

      At least it uses complete sentences 😂 although speaking live, it's easy to lose track of what you've already said and create run-on sentences or fragments.

  • @afjelidfjssaf · 9 months ago · +1

    very good video!

  • @Ragnarok540 · 3 months ago

    It's scary how confident ChatGPT is at lying and saying general nonsense. And even scarier that some coworkers use it daily and take its output as gospel.

  • @bozhidarmihaylov · 4 months ago

    A Meal + half a meal = a Meal 😊

  • @hoagie911 · 9 months ago

    Me when I'm Domotro from Combo Class

  • @jagsawpawzzle488 · 9 months ago · +1

    I see a cat. :D

  • @gJonii · 9 months ago · +1

    This video makes me suspect you've been talking to the free version of ChatGPT, which is extremely restricted/stupid compared to GPT-4. You didn't seem to address the fact that there are multiple models. It's more or less like discussing Wolfram Alpha but spending the entire time just getting results from a phone keyboard's autocomplete.

    • @ComboClass · 9 months ago · +1

      I did say in the video that I wasn't a paying consumer. And the free version GPT-3.5 (and even older, worse versions) was what the majority of people were using and what the media was mostly discussing, so I don't see what's wrong with me analyzing it.

    • @gJonii · 9 months ago

      @@ComboClass You're gonna be lulled into a massive false sense of security if you think the free version derping around is even close to the actual tool people pay to use. I tried the free one too, deemed it glorified text autocomplete, and had a sigh of relief: humans still weren't obsolete. But just in case, I decided to try the paid version. Surely it wouldn't be that much better, I thought.
      It is vastly better. GPT-3.5 is dumb enough that you can't really compare it to a person. I'm concerned there's a genuine case to be made that GPT-4 is smarter than your average human. Not smarter than a super smart human, but smarter than average. And with GPT models, there's the issue that their AI-assistant mode takes a significant toll on their function; essentially they're a more general intelligence that is kinda just operating based on written instructions about how an AI assistant should behave when receiving your math problems. Using the APIs and a bit of tinkering, even GPT-3.5 turns into a really scary thing if it drops some of its operational handicap. So the actual publicly available state of the art would be these models with fewer rules, or with more math-focused rules for answers.

    • @gJonii · 9 months ago · +1

      @@ComboClass But, for what it's worth, 3.5 imo is pointless. All free-tier models atm are GPT-3.5 level, so they are basically terrible and useless, unless they've been fine-tuned for some particular use case.

  • @JustinStark-p2g · 4 months ago

    Omg fam😂 I had the exact same problem!!!

    • @JustinStark-p2g · 4 months ago

      But low-key, one thing ChatGPT was right about (by accident, and for completely the wrong reasons): 1×1=2 is actually a true statement...🤯

  • @kennyearthling7965 · 7 days ago

    AI are not Robots!
    AI = artificial intelligence = mind
    Robot = machine servant = mind + body

  • @paulkonig9942 · 7 months ago

    -1/2 is negative half

  • @kingofthend · 9 months ago

    I have a certain level of prejudice against using my brain myself (is hard) so I guess I'll have to side with the AI on this one.

  • @Tata-ps4gy · 9 months ago

    GPT is a language-processing model. We humans hold language so dear that we equate it with general intelligence. But this isn't true for artificial intelligence.
    A Casio calculator is 100% accurate at arithmetic yet downright incapable of processing language.
    GPT is able, for example, to give accurate answers when it comes to history, literature, or biology, as it can understand the sources (which are written in natural language).
    In the future, there may be a mathematical-proof AI that thinks of proofs by itself and expresses them in logic notation.

    • @gJonii · 9 months ago

      ChatGPT is already able to produce mathematical proofs. This video for some bonkers reason used the crippled free version, which is significantly less intelligent than GPT-4, so you got significantly crippled results.
      But yeah, the tech you speak of has been widely available for a long time already.

    • @Tata-ps4gy · 9 months ago

      @@gJonii thanks for the info

  • @sleyeborgrobot6843 · 9 months ago

    You can make a free account, feed it the keys to your database, give it the source code to your website, and get sued for using the intellectual property you made for them. Classic.

  • @luckss4659 · 9 months ago · +4

    please do not set your house on fire

  • @tuskiomisham · 9 months ago · +4

    yo! you have 38k subs now?! congrats! your videos are good as always!

  • @DuringDark · 9 months ago · +2

    notice how the squirrel did not attempt to take another half of a walnut

  • @johnowens8992 · 2 months ago · +1

    ChatGPT getting info from Terrence Howard 😂

  • @plentyofpenny · 9 months ago · +5

    first!

    • @microwave856 · 9 months ago · +2

      Congrats 🎉

    • @omatic_opulis9876 · 9 months ago

      your life is meaningless in the grand scheme of things.

  • @matthewfeeg1885 · 9 months ago · +1

    Is this V sauce on a budget? JK this is an excellent analysis of language AIs.

  • @Anonymous-df8it · 9 months ago · +1

    If there were only 'thousands' of graphs to color, why couldn't you assign each graph to a person (as there are billions of people, but only thousands of graphs)? If everyone could come up with a valid solution for their graph, then the problem is solved. If not, it may be possible to prove that a fifth color was required for some of the graphs (notably, the ones people got stuck on), in which case, the problem would also be solved. This would alleviate any controversy in the proof

    • @ComboClass · 9 months ago · +1

      Some graphs are big/complicated but that is theoretically possible. The difficulty would be getting that many humans to collaborate on the same mathematical project but that type of thing is happening more over time

  • @willo7734 · a month ago

    Wow man, the fact that you compose your own music for the videos is super impressive. It's really good and fits these videos perfectly.

  • @jayjay_yah · 26 days ago

    Yes, I got wrong basic math answers from it too, and when I questioned it, it admitted its mistake.

  • @mrblakeboy1420 · 9 months ago · +1

    i have a math video idea: how can complex base systems be used, if it's possible to represent all numbers in the real and complex number fields without multiplying by a system of another base?

  • @Edmonddantes123 · 4 months ago

    Most underrated YouTube channel!!

  • @Ganerrr · 9 months ago · +3

    GPT-3.5 isn't known for math. The highest-order thing consumers can get is one of those programs that rebound GPT-4 to force it to double- and triple-check itself, which can get some pretty good results. Apparently Gemini can beat like 95% of competitive programmers on novel problems when given a massive amount of compute power, which easily requires very strong mathematical abilities. Personally I'm freaked out; it was like 5 years ago that the highest-end AIs had the IQ of a literal rodent, and now there's stuff (albeit lots of it behind closed doors) that's smarter than people at most things. Really makes me think the 2nd Coming is close.

    • @landsgevaer · 9 months ago · +2

      The second coming?
      Bro...

    • @MurderWho · 9 months ago · +2

      The brief rejoinder is that although GPT-4 was able to pass programming-contest questions with something like 95% accuracy, that turned out to be true only for contest questions that came out before the cutoff date of the block of training data that GPT-4 was tested with.
      When people tested it on rewordings, or brand new contest questions that haven't had a chance to be integrated into training data yet, GPT-4's accuracy and ability to solve them dropped to . . . ~0.05%. Similar to previous versions of GPT; this is a recurring test people run and you can find different results, methodologies, and discussions about them if you want to delve into specifics.
      A lot of the "good" results we're seeing in many ways from chatGPT right now are laser-focused data-poisoning, where complex answers have been fed directly into the training data for specific association with manually chosen questions from known question-sets. This can often be examined somewhat directly by examining the "anti" space of some prompts, (looking for the least-matching result for a prompt, rather than the most-matching), where a more "organic" result will have an anti space that is fairly "continuous", and a more poisoned result will have an incredibly sharp change after you wander "far" enough away. There's a lot of air-quotes there, because none of those terms are rigorously defined except "anti", but if you try it out for yourself, you'll realize all those concepts for yourself soon enough.
      You can also often google GPT's code samples and find them verbatim on stackexchange, or as submissions to code contests.
      While I find the level of grammar that enables answer-recognition to work as well as it does to be a little impressive*, it doesn't seem as though GPT is capable of patching together any ideas of its own unless someone already has patched those specific ideas together in its training data somewhere. To the point where rewording a prompt to which it gave a perfectly good code sample for, in such a way that the underlying problem hasn't been changed, can often result in incorrect answers, or no longer returning a code-sample at all, where even a child could identify that it's the same problem. Try using names for variables like "delta" or "epsilon", for example, the mere use of which usually indicate a specific context in mathematics, (cauchy sequences).
      . . . and remember, GPT doesn't do any thinking. If anyone's ever said a wrong answer to something anywhere within its training data, GPT will think that's as correct as any right answer in its training data, up to the number of times each answer is stated in its training set. For common or basic questions, answer popularity is often a good indicator of correctness. But for anything that's rarely enough asked that it's difficult to google, that's a lot more of a problem.
      GPT is strictly Garbage In, Garbage Out, and Unknown Prompt, Random Answer.
      I encourage you to find weak points in whatever implementation of GPT or other LLM's you have access to, and play around with them and understand the weaknesses of the model better. You'll notice the same types of weaknesses peeking through even in spaces where GPT is "stronger" afterwards!
      *and testing GPT as a language-learning machine, I'm not terribly impressed with it over previous versions of itself. While it seems to get natural language right a lot of the time *in its output*, it struggles a lot with interpreting problematic sentences in the input, which indicates that the output results only look as good as they do because of the scale of the training set and the internal sentence size, outputting entire sentences from its training set rather than having learned how to correctly construct sentences. Some well-known examples of problematic sentences give good results, but their anti spaces are incredibly sharp . . .

    • @afjelidfjssaf · 9 months ago · +1

      @@MurderWho you nailed it. GPT is mostly useful as a glorified search engine for previously solved problems

    • @Ganerrr · 9 months ago

      @@MurderWho I'm going off claims about Gemini, not GPT-4; GPT-4 isn't the best at programming, but apparently Google's new AI is. There's more credit due as well: it's not like GPT is the best it can be "raw". There exist tools that basically allow it to talk to itself and plan out higher-order thinking so it gets better results. (This is also on top of the fact that GPT-4 is kinda stupid compared to the behind-closed-doors version of it that isn't bogged down with useless "safety" features, which makes it generally less capable of intelligence.)
      Some of Yannic's videos go into detail about these things.
      [also you mention Cauchy sequences, but I swear I've given some zero-context real-analysis questions to GPT and it knew what I meant lol]

  • @soupisfornoobs4081 · 9 months ago · +1

    Great topic! I love these lectures

  • @Apophlegmatis · 9 months ago

    So chatgpt is the new wikipedia?

    • @ComboClass · 9 months ago · +3

      Nah Wikipedia is way more accurate, it actually cites its sources (plus just has less nonsense gibberish mixed in)

    • @Apophlegmatis · 9 months ago · +1

      @@ComboClass I agree with you for now, but just like in Wiki's early days fighting the misinformation people would post, I have a feeling the AI will grow into a more sensible version of itself, but still (always) needing direct oversight by people to maintain accuracy.
      Actually, this is already a thing, if in its infancy. I have a gig where I'm paid to check prompts/responses to help improve accuracy, safety, and other things. Also, any specific info requires citing.

  • @FirstLast-oe2jm · 9 months ago

    I prefer the newer thumbnail pic for the video, though it's not the reason I clicked (I saw it earlier and waited until later when I wasn't busy). Great video as always!

    • @ComboClass · 9 months ago · +1

      Thanks! And yeah, I made the original thumbnail picture quickly so that I could release the episode, since it had been a while since the last one, but then today I edited it closer to how I wanted it (although thumbnails are still not my expertise haha)