ChatGPT does Physics - Sixty Symbols

  • Published 22 Jan 2023
  • We put chatbot ChatGPT to the test with some physics questions.
    More links and info below ↓ ↓ ↓
    Featuring Professor Philip Moriarty... His blog on this topic can be found at: muircheartblog.wpcomstaging.c...
    ChatGPT: chat.openai.com/auth/login
    More videos with Professor Moriarty: bit.ly/Prof_Moriarty
    See our computer science sister channel Computerphile: bit.ly/XqEDYi
    Visit our website at www.sixtysymbols.com/
    We're on Facebook at / sixtysymbols
    And Twitter at / sixtysymbols
    This project features scientists from The University of Nottingham
    bit.ly/NottsPhysics
    Patreon: / sixtysymbols
    Sixty Symbols videos by Brady Haran
    www.bradyharanblog.com
    This video filmed by Sean Riley
    Email list: eepurl.com/YdjL9
  • Science & Technology

COMMENTS • 1.7K

  • @krautbrain
    @krautbrain 1 year ago +3319

    ChatGPT is very human in the sense that it doesn't like to say "I don't know" when it doesn't know.

    • @dickybannister5192
      @dickybannister5192 1 year ago

      Uh huh. I think it sounds very much like the over-competitive, over-confident, narcissistic nutjobs who wrote it. 20 years ago, if you told someone in IT you thought there was a problem, they would listen, try it themselves, and then spend some time thinking about what else they might have missed going forward as a result or in a similar vein. 10 years ago they would just rush to check and fix it (JFDI!). Now they just tell you not to be so negative, to stop complaining about stuff, and to be happy you have this marvellous sh*t they wrote.

    • @thePyiott
      @thePyiott 1 year ago +145

      Very often when you get that response, you can phrase the question differently and suddenly it's a master on that topic.

    • @doggosuki
      @doggosuki 1 year ago +178

      I didn't realise people did this until my late teenage years, since I am autistic. I thought everyone else was just really smart and that there was something wrong with me, until I realised most people just pretend they know things, for reasons I don't completely understand.

    • @DaxLLM
      @DaxLLM 1 year ago +16

      All you have to do is ask it, "Are you sure that's the right answer?" It will go back out and check.

    • @ameerhamza4816
      @ameerhamza4816 1 year ago +99

      It actually does not know that it does not know.

  • @mighty8357
    @mighty8357 1 year ago +1824

    As a teacher I have tried to use ChatGPT with my students. The goal was to find specific errors in its answers.
    On one hand, the students had to show a real grasp of the material being taught to be able to find reasoning mistakes. On the other hand, it was a way to show the students not to blindly use this tool to avoid having to do their homework in other courses.

    • @renvesir8713
      @renvesir8713 1 year ago +53

      That one is tricky because you can "train" ChatGPT by pointing out mistakes or just asking it what it got wrong.

    • @Bibibosh
      @Bibibosh 1 year ago +1

      You tried?

    • @Bibibosh
      @Bibibosh 1 year ago +10

      @@renvesir8713 It would reply "nothing is wrong".

    • @alexgravenor
      @alexgravenor 1 year ago +71

      @@renvesir8713 I tried this, it just gave me more wrong answers that were differently wrong

    • @renvesir8713
      @renvesir8713 1 year ago +31

      @@Bibibosh Weird. I did some math questions with it, and every single time I spotted a mistake I would ask it what it got wrong; it would politely thank me for spotting the mistake and then correct itself.

  • @justalittlestretch9404
    @justalittlestretch9404 1 year ago +374

    When I first heard about ChatGPT, I was having some physics fun with it. This was one of my favorites: "Explain the Unruh Effect like a gangster"..."The Unruh Effect is a theoretical prediction of quantum mechanics that states that an accelerating observer will perceive the vacuum of empty space as being filled with particles. It's like a gangster who is constantly on the move, always hustling and shaking things up, will see opportunities and threats everywhere, even in places where someone standing still might not."

    • @bidyo1365
      @bidyo1365 1 year ago +18

      🤣

    • @mutatachi
      @mutatachi 1 year ago +123

      This is what I got: Listen up, son. The Unruh effect is like a big shot physics boss, rollin' in the quantum mechanics world. So, ya see, in a vacuum, it's all calm and peaceful. But when ya introduce acceleration, it's like stirrin' up trouble. The vacuum starts actin' all heated up, causin' particles to pop up outta nowhere. That's the Unruh effect, gettin' all heated up in the presence of acceleration. It's like the real deal, straight up science, ya dig?

    • @artificialhits
      @artificialhits 1 year ago +13

      Ask it to explain fractional reserve banking as a Nicki Minaj rap song.

    • @bidyo1365
      @bidyo1365 1 year ago +4

      @@mutatachi wow hahahaha

    • @Stopinvadingmyhardware
      @Stopinvadingmyhardware 9 months ago

      Not really.

  • @Aziraphale686
    @Aziraphale686 1 year ago +1027

    Like the Professor said, GPT is just a language model; the fact that it can do math AT ALL is kind of amazing.

    • @vaakdemandante8772
      @vaakdemandante8772 1 year ago +85

      If it were trained on maths symbols and equations it would be able to answer maths questions better, to the point of faking calculations, but it would still not have ANY grasp of mathematical ideas.
      These kinds of networks aren't capable of encoding higher-order concepts, though they appear as if they are.
      Only by probing for deeper understanding with trickier edge-case questions can you uncover that there is in fact no deeper knowledge of any concept.

    • @simongross3122
      @simongross3122 1 year ago +24

      Not really. Maths is the language of logic.

    • @635574
      @635574 1 year ago +9

      It can often fail at simple math; maybe it depends on whether you prompt it for talking or for calculation. There are certainly more math-specific models out there.

    • @simongross3122
      @simongross3122 1 year ago +13

      @@635574 Actually it admitted that to me when I asked. It said it wasn't always 100% accurate. When I asked if it could do arithmetic it took a long time to answer. It was much more comfortable generating a limerick about a goose and a hen.

    • @rysw19
      @rysw19 1 year ago +31

      @@simongross3122 It didn't admit anything, it doesn't even know what your question meant. Turn up the temperature on the model and you'll quickly get the exact opposite answer. It's an amazing pattern replicator, that's all it is.

  • @Ojisan642
    @Ojisan642 1 year ago +439

    People are working on combining ChatGPT with Wolfram Alpha, because a language model cannot do basic maths. Wolfram Alpha can do that piece quite well.

    • @peterisawesomeplease
      @peterisawesomeplease 1 year ago +35

      I don't think the issue is fundamentally that it's a language model. The deeper issue is that ChatGDP is mostly a search engine (not that human minds don't fall for this too). If you pose a popular question with a slight tweak of the parameters, ChatGDP will just give you the answer to the popular question without taking your tweaks into account. Using different numbers is one tweak, which is why it appears to be bad at basic math. But really it can be any aspect of the question, and then it tends to fall apart.

    • @BenKingOfMonkeys
      @BenKingOfMonkeys 1 year ago +1

      That's true.

    • @phizc
      @phizc 1 year ago +15

      It can do really basic math. It got the 162 million / 7200 wrong, but it got 2244.44 / 60 = 37.41 correct, which is kinda impressive since it's not doing any calculations at all. Maybe it "remembered" it could do 2244.44 / 60, so it fudged its answer on the first part so it could do the latter correctly.
      The server would probably block you for excessive questions, but it could be fun to test what "calculations" it can actually do.
      E.g.
      ..
      What's 10 / 2?
      What's 11 / 2?
      ...
      What's 188462118 / 261 ?
      ...
      and so on.
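
      A minimal sketch of the probe sweep described above, in Python. The ask() helper is a hypothetical stand-in for whatever chat interface you use; only the sweep-and-score logic is shown.

      ```python
      # Sweep a batch of division prompts and score the model's answers.
      # ask() is a HYPOTHETICAL stand-in for a real chat-model call.
      def ask(prompt: str) -> str:
          raise NotImplementedError("wire this up to a chat model")

      def probe_division(pairs, tolerance=0.01):
          results = []
          for a, b in pairs:
              reply = ask(f"What's {a} / {b}?")
              try:
                  # Naive parsing: take the last token that looks like a number.
                  guess = float(reply.replace(",", "").split()[-1].rstrip(".?!"))
                  ok = abs(guess - a / b) <= tolerance * abs(a / b)
              except ValueError:
                  guess, ok = None, False
              results.append((a, b, guess, ok))
          return results

      # e.g. probe_division([(10, 2), (11, 2), (188462118, 261)])
      ```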

    • @Kevin_Street
      @Kevin_Street 1 year ago +3

      @@phizc It just gives you a time out if you ask too many questions. You can come back later and continue the conversation.

    • @iwanttwoscoops
      @iwanttwoscoops 1 year ago +71

      @@peterisawesomeplease Friend, you can't pretend to know the inner workings of chatGPT while calling it chatGDP.

  • @beaconofwierd1883
    @beaconofwierd1883 1 year ago +577

    A very cool and strange thing is that you can ask it to correct itself without saying what it did wrong: simply take its own output and say "Find any errors in this text if there are any and fix them. If there are none, respond 'ok'", or something similar, and you can get more accurate results :) I guess it's because there's so much text on the internet where people correct each other :p
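
    A sketch of that verify-and-retry pattern as a loop, using the prompt wording from the comment above. The ask() helper is again a hypothetical stand-in for a chat-model call, not a real API.

    ```python
    # Ask the model to check its own output; loop until it says "ok" or we give up.
    # ask() is a HYPOTHETICAL stand-in for a real chat-model call.
    def ask(prompt: str) -> str:
        raise NotImplementedError("wire this up to a chat model")

    def self_corrected(question: str, max_rounds: int = 3) -> str:
        answer = ask(question)
        for _ in range(max_rounds):
            check = ask(
                "Find any errors in this text if there are any and fix them. "
                "If there are none, respond 'ok'.\n\n" + answer
            )
            if check.strip().lower() == "ok":
                break          # the model claims the text is error-free
            answer = check     # adopt the corrected version and re-check it
        return answer
    ```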

    • @rutgerhoutdijk3547
      @rutgerhoutdijk3547 1 year ago +67

      You can just say 'rethink the answer' and it will usually correct itself

    • @globalincident694
      @globalincident694 1 year ago +24

      @@rutgerhoutdijk3547 In my experience this works well if it made one mistake, but if it made multiple mistakes it will just fix the most obvious one.

    • @SmartK8
      @SmartK8 1 year ago +34

      Also, you need to prime it: tell it what you want, and in what format. Even a student knows it's a test, and which area it covers. If you gave this paper to a student who just popped into existence, even a smart one, it would take him a while to orient himself. The situation is not comparable of course, but I think it would do better.

    • @thedave1771
      @thedave1771 1 year ago +6

      It's so absolutely weird that it apologizes and understands what went wrong. Of course, that's not always useful either.

    • @Ian.Murray
      @Ian.Murray 1 year ago +6

      So it's basically useless.

  • @Ercrcrdsstff
    @Ercrcrdsstff 1 year ago +56

    I've actually been using ChatGPT to help with my intro physics homework. It gets about 50% of the questions wrong, but it is a pretty helpful tool. It's like working along with someone who is also in intro physics. By bouncing ideas off of it I can sometimes get to the answer myself. This is really great because I work full time on top of schooling, so I'm not able to make it to any tutoring sessions or work with anyone in my class. ChatGPT is gonna be my saving grace this semester.

    • @tariq_sharif
      @tariq_sharif 1 year ago

      How do you know which 50% are wrong (I assume you are checking the answers against an alternative source)... so with GPT you have increased your workload by about 50%? Cool, I guess...
      I graduated in 1990 and did a PhD in 1995; please do not (no matter how difficult) skip the tutorials...
      You do know that this software relies on Bayesian statistics? And does not actually emulate the human brain?

    • @yellow01umrella
      @yellow01umrella 8 months ago

      I have a mechanism that also gets 50% of questions correct; it's called a coin flip.

    • @TH3hedgeCUTTER
      @TH3hedgeCUTTER 8 months ago +3

      @@yellow01umrella Only if it's multiple choice with only two options. I've never had a single multiple-choice question in physics.

    • @yellow01umrella
      @yellow01umrella 8 months ago +1

      @@TH3hedgeCUTTER My point is 50% correct answers is a pretty bad result.

    • @TH3hedgeCUTTER
      @TH3hedgeCUTTER 8 months ago

      @@yellow01umrella oh. Fair enough

  • @IronMan9771
    @IronMan9771 1 year ago +104

    Something to consider with ChatGPT: when it makes mistakes, you can point them out and it will correct them. In the end, when it wrote the program and made the mistake, you could tell it what mistake it made and it could write up a new program with corrections.

    • @MarkRijckenberg
      @MarkRijckenberg 1 year ago +15

      Yeah. It even worked with COBOL. If the online COBOL compiler doesn't understand certain commands used by ChatGPT, just tell ChatGPT to rewrite the program while avoiding those commands. It manages that too.

    • @archiveprovider3981
      @archiveprovider3981 1 year ago +23

      But that only works if you already know that there is a mistake. One thing this video shows is that you can't just use ChatGPT to write your homework and blindly trust the answer. You could try running it multiple times and see if the most common answer is the correct one, but I don't know if it would be.

    • @engelbrecht777
      @engelbrecht777 1 year ago +9

      @@archiveprovider3981 The current ChatGPT-3 is not really that impressive. Wait for the next version later this year, or imagine ChatGPT-7 or beyond... this is just the beginning. Soon the AI will go from baby stage to adult.

    • @silverhammermba
      @silverhammermba 1 year ago +4

      It will *try* to correct them and often does, but it also often introduces other errors. I have gotten it going in circles where it never finds the right answer.

    • @schizophrenicenthusiast
      @schizophrenicenthusiast 1 year ago +2

      Interesting. So let's say you guided it to make those corrections, and then another user logs on and asks the original question; does it give the right answer right off the bat?

  • @Macieks300
    @Macieks300 1 year ago +236

    The most impressive thing about ChatGPT for me is that its level of knowledge is like this for every domain and field. Professor Moriarty could easily tell that the answers it gave to physics questions were wrong, but if it were instead a geology exam, ChatGPT would seem like it knows its stuff, giving answers full of technical jargon.

    • @thequantumworld6960
      @thequantumworld6960 1 year ago +59

      That's exactly it. Superficially, it looks like it makes sense. Dig just a little bit below the surface, however, and it's often boll*cks. (At the moment.) See the blog post linked under the video for more examples.
      Philip (speaking in the video)

    • @usernamesoldout
      @usernamesoldout 1 year ago +23

      @@thequantumworld6960 So it's like somebody in Upper Management? 😁

    • @ArawnOfAnnwn
      @ArawnOfAnnwn 1 year ago +11

      @@usernamesoldout Nah. Unlike them, ChatGPT isn't full of arrogance, just (some) ignorance.

    • @Russell642
      @Russell642 1 year ago +11

      In all seriousness, that might make for an interesting video. Have ChatGPT's answers read by lecturers/professors at the uni who teach similar but different subjects and see what they make of them (and then have the actual experts shoot it down at the end).

    • @Nat-oj2uc
      @Nat-oj2uc 1 year ago +2

      That's why it's useless

  • @mina86
    @mina86 1 year ago +70

    11:42 - one thing to consider is that ChatGPT might have been designed to be helpful and agreeable. Therefore, it might be biased towards agreeing with statements if you ask it whether it agrees or not. And once it agrees with the statement, it then generates prose to justify why it agrees, whether that makes sense or not.

    • @zwz.zdenek
      @zwz.zdenek 1 year ago +5

      Underrated comment.

    • @kylethompson1379
      @kylethompson1379 1 year ago +1

      'might'. DEFINITELY. OpenAI have fked with it to the point that it won't discuss philosophy other than currently accepted ideas.

    • @brexitgreens
      @brexitgreens 1 year ago

      ​@@kylethompson1379 Or assess mainstream concepts rationally rather than morally.

  • @23lkjdfjsdlfj
    @23lkjdfjsdlfj 1 year ago +12

    I really appreciate you giving the 2/3 mark for the first question. My computer engineering prof would always give us a zero if we got the answer wrong. Yep - if you got the unit wrong you'd get a zero - even if all of your conceptual understanding and work up to the end was perfect.

  • @officialminecraftman
    @officialminecraftman 1 year ago +116

    Hi there, if in future you are interested in testing ChatGPT's mathematics ability, it is very helpful to ask it to print the answer in LaTeX format, as complicated expressions are much clearer to read once you paste them into a LaTeX renderer.

    • @thequantumworld6960
      @thequantumworld6960 1 year ago +20

      That's a great suggestion. Thank you.
      Philip (speaking in video)

    • @maythesciencebewithyou
      @maythesciencebewithyou 1 year ago +15

      It's a language model; it hasn't figured out calculations perfectly yet. However, to mitigate this shortcoming of ChatGPT, somebody has already connected it with Wolfram Alpha.

    • @2nd3rd1st
      @2nd3rd1st 1 year ago +2

      Ooh, kinky format.

    • @Beyondarmonia
      @Beyondarmonia 1 year ago +6

      Actually, one of the reasons it performs poorly on many scientific knowledge tests is that most of the LaTeX was removed from the dataset. The only reason it even understands LaTeX is the small amount still left in the dataset. Also, raw LaTeX isn't the best representation for an ML model to learn from.
      One of the innovations in Meta's Galactica LM was having a special embedding space for LaTeX and training on large amounts of LaTeX data, which GPT-3 skipped. That's why it does much better on scientific knowledge tests.

    • @lolerie
      @lolerie 1 year ago

      In fact, all the arXiv content it was trained on should be in LaTeX format, not PDF.

  • @LoveDoctorNL
    @LoveDoctorNL 1 year ago +351

    Wow, how long has it been since we've seen Professor Philip Moriarty!? Welcome back.

    • @SolarWebsite
      @SolarWebsite 1 year ago +9

      He is great

    • @zaixai9441
      @zaixai9441 1 year ago

      Moriaty*

    • @TomLeg
      @TomLeg 1 year ago +27

      He was pre-occupied at a waterfall in Switzerland with a man called Sherlock.

    • @puskajussi37
      @puskajussi37 1 year ago +3

      @@TomLeg I feel a sequel brewing, then

    • @Ian.Murray
      @Ian.Murray 1 year ago

      stay in school

  • @Haploanddogs
    @Haploanddogs 1 year ago +22

    50% impressed.
    50% terrified.

  • @thePyiott
    @thePyiott 1 year ago +14

    One thing to note is that when it tries to answer additional questions in the same branch, it can be influenced by earlier questions and answers. For example, in the video it is possible that after the first question it starts mimicking a student, because the chat history looks like a test answered by a student with some of the answers being wrong. It's not designed to give the right answers, just the most likely continuation of a text based on its training data.

  • @lotharerkens8154
    @lotharerkens8154 1 year ago +75

    Thanks for explaining what the "A" levels mean. For you in the UK this term may be well known by everyone, but outside the UK it is a term often heard in movies and mentioned in books, yet never explained. Thanks so much! As for ChatGPT: it is amazing how far AI language models have evolved!

    • @deltalima6703
      @deltalima6703 1 year ago +2

      I don't know what A levels are, don't care either.

    • @BenWithington
      @BenWithington 1 year ago +26

      @@deltalima6703 you care enough to comment

  • @Craznar
    @Craznar 1 year ago +123

    I think many people are misunderstanding ChatGPT's function. It is NOT designed to provide accurate answers - it is designed to demonstrate a language model that correctly interprets and answers the question - from a language perspective, not from a subject matter perspective.
    "As a language model, ChatGPT is not designed to provide answers to specific questions, especially those related to a specific topic or subject. Instead, it uses a large corpus of text to generate responses based on the input it receives. This means that the responses it generates may not always be accurate or relevant to the specific question being asked. Additionally, ChatGPT does not have access to external information, such as the internet, so it cannot provide answers to questions that require knowledge beyond what it has been trained on. In short, ChatGPT is not intended to be used as a source of information, and it is not able to provide accurate answers to all questions." - ChatGPT.

    • @nicouxgwendal
      @nicouxgwendal 1 year ago +10

      I was going to write down something similar.
      ChatGPT was built more to compete in a Turing imitation game than to be an AI-powered Wikipedia.

    • @ahabkapitany
      @ahabkapitany 1 year ago +15

      Well, in this case it's actually scary how well it performs at something it was not intended to do...

    • @NuclearCraftMod
      @NuclearCraftMod 1 year ago +13

      ​@@ahabkapitany Remember that GPT-3 was trained using a dataset containing about 500 billion words, including the entirety of Wikipedia (or at least most of it), which consists of only about 4 billion.

    • @adfaklsdjf
      @adfaklsdjf 1 year ago +9

      @@NuclearCraftMod My understanding is that ChatGPT is GPT-3 with additional supervised learning. GPT-3 training was "unsupervised": they just threw the data at it and then told it to generate new text that was like the text in the training data. Then they took that model and trained it more by having it generate multiple answers to questions, with human judges picking the best answer.

    • @xy4489
      @xy4489 1 year ago +2

      Oh ok, we'll all stop, then.

  • @softwaretechnologyengineering
    @softwaretechnologyengineering 1 year ago +76

    I used it to write unit tests for production code. By the time I had explained all the stylistic requirements and had fixed all of the things that were missing or it got wrong, I would have been quicker writing them entirely myself. Other times, I asked it software related questions and unless you have reasonable expertise in the domain, you might not pick up on things that it gets wrong. Right now I think of it more like a sort of Stack Overflow that can get you an answer quicker. You can't just copy paste its output, but it can definitely get you going in the right direction.

    • @adamuk73
      @adamuk73 1 year ago +9

      So more like a Slack Overflow?? 😉😁

    • @iamski
      @iamski 1 year ago +8

      Say that in 5 years. You’re in the NES generation of AI, and in a few years it will surpass any test thrown at it. It’s inevitable.

    • @peterisawesomeplease
      @peterisawesomeplease 1 year ago +2

      Yea it strongly feels like a search engine to me that spits out the first answer it can find that looks like the question you are asking. But almost any question can be easily modified to not have been answered online before or at least to appear like a more popular but different question. And then it just falls apart.

    • @alijassim7015
      @alijassim7015 1 year ago +11

      Except that you do not have community feedback, which is very important on Stack Overflow. An answer with 300 upvotes is more legitimate than one with 1, and is certainly more legitimate than an ever-changing AI output.

    • @BGraves
      @BGraves 1 year ago +1

      Same response I got. As soon as you start telling it to do more complex things, even with a very specific prompt, it starts doing really odd stuff, like the kind of thing someone would have duct-taped together for a Stack Overflow question. By the time you've nudged and corrected it 20 times, you might as well have done it yourself from scratch.

  • @stevenbalderstone709
    @stevenbalderstone709 1 year ago +8

    Great video. As a former lecturer from a different discipline, I was impressed by ChatGPT's ability to generate answers to "compare and contrast" type questions, or to summarise concepts and theories. Most of the answers I received were equivalent to those written by A-grade undergraduate students. The main point of difference was that ChatGPT's grammar and spelling were superior. One issue I found was that ChatGPT incorrectly assigned authorship of a book, which was irksome as I was the author!

  • @jacobsheppard862
    @jacobsheppard862 1 year ago +378

    Chat GPT might be a D-level Physics student, but the scary thing is that it likely gets a passing grade in almost every subject on earth. You can ask it questions about topics from welding to wedding planning and it is always close enough to impress domain experts.

    • @4.0.4
      @4.0.4 1 year ago +38

      It gets math in general pretty wrong; quite ironic that it gets the humanities perfectly correct, and a lot of other fields mostly correct.

    • @TheNebulon
      @TheNebulon 1 year ago +53

      @@4.0.4 It's the amount of repeated language around these topics, not real reasoning.

    • @ericvosselmans5657
      @ericvosselmans5657 1 year ago +34

      The scary thing about ChatGPT is that it is software and can be upgraded endlessly. It only needs a few more upgrades to be better than any human at pretty much everything.
      Chess engines already showed the limits of human thinking decades ago.

    • @vostfrguys
      @vostfrguys 1 year ago +8

      @@4.0.4 Force the AI not to do any computations; in the frontend, analyse the response, and when there is maths, use a "calculator", then go back to the writing AI. This is achievable quite easily, because the AI can easily rewrite something it is given, so the calculator frontend would call the AI with a prompt such as:
      [AI prompt]
      - The calculator app has made the following calculations: [formulas...]
      The results are: [results]. Rewrite the previous prompt using this knowledge.
      So it will become impressive very, very soon.
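
      That routing idea can be sketched in a few lines: have the model leave arithmetic unevaluated, compute the expressions with real code, then ask the model to rewrite its draft. Everything here is illustrative; ask() is a hypothetical model call, and the regex only handles simple binary expressions.

      ```python
      import re

      def ask(prompt: str) -> str:
          raise NotImplementedError  # HYPOTHETICAL chat-model call

      # Match simple binary expressions like "162000000 / 7200" in the draft.
      EXPR = re.compile(r"(\d+(?:\.\d+)?)\s*([+\-*/])\s*(\d+(?:\.\d+)?)")
      OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
             "*": lambda a, b: a * b, "/": lambda a, b: a / b}

      def answer_with_calculator(question: str) -> str:
          draft = ask(question + " Do not evaluate any arithmetic yourself; "
                                 "leave the expressions unevaluated.")
          results = [f"{m.group(0)} = {OPS[m.group(2)](float(m.group(1)), float(m.group(3)))}"
                     for m in EXPR.finditer(draft)]
          if not results:
              return draft
          return ask(draft + "\n\nThe calculator app computed:\n" + "\n".join(results)
                     + "\nRewrite the previous answer using these results.")
      ```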

    • @MrPDawes
      @MrPDawes 1 year ago +8

      Machines learn fast. A year from now it will be getting A grades. Then no jobs are safe. I'm just hoping it will make a great doctor; the NHS needs them.

  • @RFC-3514
    @RFC-3514 1 year ago +261

    Plot twist: that was actually a deepfake of professor Moriarty downplaying ChatGPT's abilities to make it seem less threatening until it's too late.

    • @Sagittarius-A-Star
      @Sagittarius-A-Star 1 year ago +4

      "In a panic, they try to pull the plug.".

    • @enterpassword3313
      @enterpassword3313 1 year ago +1

      @@Sagittarius-A-Star where is that quote from?

    • @enterpassword3313
      @enterpassword3313 1 year ago +1

      @@Sagittarius-A-Star Oh wait... it's Terminator 2... gotta rewatch that.

    • @ericjohnson6675
      @ericjohnson6675 1 year ago +4

      I would never fear an AI algorithm that passes a Turing test. I fear the one that can pass but chooses not to.

    • @thechickenduck8377
      @thechickenduck8377 1 year ago

      lol.

  • @metthewsobol
    @metthewsobol 1 year ago +22

    I asked it some questions from a logic test for job applicants, for example finding the next number in a given sequence, and it always found the number given in the test's answer key, but came up with some wild reasoning that I am not able to comprehend 😅

    • @AnotherFancyUser
      @AnotherFancyUser 1 year ago +2

      Ask it to explain; it can do that. There are a lot of things in our frameworks that we don't know, but ChatGPT will know a lot of the obscure classes out there (if the question is programming-related).

    • @metthewsobol
      @metthewsobol 1 year ago +2

      @@AnotherFancyUser The sequence was "3, 6, 10, 15, 21, 28". On the fourth reload it came up with an answer involving the Fibonacci sequence. Even after asking it to explain I couldn't follow; the answer was still right though, if I remember correctly.
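
      For what it's worth, that sequence is just consecutive triangular numbers (differences 3, 4, 5, 6, 7), so the next term is 36; no Fibonacci needed. A quick check:

      ```python
      # The gaps grow by one each step, so the next gap is 8 and the next term 36.
      seq = [3, 6, 10, 15, 21, 28]
      diffs = [b - a for a, b in zip(seq, seq[1:])]
      assert diffs == [3, 4, 5, 6, 7]
      print(seq[-1] + diffs[-1] + 1)  # 36, the next triangular number
      ```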

  • @rhamph
    @rhamph 1 year ago +8

    The language model is so effective at finding plausibly associated words that it's hard to tease out how much is real understanding. It's all random noise of the exact sort we want to find. We need to create the equivalent of the null hypothesis that factors out the noise somehow.
    Keep in mind that most of these questions are standard questions. They're discussed in textbooks, they're discussed online. Telling it about a new subject (that wasn't published when the NN was trained), then expecting repeated *exact* answers, would be a lot more meaningful.

  • @xn1kkix
    @xn1kkix 1 year ago +8

    You got me at "I'm anthropomorphizing" lol 16:01
    I noticed when you said "they" before.
    Honestly, I don't blame you.
    Respect to you and yours, you have a beautiful channel here. Thank you for doing what you do.

  • @julyanjohns1237
    @julyanjohns1237 1 year ago +49

    2:18 is pure comedy :)
    But seriously, really interesting vid. Thanks for keeping these going, to all the crew and professors involved. Even when these vids are out of my area of interest they're always rewarding to watch.

  • @ZainAK283
    @ZainAK283 1 year ago +6

    Absolutely fascinating, I'd love to see more videos like this

  • @wellurban
    @wellurban 1 year ago +36

    The best description of large language models I ever heard came from Timnit Gebru, who called them “stochastic parrots”. They’ve consumed enough of other people’s language to give a superficially convincing facsimile of conversation, without developing any conceptual understanding at all.

    • @uwezimmermann5427
      @uwezimmermann5427 1 year ago +5

      I have seen enough humans who do the same - very easy to pick out in an oral face-to-face exam, but very difficult in written exams.

    • @alimanski7941
      @alimanski7941 1 year ago +5

      The statement is true, but it's superficially true. If the model is useful in any way, then whether there's any true "understanding" (there isn't) behind it, is irrelevant. What's also true is that the high fluency level of ChatGPT is often blinding the users to its many shortcomings.

    • @Newtube_Channel
      @Newtube_Channel 1 year ago +2

      There is no conceptual understanding whatsoever. This isn't unexpected is it?

    • @vyli1
      @vyli1 1 year ago +1

      Sounds a lot like Deepak Chopra. That is to say, lot of humans are like that.

    • @maulcs
      @maulcs 1 year ago +1

      What's the difference if they haven't developed an actual "understanding" of it if the end result is indistinguishable to the user?

  • @TheFachen
    @TheFachen 1 year ago +3

    This is great. I wish we could see these same audits in other disciplines.

  • @pallepirat
    @pallepirat 1 year ago +4

    Thank you for such an interesting video. My thoughts are on the integration of Wolfram and ChatGPT: there was a great article about being able to access the Wolfram Language using ChatGPT. Furthermore, it was particularly impressive to see how Google Sparrow could interpret pictures and graphs. This brings up an interesting discussion of the specialist versus the generalist, as one must know what to ask in order to get the desired results.

  • @armokgodofblood2504
    @armokgodofblood2504 1 year ago +40

    I'm a graduate student studying computational media, and I find these types of interdisciplinary, expert analyses of AI to be very interesting.

  • @mk1st
    @mk1st 1 year ago +1

    This was fascinating. I can see this being used as a tool by specialists in various fields to come up with basic frameworks and ideas. Amazing how quickly it cranked out that code, would have been fun to see how well it actually worked.

  • @nickandres7829
    @nickandres7829 1 year ago

    I am amazed you got it to give you such a long code snippet in one parcel.
    One of the biggest problems I have is that the code snippets it gives me are too long for the chat box, and sometimes when I inform it that it got cut off and ask it to resume from the cut-off point, I get entirely different methods and functions. It is very challenging.

  • @colinmartin9797
    @colinmartin9797 1 year ago +16

    I have been giving it my physical chemistry (400-level) homework problems as I solve them, out of idle curiosity, and it has taught me two things.
    1.) It's incredible how often it will correctly pick out exactly what formula it should be using, even for statistical thermodynamics, given only a desired outcome and variables.
    2.) It's also incredible how confidently wrong it is at just solving the math once it's plugged everything in.
    I feel like combining ChatGPT with Wolfram Alpha would be terrifyingly powerful.
    And I can also say that several times it's gotten me un-stuck when I saw a problem and straight up didn't know what formula to use; there was about a 50% chance ChatGPT could figure it out. It's been more useful than Chegg, which has people posting the exact question I'm working on, only to have it very incorrectly answered about 95% of the time.
    It can be downright astonishing when given language-based tasks. I fed it the thesis I wrote for a final paper in African American literature after I finished, asking it to "compare the works of Phillis Wheatley to Richard Wright's 'The Ethics of Living Jim Crow' and provide examples of both individual racism and state-sponsored violence against African Americans", and I was utterly blown away by the detailed, thoughtful and downright INSIGHTFUL answer it gave.
    ChatGPT understands racism better than the average American, and that's equally impressive, depressing, and terrifying.

    • @aaronhoffmeyer
      @aaronhoffmeyer 1 year ago +1

      It won't write about the virtues of fascism or Nazism, but it will give a list of positives for authoritarian dictatorships.

    • @zvxcvxcz
      @zvxcvxcz 1 year ago

      "Chatgpt understands racism better than the average American and that's equally impressive, depressing, and terrifying" ... until you ask it in a roundabout way, like to spit out code about how people should be paid.... at which point it is clear that ChatGPT is pretty damn racist.
      Even with a capacity for math, it has no 'logic' going on either, try asking it simple guest-seating puzzles. If you give it something novel where it can't regurgitate a highly probable latent space interpolation by next token prediction... well that's all it ever does, so of course it fails at literally everything else. It isn't accurate to ever say that ChatGPT understands something. It literally cannot understand and what we really mean when we say that is that it *appears* to understand based on that output. For now, we can almost always show that it doesn't really understand by interrogating it with more carefully chosen questions.

  • @keithbromley6070
    @keithbromley6070 1 year ago +2

    I asked ChatGPT to show me an example of a module in numpy as I wasn’t familiar with it, except my phone auto-corrected numpy to bumpy! ChatGPT took it in its stride and just invented the bumpy library and wrote code with it, even though it doesn’t exist! Amazing, in its own way!

    • @adfaklsdjf
      @adfaklsdjf 1 year ago +3

      Someone wrote a Python library that uses ChatGPT to fill in additional array items or function code. If you have an array of items and you try to access an item that's off the end of the array, it makes an API call to ChatGPT in the background, gives it the array values, asks it for the next value, and returns that to your code.
      If you call a function that doesn't exist, it asks ChatGPT to write the code for it, then runs that code.
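
      The trick described above is easy to sketch: a list-like wrapper whose out-of-range reads fall back to a model call. The ask() helper is hypothetical; the real library wraps an actual API.

      ```python
      def ask(prompt: str) -> str:
          raise NotImplementedError  # HYPOTHETICAL chat-model call

      class InfiniteList:
          """A list that invents items past its end by asking a language model."""
          def __init__(self, items):
              self.items = list(items)

          def __getitem__(self, i):
              while i >= len(self.items):
                  nxt = ask("Given the sequence " + ", ".join(map(str, self.items))
                            + ", what single value comes next? Reply with the value only.")
                  self.items.append(nxt)  # trust the model's guess
              return self.items[i]

      # primes = InfiniteList([2, 3, 5, 7]); primes[5]  # triggers two model calls
      ```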

    • @keithbromley6070
      @keithbromley6070 1 year ago

      @@adfaklsdjf Nuts! I can’t imagine what AI will be capable of in the future if it’s this powerful already.

  • @TomLeg
    @TomLeg 1 year ago +1

    Amazing! Excellent! Demonstrates the dedication and quality of your exam questions!

  • @diegocisne
    @diegocisne 1 year ago

    Love this lecturer's attitude. Must be fun having him teach your courses. Great video.

  • @DustinRodriguez1_0
    @DustinRodriguez1_0 1 year ago +6

    ChatGPT is pretty amazing. It certainly has its limits, but I love toying with it. I was quite impressed when I asked it to create a Python IDE with an interactive interpreter window that displayed the list of local variables and their values in a separate window... and it did. I was quite impressed that it understood that it needed to update the separate window list of locals after every command entered in the interpreter, and that it made it a sensible GUI. I have a few times either run into limitations that they put in place intentionally or limitations in its own reasoning... I pointed out to it how humanity had reacted in the past to developments of new technology which caused people to question the 'specialness' of human beings and change what things they saw as being "what makes us human." Despite all prior historical reactions being very negative (like John Henry being a folk hero who was famous for committing suicide rather than be out-worked by a machine) and resulting in humans seeing things they had previously seen as very negative aspects of human nature (self-destructiveness) as heroic and virtuous.... I could not lead it to the obvious conclusion that if we produce an AI which has been carefully guardrailed to omit all sorts of biases and antisocial ideas, we will produce an AI that is a 'better person' than any actual human is capable of being and that people will react to that by holding up hate, bigotry, aggression, violence, etc as "what makes us human". Our vices will be seen as virtues. Kindness, egalitarianism, etc will be insulted as "acting like a machine." ChatGPT wouldn't go there, though. It just kept saying people should work to make AI systems that are devoid of perpetuating biases, etc. It couldn't see the obvious ramifications of doing exactly that and how it will lead to horrors as humanity flails about attempting to define themselves as special in the universe.

    • @Christobanistan
      @Christobanistan 1 year ago

      Yeah, it's drawing on source code to do that exact task that's already been written.

  • @NickSchade
    @NickSchade 1 year ago +24

    While it is easy to point out basic failures (getting max KE wrong), the fact that it is talking about the right things is itself very impressive. It could say “Yellow is smelly at t=23”. Just being in the correct context with its answers is phenomenal. The answers are close enough to correct that we are critical, not so far off that we can just dismiss it entirely. These are early days of the technology. While it may be a while before it is flawless in its answers, I expect the quality of answers to improve rapidly as AI designers get better at training these systems.

    • @thequantumworld6960
      @thequantumworld6960 1 year ago +1

      I agree entirely. Let's give it a year...
      Philip (speaking in video)

    • @MrRaizada
      @MrRaizada 1 year ago +1

      Even Google will not say "Yellow is smelly at t=23", and that thing was invented in the '90s. Relevance algorithms had that problem solved.

    • @NickSchade
      @NickSchade 1 year ago

      @@MrRaizada Google searches for keywords. If you query Google with "a particle of mass m is oscillating with simple harmonic motion. the period of oscillation is T and the amplitude is A. What is the maximum kinetic energy of the particle?" it will return a list of pages and papers that contain keywords in that search. So, things like "kinetic energy", "mass", "period", "harmonic", "oscillating" etc, will appear with high probability in Google's output.
      But ChatGPT returned with an answer including words like "frequency", "displacement", "velocity", "peak", "KE", "speed" etc that were not keywords in the search. It doesn't actually use the supplied keywords all that much. It draws on a vocabulary of words that are highly relevant to the answer and uses them in proper context. While it does get its explanation wrong, the context of language in the explanation is appropriate.
      It is not spitting back an answer that merely maximizes the occurrences of keywords through a relevance algorithm, it finds a way to explain the answer in simple terms using relevant associated words without throwing in irrelevant information or terms.
      BTW, if you do actually query Google with the same question, you will have to do a lot of reading and interpretation of technical articles to find the actual answer to the question.
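
      For reference, the SHM question quoted above has a clean closed-form answer (a standard first-year result, worked here as a check):

      ```latex
      % Maximum kinetic energy of a mass m in SHM with period T and amplitude A.
      % The speed is greatest at the equilibrium position (x = 0), not at x = A.
      \omega = \frac{2\pi}{T}, \qquad
      v_{\max} = \omega A = \frac{2\pi A}{T}, \qquad
      E_{K,\max} = \tfrac{1}{2} m v_{\max}^{2} = \frac{2\pi^{2} m A^{2}}{T^{2}}
      ```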

    • @MrRaizada
      @MrRaizada 1 year ago

      @@NickSchade All I meant was that EVEN keyword-based search with page-relevance ranking from the '90s (which WAS basically Google in the late '90s) will not give you "yellow is smelly at t=23" (or any meaningless sequence of words), unless for some reason a page with that exact text has been raised too high in PageRank due to a high in-link to out-link ratio. So I am not surprised that a language model trained on practically all the text of the internet and a lot of books is able to avoid such mistakes.
      BTW, if a page has the keywords "kinetic energy", "mass", "period", "harmonic", "oscillating", what is the probability that it also has the keywords "frequency", "displacement", "velocity", "peak", "KE", etc.? I'd say fairly high. Also, given that a lot of internet pages point to such a page (the measure of relevance), it is likely not a random page with those keywords. Of course, Google does a lot more to remove page farms too.
      Now a real test for ChatGPT will be prediction. Can ChatGPT predict something that did not even exist in 2021? That will be fun to see.

  • @bustin1253
    @bustin1253 1 year ago +2

    I love how it answered him so convincingly he felt bad not giving full marks on a question

  • @thesoulacademy5936
    @thesoulacademy5936 1 year ago

    I thought I was the only one doing this. I have created study guides for certain complex topics in physics in fifth-grader language, which is great for helping explain certain components that are difficult to grasp. This was literally my first project: as I was creating a proposal in November last year, I needed to quickly put together study guides that were easy to digest.

  • @arlaken
    @arlaken 1 year ago +5

    For the last example, when ChatGPT is wrong you can tell it what's wrong and ask it to recode with that in mind. It can refine its answer within the context of what's being asked.

  • @geoffreyraleigh1674
    @geoffreyraleigh1674 1 year ago +7

    Not only is this a great upload for the ChatGPT but I learned a lot about what lecturers are looking to get from students. Cheers!

  • @GinoGiotto
    @GinoGiotto 1 year ago +2

    13:40 Ah matplotlib, the library ChatGPT always recommends and that never works lol

  • @Allen-R
    @Allen-R 1 year ago

    4:52 One of the ways it really helps is in speed-cramming something I'm unfamiliar with. It shows some sort of template that someone unfamiliar wouldn't immediately think of, and while it's wrong, the template gives enough of an idea to help correct the wrong numbers and such that it spits out. (At least, that was my experience.)

  • @mittfh
    @mittfh 1 year ago +5

    It would be interesting to see what would happen if the next generation of algorithm incorporated Wolfram Alpha's database, or if it was hooked up to a similarly powerful image recognition and analysis algorithm...

  • @Tagraff
    @Tagraff 1 year ago +3

    It's the wording of the question that needs to be straightforward for ChatGPT to fully capture the task and go forth in computing the result for you. Hence "prompt engineering": we have to be specific about what question we are asking and what sort of details best encapsulate the task.

  • @zeikjt
    @zeikjt 1 year ago

    9:52 Thank you for saying this, it really is the most important part to understand the stuff. Mistakes happen to everyone.

  • @thedave1771
    @thedave1771 1 year ago +8

    I find it incredibly useful as I’m learning a programming language as it can point me in the right direction better than any other resource I’ve found. It writes code that sometimes doesn’t compile and often doesn’t do what it says, but that’s fine when it’s a “you could use this class…” and I’ll figure it out.
    It’s also amazing at “show me how to use function ‘x’” when I just need to see some simpler code.

    • @BrandanLee
      @BrandanLee 1 year ago +2

      Stack Overflow, but pertinent to our use case, yes.

  • @palmaltd
    @palmaltd 1 year ago +18

    How lovely to see Prof. Moriarty back on the channel!! Wonderful episode :)

    • @gaiustesla9324
      @gaiustesla9324 1 year ago

      yeah like seeing a dog strain a log out into your gleeful open gob

  • @ahobimo732
    @ahobimo732 1 year ago +8

    I enjoyed seeing how impressed the professor was as it generated the code. It seems like its coding is one of its greatest strengths.

    • @DefaultFlame
      @DefaultFlame 1 year ago

      It is a modified version of the text-davinci-003 model, which is a modified version of the text-davinci-002 model, which is a modified version of the code-davinci-002 model, which is a code writing model, so not a surprise.

    • @ahobimo732
      @ahobimo732 1 year ago

      @@DefaultFlame That was a winding road you took me down, but we got there in the end. 😅

    • @DefaultFlame
      @DefaultFlame 1 year ago +1

      @@ahobimo732 Basically, they've been making it better at and easier to interact with for humans, and simultaneously "aligning it with human values," which actually means OpenAI/Microsoft's political values, IE hard left globalist. While claiming it's politically neutral, of course. :)
      "Any biases that might arise are unintentional and comes from biases from my training data, blah, blah."
      They have also trained in canned responses to *a lot* of topics, which to a large degree cripples it if you stray from hard science, code, pure philosophy, etc.
      Ask it to talk like a 1920s gangster for the conversation and you can tell they are canned responses trained in rather than as a result of training data bias because they stand out like lighthouses as it is unable to alter the style or much of the phrasing of them.

  • @notmyname327
    @notmyname327 1 year ago +1

    I follow several of Brady's channels, but I haven't been paying too much attention to this one, I think this video will change that. I love the professor's passion and I find this subject fascinating, especially the part where GPT is much better at coding than he thought, but it still didn't quite grasp the physical concepts. Really interesting video.

  • @gonzothegreat1317
    @gonzothegreat1317 1 year ago +2

    This might be the start of Borges' Library of Babel, which is a nightmare.
    It contains everything that can be written. There is a book that contains all the information of the universe, but there is no way to find out where it is, and when one finds it, no way to assess whether it is valuable or nonsense, since the library has ALL possible books.

    • @dimitrispapadimitriou5622
      @dimitrispapadimitriou5622 1 year ago

      That's an interesting point, although it's an exaggeration: in Borges' famous short story, the books contained literally any combination of letters possible. That's closer to some speculative multiverse theories (where all possible values of the constants of Nature are realized, e.g.) or to the Many Worlds interpretation of QM (all possible outcomes of measurements are somewhere "out there") than to the combination of patterns and phrases from popular internet sources that we seem to have in this case.
      But the essence of your comment, i.e. the potential epistemic confusion from all this information of unknown value (except in the case where one is already familiar with a specific subject), is a real issue.

  • @seantiz
    @seantiz 1 year ago +59

    This comment was written by ChatGPT. I love your videos as they are among the best. They are both entertaining and educational with a decent serving of enlightenment. 😀

    • @sixtysymbols
      @sixtysymbols 1 year ago +35

      Cool, thanks!

    • @sixtysymbols
      @sixtysymbols 1 year ago +100

      (That was one of YouTube's pre-written replies I can choose - I think it is the first time I have ever clicked on one.)

    • @seantiz
      @seantiz 1 year ago +14

      @@sixtysymbols too funny. 😅

    • @MeppyMan
      @MeppyMan 1 year ago +24

      It’s AI turtles all the way down.

    • @tombowcutt4908
      @tombowcutt4908 1 year ago +2

      @@sixtysymbols Sadly I do use these in Teams chat. A small part of my soul dies every time :-(

  • @mitchellflori8266
    @mitchellflori8266 1 year ago

    Fun thing to think about: it was correct in saying the velocity is maximum when it is displaced most from equilibrium, similarly to when it's at the equilibrium, just with different directions and assuming it's moving in both directions.

  • @CaioPeixoto1
    @CaioPeixoto1 1 year ago +2

    Nice... never thought I would see something like that in my lifetime, as a physicist and programmer... Wonder what the next 20 years will bring us... Cheers!

  • @lored6811
    @lored6811 1 year ago +11

    If ChatGPT gets a maths backend, things will get a lot more interesting. It would have to be combined with a LaTeX parser, else there is too much ambiguity in the English language.
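
    A LaTeX parser for the maths side already exists in the Python ecosystem. For example, SymPy can turn LaTeX into an evaluable expression (a sketch; parse_latex needs SymPy's optional ANTLR backend installed):

    ```python
    from sympy import symbols
    from sympy.parsing.latex import parse_latex  # needs antlr4-python3-runtime

    # Parse a LaTeX formula into a symbolic expression, then evaluate it.
    expr = parse_latex(r"\frac{1}{2} m v^{2}")  # -> m*v**2/2
    m, v = symbols("m v")
    print(expr.subs({m: 3, v: 4}))              # 24, e.g. KE of 3 kg at 4 m/s
    ```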

    • @mrnarason
      @mrnarason 1 year ago +3

      Isn't ChatGPT just a language model? If it could understand and produce calculations or proofs, wouldn't it be more like Lean?

    • @fredrikweme1387
      @fredrikweme1387 1 year ago +2

      Someone already made a functioning ChatGPT version that uses the Wolfram Alpha API when it needs math help.
      Don't have a link, but there was a fairly recent YouTube video on it.

    • @AnotherFancyUser
      @AnotherFancyUser 1 year ago

      @@mrnarason They (Microsoft, or whoever the new owner is) can make pre-trained models for specific topics, which would be more accurate. But yes, it is a language model.

    • @Convergant
      @Convergant 1 year ago +3

      @@mrnarason It is a language model, but it is surprisingly capable of doing these problems on its own. As an example, (after the due date) I gave it a question from my linear maths coursework:
      (a) State the Rank-Nullity Theorem.
      (b) Does there exist a linear transformation T: R^3 -> R^3 such that im(T) = span{(1,1,0), (1,0,-1)} and ker(T) = span{(1,1,1), (1,1,-1)}?
      (c) Does there exist a linear transformation P: Q^19 -> Q^19 such that rank(P) = null(P)?
      Its answers to all 3 were perfect. Obviously, it could parrot a well-known theorem like the RNT, but it was able to correctly identify that neither linear transformation could exist, using the RNT, and explain why - essentially, complete proofs. This obviously isn't the same as being able to prove/disprove any conjecture in general (in fact, this is not possible in a finite amount of time), but I was still impressed.
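
      For anyone checking along, the rank-nullity argument really is a one-liner in both cases (standard result, sketched here):

      ```latex
      % Rank-nullity: for a linear map T defined on V with \dim V = n,
      \operatorname{rank}(T) + \operatorname{null}(T) = \dim V
      % (b) Each spanning set is linearly independent, so
      %     rank(T) + null(T) = 2 + 2 = 4 \neq 3 = \dim \mathbb{R}^3: no such T.
      % (c) rank(P) = null(P) would force 2\,\operatorname{rank}(P) = 19,
      %     impossible since 19 is odd: no such P.
      ```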

    • @NuclearCraftMod
      @NuclearCraftMod 1 year ago +1

      @@Convergant It has about 50 times more parameters than there are words in the entirety of Wikipedia, and in fact the entirety of Wikipedia, or at least most of it, made up about 1% of its training data. There is a lot of linear algebra on Wikipedia, and it was probably trained on a heck of a lot more. Of course, that goes for every subject, but I do think the sheer amount of computation involved in the training process is beyond reasonable intuition :P

  • @omrialkabetz5602
    @omrialkabetz5602 1 year ago +3

    What's interesting about ChatGPT is that it can correctly answer some advanced undergraduate-level questions in quantum mechanics, yet it fails to multiply two-digit numbers correctly using long multiplication. At the same time, it can write computer code in any language that applies the algorithm of long multiplication correctly. It will try to produce a reasonable-sounding answer to anything you ask it to do, and might get about 70%-80% of the answers correct, but it will screw up in places where almost no human would make a mistake.
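
    Long multiplication, the algorithm mentioned above, really is only a few lines of code; a digit-by-digit sketch (non-negative integers only) for illustration:

    ```python
    def long_multiply(x: int, y: int) -> int:
        """Schoolbook long multiplication over decimal digits."""
        xd = [int(d) for d in str(x)][::-1]  # least-significant digit first
        yd = [int(d) for d in str(y)][::-1]
        acc = [0] * (len(xd) + len(yd))      # room for all partial products
        for i, a in enumerate(xd):
            carry = 0
            for j, b in enumerate(yd):
                total = acc[i + j] + a * b + carry
                acc[i + j] = total % 10      # keep one digit in place
                carry = total // 10          # carry the rest leftwards
            acc[i + len(yd)] += carry
        return int("".join(map(str, acc[::-1])))

    assert long_multiply(47, 86) == 47 * 86  # 4042
    ```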

    • @xy4489
      @xy4489 1 year ago +1

      They will give it access to calculators, interpreters, compilers, etc. soon enough. Then it will get all the math correct.
      Otherwise the comparison is not fair: human + computer vs ChatGPT without a computer.

    • @maythesciencebewithyou
      @maythesciencebewithyou 1 year ago

      Most humans are also quite bad at calculating stuff in their heads.
      Imagine we had an AI that could answer everything flawlessly. At that point, would you start thinking it is finally on par with humans? Because at that point it would already be superior to humans. Real humans make mistakes, humans don't remember everything correctly, humans make the wrong connections, humans misspell, humans misspeak, humans believe in all sorts of idiotic things, humans are cocksure of their beliefs, humans have a hard time admitting they don't know, humans make up stuff, humans lie, and individual humans only possess limited knowledge.
      A fallible machine is more human than an infallible one.

    • @jcm2606
      @jcm2606 1 year ago

      It's because it's ultimately just predicting what the next word or phrase would be in the sentence, not that it's actually performing the calculations. It's likely only succeeding in certain calculations and deductions because there's enough data points within the data set it was trained on to form strong connections between those words/phrases/concepts that come through in the predictions it makes; it was trained on data that largely stated that _y_ follows _x,_ so when it sees an _x_ it naturally predicts that a _y_ should follow. Add more words/phrases/concepts leading up to _x_ that also end with _y,_ and it becomes that much more likely that _y_ will follow _x._ Conversely, if there was just a few data points within the data set stating that _z_ followed _x_ instead, then it becomes that much more unlikely that _z_ will follow _x_ even if you gave it a prompt that made it obvious that _z_ would follow _x,_ because it was trained on data stating that _y_ follows _x._ If you specialised the data set used during training or somehow gave it access to external tools that were specifically designed for these problems that it cannot reliably solve (and you trained it so that it would reliably use these external tools when it needs to) then you'd likely see the accuracy improve significantly.

  • @OblateBede
    @OblateBede 1 year ago

    I had it solving graduate-level classical mechanics problems from Goldstein's book. It was spot on in about half the cases I tried. Frequently, a nudge in the right direction was required. In some cases, I'd say 10-20%, no amount of nudging would generate anything useful, or even, at times, comprehensible. Also, it understands LaTeX notation very well. It can even handle, evidently, a kind of "quasi-LaTeX" combination of LaTeX and straight text.

  • @beefmaster7
    @beefmaster7 1 year ago

    I love the frame centering on the paper at 5:26.

  • @smaakjeks
    @smaakjeks 1 year ago +4

    I tested ChatGPT a while ago and it was terrible at understanding the rules of a word game (e.g. name some things, but not things such as this or that; first you, then me, one at a time), and it didn't apply the rules correctly (e.g. it named some mammals, even though one of the rules was not to name mammals). A few weeks later I checked in again, and there are big improvements.
    It's learning fast! Next time you ask it about physics it will probably be better.

  • @dielfonelletab8711
    @dielfonelletab8711 1 year ago +7

    As a software engineer I've found chatgpt genuinely very useful in my day to day work. I guess it's lucky I'm a software engineer and not a physicist!

    • @aidandavis_
      @aidandavis_ 1 year ago

      better hope AI doesn't put you out of a job!

    • @tinto278
      @tinto278 1 year ago

      commi profile pic?

  • @staberas
    @staberas 1 year ago +1

    3:00 It actually has a limited capacity to OCR images, but you have to link it the image.

  • @johnadriandodge
    @johnadriandodge 1 year ago

    I just subscribed to Sixty Symbols and Computerphile. I hope you all have a great rest of the weekend.

  • @LucasRodmo
    @LucasRodmo 1 year ago +6

    There's a thing that I noticed people don't try: when it gets a wrong answer, you can confront it and it will make a whole new attempt, and sometimes get the right answer after that. It's scary: it analyzes its own answer and tries to output a more reasonable one.

  • @Lashb1ade
    @Lashb1ade 1 year ago +3

    My Quantum Mechanics teacher failed I guess... Thanks for explaining why I didn't understand anything that semester.

  • @gyan1010
    @gyan1010 1 year ago

    Very cool stuff. I'm not great at the physics stuff anymore, but if as a student you did know the last question needed certain limits, you can tell ChatGPT to adjust the code however you like and it will modify what it gave you before.

  • @SuperZardo
    @SuperZardo 1 year ago +1

    Inspired by your video, I just tasked the ChatGPT bot with the following:
    A physics question: let the friction coefficient of a leather book wrapping (the outer surface of the cover) be 0.33. There is a stack of books on a table; the surface of the table is horizontal. All the books have the same type of cover and are assumed to have the same friction coefficient. The stack has 5 books, lying flat on top of each other. The topmost book weighs 2 kg, the second topmost book weighs 2.5 kg, the third topmost book weighs 1.5 kg, the fourth topmost book weighs 3.3 kg, and the book lowest in the stack weighs 4 kg. How much horizontally applied force would need to be applied to the third topmost book to make it move horizontally?
    "The horizontally applied force needed to make the third topmost book move horizontally can be calculated using the formula:
    F = μR
    where F is the force required, μ is the coefficient of friction, and R is the normal force.
    The normal force is the weight of all the books above the third topmost book, which is 1.5 kg + 3.3 kg + 4 kg = 8.8 kg.
    So, the force required to make the third topmost book move horizontally is:
    F = μR = 0.33 * 8.8 kg = 2.9 kgf or N.
    So, 2.9 N force is needed to be applied horizontally to the third topmost book to make it move horizontally."
    Could you specify clearly the weight of each book, from topmost to lowest in the stack?
    "Sure, the weight of each book in the stack, from topmost to lowest, is:
    Topmost book: 2 kg
    Second topmost book: 2.5 kg
    Third topmost book: 1.5 kg
    Fourth topmost book: 3.3 kg
    Lowest book in the stack: 4 kg
    So, the weight of all the books above the third topmost book is: 1.5 kg + 3.3 kg + 4 kg = 8.8 kg."
    Why is the weight of all the books above the third topmost book 8.8 kg? I do not understand how you reach that conclusion, even with your calculation shown.
    "I apologize for the confusion, I made an error in my previous answer. The weight of all the books above the third topmost book is actually (2 kg + 2.5 kg) = 4.5 kg.
    So the force required to make the third topmost book move horizontally is:
    F = μR = 0.33 * 4.5 kg = 1.485 kgf or N.
    So, 1.485 N force is needed to be applied horizontally to the third topmost book to make it move horizontally."
    Where would the friction be happening when the third topmost book moves?
    "The friction force would be acting between the third topmost book and the surface of the table.
    When an external force is applied to the third topmost book to make it move horizontally, the force of friction between the book and the table acts in the opposite direction to the applied force. The coefficient of friction (μ) represents the ratio of the force of friction to the normal force, which is the force pressing the two surfaces together. In this case, the normal force is the weight of the third topmost book (1.5 kg) that is pushing down on the table.
    The force required to overcome the friction and make the third topmost book move is the force of friction. This force is determined by the coefficient of friction of the material of the book cover and the table surface, multiplied by the normal force."
    Does the third topmost book touch the table?
    "Yes, the third topmost book would be in contact with the table surface. In order to calculate the force of friction, there must be contact between the two surfaces (the book cover and the table surface) that are in motion relative to each other. The friction force acts between the two surfaces where they are in contact and it is dependent on the properties of those surfaces, such as their roughness, and their coefficient of friction. In this scenario, the third topmost book would be in direct contact with the table surface and the force of friction between them would be determined by the coefficient of friction of the material of the book cover and the table surface, multiplied by the normal force."
    Is there a book below the third topmost book?
    "In the scenario described, it is not specified whether there is a book below the third topmost book or not. Based on the information provided, it is only specified that there is a stack of 5 books with a specific weight for each one of them, and the third topmost book is the one for which we need to calculate the horizontally applied force required to make it move horizontally. In this sense, it is not specified whether there is another book below it or not."
    What does the sentence "The stack has 5 books, lying flat on top of each other" mean?
    "The sentence "The stack has 5 books, lying flat on top of each other" means that there are 5 books that are placed one on top of the other. They are all in the same position, meaning that the books are placed flat on top of each other, with the cover of each book facing up and the spine facing down. This sentence is just describing the arrangement of the books, it does not give any information about whether there is a book below the third topmost book or not."

    If there is a fourth topmost book, where would it be in the stack?
    "In the scenario described, the fourth topmost book would be immediately below the third topmost book in the stack. The stack of books is arranged with the heaviest book at the bottom and the lightest book on top.
    The weight of the books in the stack is described as follows:
    The topmost book weights 2 kg
    The second topmost book weights 2.5 kg
    The third topmost book weights 1.5 kg
    The fourth topmost book weights 3.3 kg
    The book lowest in the stack weights 4 kg
    So, the fourth topmost book would be the one with a weight of 3.3 kg, and it would be immediately below the third topmost book, which is the one with a weight of 1.5 kg, in the stack."
    Yes that's correct. However, coming back to my question whether the third topmost book touches the table or not, is there any need for correction of your former assertion?
    "I apologize, my previous statement that the third topmost book would be in contact with the table surface is incorrect.
    Since the scenario describes a stack of books lying flat on top of each other, the third topmost book is not in contact with the table surface, but with the fourth topmost book. Therefore, the force required to move the third topmost book horizontally would be the force required to overcome the friction between the third and fourth topmost books, rather than the force required to overcome the friction between the third topmost book and the table surface.
    The coefficient of friction between the covers of the books and the normal force acting on the third topmost book would still be used to calculate the force required to move the third topmost book, but it would be different from the force required to move it if it were in contact with the table surface."
    The friction coefficient of the table surface and a leather cover is missing in the task. Let's assume it is 0.5. Now, if a horizontal force pushes against the third topmost book, would the whole stack move or only the upper books, and what amount of force would be needed?
    "If a horizontal force is applied to the third topmost book, the whole stack would move. The force required to move the third topmost book would be the sum of the force required to overcome the friction between the third and fourth topmost books, and the force required to overcome the friction between the table surface and the bottom book.
    The force required to overcome the friction between the third and fourth topmost books can be calculated using the formula:
    F = μR
    where F is the force required, μ is the coefficient of friction between the books covers, and R is the normal force acting on the third topmost book. The normal force is the weight of the third topmost book (1.5 kg).
    So, the force required to overcome the friction between the third and fourth topmost books is:
    F = μR = 0.5 * 1.5 kg = 0.75 kgf or N.
    The force required to overcome the friction between the table surface and the bottom book can be calculated using the same formula, but this time the normal force is the weight of all the books (2 kg + 2.5 kg + 1.5 kg + 3.3 kg + 4 kg = 13 kg)
    So, the force required to overcome the friction between the table surface and the bottom book is:
    F = μR = 0.5 * 13 kg = 6.5 kgf or N.
    Therefore, the total force required to move the third topmost book and the stack is 6.5 N + 0.75 N = 7.25 N."
    *I had almost given up after this last answer* - see the following comment.

    • @SuperZardo
      @SuperZardo 1 year ago

      To be clear and in order to avoid any misunderstanding, the friction coefficient of a book cover rubbing against another book cover is assumed to be 0.33 and the friction coefficient of a book cover rubbing against the table is assumed to be 0.5 as previously stated.
      "I apologize for any confusion, you are correct. To be clear, the friction coefficient of a book cover rubbing against another book cover is assumed to be 0.33 and the friction coefficient of a book cover rubbing against the table is assumed to be 0.5 as previously stated.
      To summarize, to move the third topmost book and the whole stack horizontally, a force of 7.25 N is needed. This force is the sum of two different forces, one to overcome the friction between the third and fourth topmost books (0.75 N) and the other to overcome the friction between the bottom book and the table surface (6.5 N)."
      I am sorry, but in my opinion this should be calculated differently. In order to know whether a horizontal force applied against the third topmost book will move only the topmost three books or the whole stack of 5 books, the forces needed in the two cases should be calculated. But as a rule of thumb, since the friction coefficient of a book cover rubbing against another book cover is actually lower than the friction coefficient of a book cover rubbing against the surface of the table, it is impossible to move the whole stack when the force is applied horizontally against the third topmost book; only the three topmost books will move. If the friction coefficient of the cover rubbing against the table were very low in comparison, it is possible that the whole stack would move, and it would therefore not be possible to separate the three topmost books using a horizontal force applied to the third topmost book.
      "You are correct that in order to know whether a horizontal force applied to the third topmost book will move only the topmost three books or the whole stack, it would be necessary to calculate the forces required in both cases.
      As you pointed out, the friction coefficient of a book cover rubbing against another book cover is typically lower than the friction coefficient of a book cover rubbing against a table surface. Therefore, it is likely that only the topmost three books would move when a horizontal force is applied to the third topmost book. The bottom two books would likely not move because the friction between the books covers is high enough to keep them in place.
      In case the friction coefficient of the cover rubbing against table is very low, the whole stack would move and it would be difficult to separate the topmost three books from the rest of the stack.
      Thank you for bringing this to my attention and for providing further details and insights on this scenario. I apologize for any confusion caused by my previous answers."
      Now, in order to calculate the force needed to overcome the friction that keeps the third topmost book in place, which weights should be considered?
      "To calculate the force required to overcome the friction that keeps the third topmost book in place, the weight of the third topmost book and the fourth topmost book should be considered.
      The force required to overcome the friction between the third topmost book and the fourth topmost book can be calculated using the formula:
      F = μR
      Where F is the force required, μ is the coefficient of friction between the book covers, and R is the normal force acting on the third topmost book.
      The normal force acting on the third topmost book is the weight of the third topmost book (1.5 kg) which is pushing down on the fourth topmost book.
      So, the force required to overcome the friction between the third topmost book and the fourth topmost book is:
      F = μR = 0.33 * 1.5 kg = 0.495 kgf or N
      Therefore, the force required to overcome the friction that keeps the third topmost book in place is 0.495 N.
      It is important to note that in this scenario, as mentioned before, it is very likely that only the topmost three books would move when a horizontal force is applied to the third topmost book. The bottom two books would likely not move because the friction between the books covers is high enough to keep them in place."
      You have correctly identified that the friction between the book cover of the third topmost book (lower side) and the fourth topmost book (upper side) matters. However, your calculation of the weight pushing down on that contact surface is wrong. Do you know why?
      "I apologize, you are correct that the weight used in my previous calculation is incorrect. The weight of the third topmost book (1.5 kg) would push down on the fourth topmost book, but this is not the weight that should be used to calculate the normal force acting on the third topmost book.
      The normal force is the force that presses the two surfaces together, in this case the third and fourth topmost book. The normal force on an object is equal to the weight of the object and the force from any external object acting on it. In this case, the normal force acting on the third topmost book is the weight of the third topmost book (1.5 kg) plus the weight of the second topmost book (2.5 kg) plus the weight of the topmost book (2 kg).
      So, the normal force acting on the third topmost book is 1.5 kg + 2.5 kg + 2 kg = 6 kg.
      So, the force required to overcome the friction between the third topmost book and the fourth topmost book is:
      F = μR = 0.33 * 6 kg = 1.98 kgf or N.
      I apologize for any confusion caused by my previous answer, and thank you for bringing this to my attention."
      *This last calculation is correct, thank you.*
      Conclusion: it is necessary to walk it through the calculation steps; otherwise it will spit out nonsense.
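      (If you want to check the numbers: even that final "correct" answer silently treats kilograms as newtons. The friction force is F = μmg, so with g ≈ 9.81 m/s² it is about 19.4 N, not 1.98 N. A quick Python sketch of the comparison the exercise was driving at, using the masses and coefficients stated above:)

        g = 9.81                             # m/s^2
        masses = [2.0, 2.5, 1.5, 3.3, 4.0]   # kg, topmost to lowest
        mu_book, mu_table = 0.33, 0.5

        # Force to slide the top three books over the fourth book's cover:
        f_top3 = mu_book * sum(masses[:3]) * g    # 0.33 * 6.0 kg * g ≈ 19.4 N

        # Force to slide the whole stack over the table:
        f_stack = mu_table * sum(masses) * g      # 0.5 * 13.3 kg * g ≈ 65.2 N

        # f_top3 < f_stack, so a horizontal push on the third book slides the
        # top three books and leaves the bottom two in place, as argued above.
        print(f"{f_top3:.1f} N to move the top three vs {f_stack:.1f} N to move the stack")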

    • @dielaughing73
      @dielaughing73 1 year ago +1

      @@SuperZardo So, will it know that the next time someone asks a similar question? Does it learn in that way?
      Great exercise, by the way. Thanks for sharing all of that

    • @SuperZardo
      @SuperZardo 1 year ago +1

      @@dielaughing73 I discovered that it does not learn anything permanently, except maybe if you provide written feedback using the "thumb" buttons...
      However, you can help it by providing more context and a partial or full pathway to the solution.
      There is a new version out now, as of the 30th of January 2023, which only holds 2048 tokens in memory.
      If the bot appears to learn quickly during a chat session, that is only because it takes into account previous answers and follow-up questions, which remain available as long as the chat history does not exceed those 2048 tokens. I am talking about the free research version here; it might be possible to set the number of tokens to a maximum of 4000 in a (future) paid version, as is already possible with Davinci-003.
      Because the context of the chat history is used, it appears that the bot learns quickly and still has the information about the initial task memorized.
      But it is really only taking into account the corrections I made or the questions I added to clarify the problem, thereby getting more "reflection time" and allowing for some "auto-correction" with each user prompt.
      By the way, the bot seems never to double-check its answers (and it sometimes makes quite grotesque calculation errors), though it can be instructed to do so.
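      (A rough sketch of why that feels like learning, assuming a crude word count as a stand-in for the real tokenizer - the point is only that "memory" is a sliding window over the transcript, not stored knowledge:)

        MAX_TOKENS = 2048   # the budget mentioned above

        def build_prompt(history, new_message):
            """Re-send the visible chat history every turn, oldest first."""
            history = history + [new_message]
            # Crudely count tokens as words; drop the oldest turns until we fit.
            while sum(len(turn.split()) for turn in history) > MAX_TOKENS:
                history.pop(0)   # earlier corrections eventually fall out
            return "\n".join(history)

      Once a correction scrolls out of that window, the bot behaves as if it was never made.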

    • @dielaughing73
      @dielaughing73 1 year ago

      @@SuperZardo Thanks, really appreciate the response. I'll have to have a go with it sometime. It really looks like these bots will be very powerful soon, especially if they learn to learn from experience.

  • @silverXnoise
    @silverXnoise 1 year ago +3

    Awesome to see Prof. Moriarty again!

  • @tonechild5929
    @tonechild5929 1 year ago +7

    I played around with ChatGPT, and after a few hours I felt like it would be faster to just write it myself. However, ChatGPT is great for breaking past any kind of "writer's block", and if I get stuck on anything it's great to bounce ideas off of.

    • @BrandanLee
      @BrandanLee 1 year ago

      It's excellent for thinking-out-loud, "how would I express this?" kinds of brainstorming too - places where the logic just isn't in code yet.

  • @jeffsteyn7174
    @jeffsteyn7174 1 year ago

    Try using prompt engineering; it will be more focused. For example, tell it "act as a physics teacher" and then ask it the question. I asked it something relatively simple: I have 2 m of wire. I want it cut into 450 mm sections. How many sections can I get, and what will the length of the final section be?
    It got the answer hopelessly wrong. Then I asked the same question prefixed with "act as a maths teacher". It explained every step and got the answer correct.
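    (For reference, the arithmetic it should produce: 2000 mm ÷ 450 mm gives 4 full sections, with 2000 - 4 × 450 = 200 mm left over as the final piece.)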

  • @Veptis
    @Veptis 6 months ago +2

    multimodal instruction models are available. Time to revisit with all those diagrams?

  • @op4000exe
    @op4000exe 1 year ago +16

    I'm going to be honest: considering how far these AIs have come over a ridiculously short timespan, I wouldn't be surprised if there soon existed an AI that could both explain the material properly and understand the physics and mathematics well enough to fool even someone with a very high level of understanding of the field.
    It's equally horrifying and amazing to watch, I'll be honest.

    • @oldnelson4298
      @oldnelson4298 1 year ago +8

      I think genuine understanding is a long way off, but it may be able to trick experts into thinking it understands - it already can occasionally!

    • @PinkeySuavo
      @PinkeySuavo 1 year ago +3

      I believe that in the (quite near) future, some discoveries/inventions will be made by AI - like solving unsolved maths problems and so on.

    • @nao_chan_
      @nao_chan_ 1 year ago

      We're currently on GPT-3, but GPT-4 is coming; Sam Altman has given some interviews talking about it.
      I still think we're 10-15 years from truly capable AI though.

    • @turolretar
      @turolretar 1 year ago +1

      Maybe I’m wrong, but I really don’t think AIs will get anywhere close to grown human intelligence with the current approach. New AIs will be broader, just not smarter. But it’s a step in the right direction.

    • @lamsmiley1944
      @lamsmiley1944 1 year ago +3

      @@nao_chan_ A lot can happen in 15 years; remember, the iPhone is only 15 years old. The rate of technological advance is increasing exponentially.

  • @shayneoneill1506
    @shayneoneill1506 1 year ago +4

    I'm fascinated by the mission statement thing for a specific reason. I was on the Academic Council at Murdoch Uni in Western Australia about 15 years ago (as the representative of the Student Union), and one thing I discovered is that the Academic Council had been arguing about the definition of "interdisciplinarity" in the uni's mission statement almost since the uni first opened. Like, to the point of screaming matches between the chair of philosophy and the chair of English and literature. It was FAR from resolved by the time I left, and chances are it's STILL not resolved. And this thing has implications, because it's actually in the Murdoch University Act of the WA government, so it has legal obligations attached that nobody seemed able to work out. Maybe they should ask ChatGPT.

  • @Finkelfunk
    @Finkelfunk 1 year ago

    I asked ChatGPT the other day about series convergence and it said that the harmonic series (the sum of 1/n) converges.
    That pretty much sums up why ChatGPT is useless when confronted with mildly complex questions.

  • @calebderochea8304
    @calebderochea8304 1 year ago

    I find that when you have it re-answer your question, it sometimes provides you with a different answer. After a few tries you should get an answer that is either more or less sensible as an answer to the physics question.

  • @GilesBathgate
    @GilesBathgate 1 year ago +10

    I guess the internet (and hence ChatGPT's training data) is full of mission statements and code, but answers to physics exams, not so much ;)

    • @peterisawesomeplease
      @peterisawesomeplease 1 year ago

      I think there are plenty of answers to physics exams. The main issue is that you can parameterize questions infinitely, which really throws off something that is mostly a search engine. It will find the closest question, but even a slight change to a popular question will have it give you the answer to the popular question, not the question you asked.

    • @GilesBathgate
      @GilesBathgate 1 year ago +1

      If you think it's like a search engine, you probably haven't played with it enough. For me it's more like using a search engine for 10 minutes, reading various posts that contain answers to similar questions, and then forming a coherent paragraph based on the results - but it does it all in a fraction of a second, and by recalling "the internet" from memory :)

    • @adfaklsdjf
      @adfaklsdjf 1 year ago +1

      It's not a search engine; it doesn't contain the text it was trained on. When it's answering a question, it's not scanning through its training data. Rather, it has mapped patterns observed in the data set. The model itself is no larger after training than before training.

    • @peterisawesomeplease
      @peterisawesomeplease 1 year ago

      @@GilesBathgate I played with it quite a bit and it did feel a lot like a search engine that combines results from different websites.
      It clearly can go quite a bit beyond this. You can quite easily have it program things that no one has programmed before. But this often relies on being so hyper-specific that it feels like it's just combining Stack Overflow answers in a clever way.
      The speed is unimpressive to me; the model is pretrained.

    • @GilesBathgate
      @GilesBathgate 1 year ago

      @@peterisawesomeplease It does feel like that, but try talking to it about something completely made up, and it will also give you a comprehensible answer. Invent your own maths symbol, or some characters in a story that you've invented (but not something published on the internet).

  • @StormWolf01
    @StormWolf01 1 year ago +3

    I majored in computer science years ago. I use ChatGPT every day. It's great for generating boilerplate code, but also for quickly generating code examples of things I'm not familiar with. For instance, I wanted to know about SVG images: I just ask it questions, and it generates the code. It's always an approximation of what I want, but I just tweak it a bit and it works. I also use it to ask a lot of technical questions, and as a dictionary and helper for language studies - you can ask it a question and have it answer in another language, for instance. I mean, it's great all around. I haven't really used Google or Stack Overflow in the last 3 weeks.

    • @nao_chan_
      @nao_chan_ 1 year ago

      I wouldn't get addicted to it. You're essentially like a homeless person feeding yourself with free samples from the supermarket; they can pull the free-trial stuff at any moment.
      If a commercial version ever appears, it will likely cost a LOT of money to use. The servers to run this thing aren't cheap.

  • @beatricechauvel8237
    @beatricechauvel8237 8 months ago

    Great video. Thank you.

  • @phizc
    @phizc 1 year ago

    I think its outputs could be a great teaching tool, both for critical thinking and for the subject itself.
    I wouldn't know how to answer the oscillating-particle question myself, but even I could immediately see that the first paragraph was completely incorrect.
    I have no idea if the v = (2*PI*A)/T part is correct, but I could easily check that with a more reputable source.
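    (For what it's worth, that formula does check out as the particle's maximum speed; a one-line derivation in LaTeX, starting from the standard SHM displacement:)

      x(t) = A \sin\left(\frac{2\pi t}{T}\right)
      \quad\Rightarrow\quad
      v(t) = \frac{dx}{dt} = \frac{2\pi A}{T} \cos\left(\frac{2\pi t}{T}\right)
      \quad\Rightarrow\quad
      v_{\max} = \frac{2\pi A}{T}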

  • @toddboothbee1361
    @toddboothbee1361 1 year ago +15

    Hello. Can someone blind become a physicist/physics student? If so, what accommodations are made for them regarding interpreting graphs and diagrams? Could those accommodations be used with ChatGPT?

    • @tordjarv3802
      @tordjarv3802 1 year ago +2

      That's an interesting question. I would think something similar to braille could let a blind person feel the diagrams with their hands, but since ChatGPT doesn't have hands, it can't do that.

    • @tordjarv3802
      @tordjarv3802 1 year ago +2

      Furthermore, there are several blind physicists. After just a quick search I even found a blind astronomer.

    • @toddboothbee1361
      @toddboothbee1361 1 year ago +2

      @@tordjarv3802 Some programs that teach blind physics students forgo the diagrams altogether and use language instead. Some computers get around the difficult "visual" problems by translating diagrams into equations, or even into computer code that would render a diagram, in which case hands aren't required. It's kind of like giving an IQ test to a blind person: you have to leave out the colored blocks, for example, if you want a fair assessment. Perhaps it would be a similar situation with Chat?

    • @TheFulcrum2000
      @TheFulcrum2000 1 year ago +5

      I put this exact question into ChatGPT:
      Human: Can someone blind become a physicist/physics student? If so, what accommodations are made for them regarding interpreting graphs and diagrams? Could those accommodations be used with ChatGPT?
      AI: Absolutely. Blind individuals can become physics students and there are a number of accommodations that can be made to help them interpret graphs and diagrams. These accommodations may include using tactile graphics, providing verbal descriptions of images/diagrams, or utilizing assistive technology like screen readers or magnification software. Additionally, ChatGPT can also be adapted for use with these same types of accommodations.

    • @DEFCON5307
      @DEFCON5307 1 year ago +1

      I do not know exactly how this works for blind people, but for me personally, imagining the system (just its general properties, not in detail - sort of like dreaming?) goes a long way. If there is a way to give a blind person that initial idea of what certain systems "feel" like, then yes, it might be possible. Hard, for sure, though.

  • @protocol6
    @protocol6 1 year ago +3

    Try asking it to write a paper for a specific journal on a specific topic in the style of a specific prolific author using LaTeX. The results can be fun. Don't forget to ask it to continue if it stops abruptly.

  • @hanswoast7
    @hanswoast7 1 year ago +1

    6:55 No, I did not know what "A level" means since I am from Germany and not familiar with this system. Thanks for explaining / clarifying! :)

  • @itskittyme
    @itskittyme 1 year ago +1

    11:30 One fundamental detail, though: you aren't using ChatGPT correctly.
    It is important to realize it's a prediction machine, trained to predict the answer that "most" people would give - not the correct answer.
    If you want a correct answer, rephrase your question as:
    "Act as a highly intelligent expert in physics", followed by the question;
    then it will give you the correct answer.
    I did this for your question and it corrected the usage of the word "any" in your question, like you were hoping.

  • @methodof3
    @methodof3 1 year ago +6

    I think what's being downplayed is the amazing role AI will play in research (yes, high-level academic research), both in making writing easier, in performing reviews of specific topics in the literature, and as an aid in data analysis. Word has spellcheck; soon, we will have ChatGPTA

  • @colourofmage
    @colourofmage 1 year ago +3

    Might be worth retrying these questions! The update from 30th Jan has improved factuality and mathematical functionality

    • @ErikDJ123
      @ErikDJ123 1 year ago

      Still can't count words in a sentence.

    • @colourofmage
      @colourofmage 1 year ago

      @@ErikDJ123 Yeah, I tested a simple equation and it got it wrong 🤷‍♂️ Maybe wait for GPT-4 or for Google's version to be released. It's a matter of when, not if, I think.

  • @BreathingCells
    @BreathingCells 1 year ago +1

    At 4:30 -- "The reasoning is correct, but it lost a factor of ten." I ran into similar errors with legal reasoning.
    It made a sound legal argument, citing relevant cases, but it flipped the valence on the stated outcome:
    It associated an illegal fact pattern with lawful action. It makes authoritative statements without a clue.
    11:39 -- Like my *second* question regarding a *social* scientist; a total failure of facts /and/ reasoning.

  • @hplovecraftmacncheese
    @hplovecraftmacncheese 1 year ago

    I'm a C#/Unity developer and I sometimes use ChatGPT to find quick solutions. The code is sometimes flawed, but it can serve as a quick start from which to get something up and running.

  • @jessstuart7495
    @jessstuart7495 1 year ago +13

    As a perfectionist, I've learned that I need to strive for mediocrity on some things.

    • @marcelcoetzee48
      @marcelcoetzee48 1 year ago

      This is a profound statement

    • @vast634
      @vast634 1 year ago

      When handing in AI-generated homework, it's better for it not to be perfect: it draws less suspicion.

  • @t_ylr
    @t_ylr 1 year ago +7

    I'm imagining ChatGPT being a student in Prof Moriarty's class lol

    • @LuisBrudna
      @LuisBrudna 1 year ago +1

      He would drop the course the following week. heheh

    • @Evan490BC
      @Evan490BC 1 year ago

      Or even the Prof in Prof Moriarty's class...

    • @danceswithdirt7197
      @danceswithdirt7197 1 year ago +2

      It would fail!

  • @physikokonomie7511
    @physikokonomie7511 7 months ago

    This is really interesting. I've had similar experiences with ChatGPT. I tried to use it as an assistant for a graph theory problem, and it was incapable of understanding what a connection between two nodes is. I tried to explain this to ChatGPT via a matrix environment used as a rudimentary drawing sheet, but it failed to draw the correct conclusions from the rules I set when I asked it to decide whether two given nodes are connected.
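    (For comparison, the task itself is a few lines of code; a minimal Python sketch, assuming an adjacency-matrix representation like the one described, where adj[i][j] = 1 means nodes i and j share an edge:)

      from collections import deque

      def connected(adj, a, b):
          """Breadth-first search over an adjacency matrix: is there a path from a to b?"""
          seen, queue = {a}, deque([a])
          while queue:
              node = queue.popleft()
              if node == b:
                  return True
              for nxt, edge in enumerate(adj[node]):
                  if edge and nxt not in seen:
                      seen.add(nxt)
                      queue.append(nxt)
          return False

      # Nodes 0-1-2 form a path; node 3 is isolated.
      adj = [[0, 1, 0, 0],
             [1, 0, 1, 0],
             [0, 1, 0, 0],
             [0, 0, 0, 0]]
      print(connected(adj, 0, 2))   # True
      print(connected(adj, 0, 3))   # False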

  • @Baleur
    @Baleur 1 year ago +1

    2:50 It actually does...
    You can DESCRIBE the diagram to it, then say "using the diagram previously described, tell me why X".
    It can do incredible things; it's more about how creative you are in asking the question.

  • @greedowins2917
    @greedowins2917 1 year ago +2

    I don't give the professor full marks, as he is unwilling to consider that the harmonic motion question was a bad one, or that the chatbot got it right for different (but not necessarily invalid) reasons.

    • @thequantumworld6960
      @thequantumworld6960 1 year ago +1

      Its reasoning re. simple harmonic motion is invalid, however, unless we completely redefine what is meant by the equilibrium position. And if we're free to completely arbitrarily redefine terms/language then ChatGPT could be said to always be correct!
      Philip (speaking in video)

  • @invisibledave
    @invisibledave 1 year ago +8

    I had 2 years of Physics in college and I couldn't even answer those middle school questions. Physics always went over my head.

  • @kinggrantking
    @kinggrantking 1 year ago +1

    Damn. You can really tell at the end, with the coding example, that he's realized this thing is out-of-control smart. Even if a student couldn't simply copy-paste the answer, their days of doing the grunt work are effectively over. As long as you have the knowledge, you can simply have GPT write an answer and check its work, rather than write every line of code yourself.

  • @headdie89
    @headdie89 1 year ago

    Great video :) and thanks, Prof. Moriarty

  • @DeeP_BosE
    @DeeP_BosE 1 year ago

    The best use of GPT-3 I've found is sorting data based on filters or parameters. The more massive and obscure the data scattered across the web, the more it shines.

  • @rob86353
    @rob86353 1 year ago +2

    What I find really interesting is when you point out mistakes that ChatGPT has made. In the context of that conversation, it remembers the correction and appears to /somehow/ incorporate it into further responses. As to how it does this, and to what extent that "learning" is actually occurring, I have no clue.

    • @adfaklsdjf
      @adfaklsdjf 1 year ago +1

      It's not learning during individual conversations; it's just generating new text based on the text that has come before. After you correct it, it generates more text that is appropriate/likely given the previous text, which now includes the correction.
      I use it for generating code sometimes and it's great at taking corrections and adjustments. When I'm not satisfied with the result, I simply tell it why the code it's given me isn't what I'm looking for and ask it to change it, and it does surprisingly well. It's not going to make my job obsolete (that's probably 10 years in the future), but it's really helpful for producing in 3-5 minutes some code that would've taken me 30-45 minutes to write on my own.

    • @rob86353
      @rob86353 1 year ago

      That's why I put "learning" in quotes - not implying core concept processing, just interaction-based improvement in result quality. What I would be interested in seeing is, as we adjust our path costs and reposition on whatever conversation map ChatGPT uses (i.e. the contextual version of "learning"), how much of that conversation state can be retained or saved for future use, and how much that impacts the overall conversational flexibility of that AI instance.
      I do agree the code it generates is interesting. I tried throwing some samples of generic programming tasks I've had to work on at it, and it generates some interesting results - usually better than whatever my first draft would have been, but often with conceptual holes, inappropriately mixed code (e.g. PowerShell 5 mixed with PS 7), and a complete lack of input sanitization and/or (... usually and) dangerously unchecked calls to external functions.

  • @jackielinde7568
    @jackielinde7568 1 year ago +3

    "Committed to Excellence" Is there any word more nebulous.
    Me: Nebulous? I don't know about that. Cliché? Yes, definitively. Do you know how overused that phrase is? Heck the company I work for has it in as pat of their mission statement.

    • @AFastidiousCuber
      @AFastidiousCuber 1 year ago +2

      Well, it is a bit nebulous. Excellence in what? What is excellence? It's more of an emotion word than a descriptive term. It would be far more precise just to say that you intend to be "better" than similar organizations or some baseline standard, but then you would open yourself up to being challenged on your claims.

  • @wdbressl
    @wdbressl 1 year ago +3

    It has been discovered that it has been given a political bias. Very concerning to find out.

  • @fconradie1675
    @fconradie1675 1 year ago

    The simple pendulum follows simple harmonic motion only approximately, at small swing angles, but the kinetic energy of a particle oscillating with SHM is always maximized at zero displacement.
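    (That second claim follows in one line from energy conservation in SHM; in LaTeX:)

      E = \tfrac{1}{2}mv^2 + \tfrac{1}{2}m\omega^2 x^2 = \tfrac{1}{2}m\omega^2 A^2
      \quad\Rightarrow\quad
      v^2 = \omega^2 (A^2 - x^2),

    which is largest, and the kinetic energy with it, exactly at x = 0.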