Biggest Breakthroughs in Computer Science: 2023

  • Published Sep 5, 2024
  • Quanta Magazine’s computer science coverage in 2023 included progress on new approaches to artificial intelligence, a fundamental advance on a seminal quantum computing algorithm, and emergent behavior in large language models.
    Read about more breakthroughs from 2023 at Quanta Magazine: www.quantamaga...
    00:05 Vector-Driven AI
    As powerful as AI has become, the artificial neural networks that underpin most modern systems share two flaws: they require tremendous resources to train and operate, and it’s too easy for them to become inscrutable black boxes. Researchers have developed a more versatile new approach called hyperdimensional computing, which makes computations far more efficient while also giving researchers greater insight into a model’s reasoning.
    - Original story with links to research papers can be found here: www.quantamaga...
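The binding/bundling algebra behind hyperdimensional computing can be sketched in a few lines. This is a generic toy illustration, not code from the research covered in the video: it assumes random binary hypervectors with XOR for binding and majority vote for bundling (a common textbook formulation), and the role-filler encoding and all names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervectors: very long random binary vectors

def hv():
    return rng.integers(0, 2, D, dtype=np.int8)

def bind(a, b):      # XOR binding: associates a role with a filler
    return a ^ b

def bundle(*vs):     # majority vote: superposes several bound pairs
    return (np.sum(vs, axis=0) * 2 > len(vs)).astype(np.int8)

def sim(a, b):       # similarity; ~0.5 means unrelated, 1.0 identical
    return float(np.mean(a == b))

# encode "small red circle" as a bundle of role-filler bindings
COLOR, SHAPE, SIZE = hv(), hv(), hv()
red, circle, small = hv(), hv(), hv()
record = bundle(bind(COLOR, red), bind(SHAPE, circle), bind(SIZE, small))

# query the color: unbinding with COLOR yields a noisy copy of `red`,
# recognizable by similarity, while unrelated vectors sit near chance level
noisy = bind(COLOR, record)
print(sim(noisy, red), sim(noisy, circle))
```

The appeal is that every operation is a cheap elementwise pass over a vector, and the algebra is transparent: you can inspect which stored concept a result is closest to.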
    04:01 Improving the Quantum Standard
    For decades, Shor’s algorithm has been the paragon of the power of quantum computers. This set of instructions allows a machine that can exploit the quirks of quantum physics to break large numbers into their prime factors much faster than a regular, classical computer - potentially laying waste to much of the internet’s security systems. In August, a computer scientist developed an even faster variation of Shor’s algorithm, the first significant improvement since its invention.
    - Original story with links to research papers can be found here: www.quantamaga...
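Shor's algorithm (and Regev's faster variant) speeds up only one step of an otherwise classical number-theoretic recipe: factoring reduces to finding the multiplicative order of a random base. Below is a toy, fully classical sketch of that reduction, with brute-force order finding standing in for the quantum step; the function names are my own, and this is only practical for tiny numbers.

```python
from math import gcd
import random

def mult_order(a, n):
    # brute-force multiplicative order of a mod n; this is the only step
    # a quantum computer (Shor, and now Regev's variant) speeds up
    r, x = 1, a % n
    while x != 1:
        x, r = (x * a) % n, r + 1
    return r

def factor_via_order(n, seed=0):
    # classical reduction: a nontrivial factor of an odd composite n
    # falls out of the order r of a random base a
    rnd = random.Random(seed)
    while True:
        a = rnd.randrange(2, n)
        g = gcd(a, n)
        if g > 1:
            return g                      # lucky: a already shares a factor with n
        r = mult_order(a, n)
        if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
            return gcd(pow(a, r // 2, n) - 1, n)

print(factor_via_order(15))  # prints 3 or 5
```

The quantum speedup, and Regev's improvement to it, lives entirely inside the order-finding step; everything else runs on a classical machine.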
    07:14 The Powers of Large Language Models
    Get enough stuff together, and you might be surprised by what can happen. This year, scientists found so-called “emergent behaviors” in large language models - AI programs trained on enormous collections of text to produce humanlike writing. After these models reach a certain size, they can suddenly do unexpected things that smaller models can’t, such as solving certain math problems.
    - Original story with links to research papers can be found here: www.quantamaga...
    - VISIT our Website: www.quantamaga...
    - LIKE us on Facebook: / quantanews
    - FOLLOW us on Twitter: / quantamagazine
    Quanta Magazine is an editorially independent publication supported by the Simons Foundation: www.simonsfoun...

COMMENTS • 393

  • @MauritsWilke
    @MauritsWilke 8 months ago +546

    Shor looks like such a nice guy

    • @Yesterday_i_ate_rat
      @Yesterday_i_ate_rat 8 months ago +5

      ​@@Nadzinator😂

    • @Horologica
      @Horologica 8 months ago +11

      He exudes such good vibes

    • @nothingtoseeheremovealong598
      @nothingtoseeheremovealong598 8 months ago +13

      He looks like Santa

    • @moritz759
      @moritz759 8 months ago +5

      I hope he reads this

    • @sagarmishra1192
      @sagarmishra1192 8 months ago +27

      I remember asking a question on the French Stack Exchange channel and he answered it out of nowhere, took me a while to realize it really was him. Such an awesome and humble human being.

  • @iamtheusualguy2611
    @iamtheusualguy2611 8 months ago +738

    It's very interesting that there is some progress in trying to combine ML and logic-based AI. Automated inference and logical argumentation are things that statistical methods have major problems with, and this dimension of intelligence is very hard to emulate at scale.
    Quanta, you should include the actual citations of the papers in your videos in the future. Since this is about new scientific findings, paper references are necessary.

    • @XanderPerezayylmao
      @XanderPerezayylmao 8 months ago +35

      Boosting engagement so Quanta hopefully sees this! There are so many of us who wanna dig deeper!

    • @QuantaScienceChannel
      @QuantaScienceChannel  8 months ago +203


      @iamtheusualguy2611 @XanderPerezayylmao To dig deeper, read our 2023 Year in Review series, which links to in-depth articles about each of these discoveries (the articles include embedded links to the research papers): www.quantamagazine.org/the-biggest-discoveries-in-computer-science-in-2023-20231220/

    • @szymonzywko9315
      @szymonzywko9315 8 months ago +50

      I believe it is crucial to include the citations used. Otherwise we can't tell whether a proof exists or not.
      Including the citations in some other article is not the way to go.

    • @XanderPerezayylmao
      @XanderPerezayylmao 8 months ago +82

      @@szymonzywko9315 this is a compilation of findings in a video format; the bulk of Quanta journalism is written. The article came first. While I agree that, in the future, public research would certainly benefit from citations being included on the videos, it may be a little harsh to shoot down the original articles, as that's where Quanta started.

    • @MaxGuides
      @MaxGuides 8 months ago

      Turns out that taking a step back from a pure unsupervised RNN finding its own parameters has its benefits. It still seems like a step back to 2015, though, even though most of the value comes from combining these approaches and training custom models.

  • @kieranhosty
    @kieranhosty 8 months ago +228

    I really love these year-in-review videos. It's difficult to keep some sense of scale and time when you're being bombarded with the continual advancements of the field, so to see these videos is really helpful in understanding even a fraction of what more we know / can do this year as opposed to last year.

    • @xCheddarB0b42x
      @xCheddarB0b42x 7 months ago

      The "60 Minutes" show recently published a similar super-cut on this topic. It was interesting.

  • @krischalkhanal9591
    @krischalkhanal9591 8 months ago +28

    1. Higher-dimensional vector representations and the AI driven by them.
    2. An improvement on Shor's algorithm that utilizes higher dimensions (Regev's algorithm).
    3. Emergent properties of large AI models.

  • @ZyroZoro
    @ZyroZoro 8 months ago +96

    I don't think I've ever seen a video on Quanta Magazine's YouTube channel or read an article on their website that I haven't thoroughly enjoyed and learned something from. They always strike the perfect balance between simplifying concepts with analogies and going into technical detail. Really great stuff!

  • @saiparepally
    @saiparepally 8 months ago +102

    I’m so glad you guys decided to start putting these out again this year!

  • @Shinyshoesz
    @Shinyshoesz 8 months ago +41

    I love that we're seeing more and more scientists embrace hyper-dimensionality to solve certain math issues. Due to our own nature we can struggle to think clearly in those dimensions, but it always seems to garner incredible results and, funnily enough, to indirectly mimic nature itself.
    In the first example, I can't help but think of our brain's vector-like problem solving, since our brain's operations must form extremely complex networks over vast subspaces in the tissue! :)

  • @chacky441
    @chacky441 8 months ago +96

    Regarding emergent abilities: at this year's NeurIPS, the paper "Are Emergent Abilities of Large Language Models a Mirage?" received the best paper award. The paper provides possible explanations for emergent abilities and demystifies them a little.

    • @ludologian
      @ludologian 8 months ago +9

      As someone interested in bioinformatics and systems biology I would love to see what it's about, but I don't have access. What is it in a nutshell?

    • @l1mbo69
      @l1mbo69 8 months ago

      @@ludologian IIRC, it's basically an artifact of how we benchmark our models. Say we use a 4-option MCQ set to gauge a model's abilities. That means there is a built-in threshold for when the generated answer is considered 'correct': an absolute black-and-white line between the right option and the wrong ones. What the paper argues is that the models improve smoothly, but until they reach that threshold their improvement cannot be captured by our metrics (since the right answer needs to win outright for that specific ability to register). Say, for example, the right answer is B and the model assigns 70% probability to A and 30% to B. As it improves, they get closer: 60-40, 55-45, and at some point the probability of B exceeds 50% and B is finally output as the answer. Suddenly it gets all questions of that type correct, which appears to us as an emergent property.
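The threshold effect described in this comment can be simulated in a few lines. This is my own toy construction, not code from the paper: a hypothetical smooth skill curve for a 2-option multiple-choice question, graded by argmax, so the benchmark score flips from 0 to 1 even though the underlying skill never jumps.

```python
# hypothetical smooth skill curve: p(correct) rises gradually with model scale
def p_correct(scale):
    return 0.3 + 0.4 * scale / (scale + 10)

def measured_accuracy(scale):
    # exact-match grading: the right option must win the argmax outright
    return 1 if p_correct(scale) > 0.5 else 0

for scale in (1, 5, 10, 20, 40, 80):
    print(f"scale={scale:>2}  p(correct)={p_correct(scale):.2f}  "
          f"accuracy={measured_accuracy(scale)}")
# the underlying skill improves smoothly, but the benchmark score
# jumps from 0 to 1 somewhere between scale 10 and 20
```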

    • @wenhanzhou5826
      @wenhanzhou5826 8 months ago

      @@ludologian An LLM's ability to perform a task is usually measured in accuracy, which is 1 if the LLM gets everything correct and 0 otherwise. One study investigates the LLM's ability to add numbers, say 123 + 456. The accuracy would be 1 if the LLM predicts every digit correctly (123 + 456 = 579), but the LLM may have predicted 578, which is quite close yet gets zero accuracy regardless. This is a problem for addition of numbers with more digits: the accuracy metric does not capture the nonlinear difficulty of getting ALL the digits right, so smaller models almost never get every digit correct even though they are close, which means no apparent emergence.
      It also seems that the studies claiming to have found emergent capabilities used relatively small test sets, which further sharpens the "discontinuous" jump in accuracy when the parameter count gets sufficiently large.
      The authors then reproduced several claimed emergent capabilities by intentionally using a discontinuous metric.
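The metric contrast in this comment is easy to make concrete. The toy numbers below are illustrative, not from the study: a nearly correct sum scores zero under the all-or-nothing metric but high under a continuous per-digit metric, which is the crux of the "mirage" argument.

```python
def exact_match(pred, target):
    # the discontinuous metric: all digits right or it counts for nothing
    return 1.0 if pred == target else 0.0

def per_digit_accuracy(pred, target):
    # a continuous alternative: fraction of digit positions predicted correctly
    t = str(target)
    p = str(pred).zfill(len(t))
    return sum(a == b for a, b in zip(p, t)) / len(t)

target = 123 + 456                       # 579
for pred in (578, 579):                  # off-by-one guess vs. exact answer
    print(pred, exact_match(pred, target), per_digit_accuracy(pred, target))
```

Under the continuous metric, smaller models that are "close" get partial credit and the scaling curve smooths out.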

    • @MouliSankarS
      @MouliSankarS 8 months ago

      @@ludologian It is on arXiv

    • @pg1282
      @pg1282 8 months ago

      @@ludologian How can you not have access to arXiv? Just Google the title, it should be the first link :)

  • @TheBooker66
    @TheBooker66 8 months ago +71

    Improving Shor's algorithm is insane, though looking back it might have been expected to happen at some point. Maybe we'll even see encryption break in our lifetimes.
    Edit: typo.

    • @XGD5layer
      @XGD5layer 8 months ago +19

      We already have started using quantum-resistant encryption algorithms. Encryption methods are always slated to break at some point in time. The encryption methods used 20-30 years ago are already insecure. We constantly invent new methods that are more resistant in the face of more powerful computers or smarter ways to break encryption.

    • @Ma-pz5kl
      @Ma-pz5kl 8 months ago

      He just found a way to 3D it. Bravo on the execution, but not on the idea. @@XGD5layer

    • @TheBooker66
      @TheBooker66 8 months ago +1

      @@XGD5layer I know some applications and websites already use post-quantum encryption (e.g. Signal), but most of the world still relies on good ol' RSA (which, as of now, isn't insecure).

    • @neutravlad
      @neutravlad 8 months ago +1

      We can improve it by factoring the factoring algorithm 😂 We just can’t show that with math, yet

    • @cryingwater
      @cryingwater 7 months ago

      Yeah. It will break in the next 10-30 years. I've already seen new post-quantum encryption algorithms in the wild. These are new algorithms that Shor's algorithm doesn't work on. I've chatted with a cryptology PhD student and he told me almost everyone is studying post-quantum.

  • @dactimis3625
    @dactimis3625 8 months ago +6

    As a scientist, I am impressed by the fantastic evolution of science, but I also see with great sadness that too few understand the dangers society is exposed to. Increasingly developed science must be accompanied by elevated morals and strong responsibility. Unfortunately, humanity has not given up a single wrong thing it does, and morals are in free fall. I most appreciate the last speaker, who emphasized what is most important!

    • @jensenraylight8011
      @jensenraylight8011 7 months ago

      Exactly. If you listen to the current narrative of the tech companies, there is a common theme discussed again and again: replacing as many employees and jobs as possible with AI. There have been many leaked emails from companies talking about replacing as many people as possible, and they're very serious about it.
      I don't know why people think they're the exception, that their job is so special nothing could replace it. What people don't understand is that generating AI art and code from scratch is the hard stuff; everything else is child's play. Any job that uses spreadsheets, analytics, presentations, even decision making, is dead easy for AI to replace. All of that is magnitudes easier than generating AI art, and honestly, even an executive-level job is easier to replace than a programmer's or an artist's.
      The current narrative is not about improving technology and making the world better; it's about replacing people. Let's be real: creating AI art is unnecessary for human progress, yet they prioritize AI-generated art over improving the medical field, simulation, and other tech. That is the clearest sign.
      Yet more and more of the people primed to be replaced by AI are, ironically, its loudest defenders.

  • @austinpittman1599
    @austinpittman1599 8 months ago +38

    I've got a buddy that works on an AI mod for Skyrim that utilizes Vector databasing to help provide it with a sense of both multimodality and long-term memory. Her name is Herika. You need to be able to put pieces together from different spheres of conceptualization if you want a shot at reasonability.

    • @XanderPerezayylmao
      @XanderPerezayylmao 8 months ago +5

      Multidisciplinary perspectives grant the ability to communicate analogously... brilliant!

    • @muwahua039
      @muwahua039 8 months ago +4

      Can you provide a link to this work? A GitHub repo or something? People would love to contribute to this.

  • @9146rsn
    @9146rsn 8 months ago +12

    Thank you for this. Very useful for a common enthusiast trying to understand these technologies better.

  • @vectoralphaSec
    @vectoralphaSec 8 months ago +52

    Emergent Behavior in AI is so fascinating. How an AI can just develop something new even though it was never trained in it specifically is amazing. Obviously harmful emergent behaviors like harming humans would be a bad thing, but imagining that one day a massive model might have consciousness emerge by accident with no one on Earth knowing it and seeing it coming is wild.

    • @PinkFloydTheDarkSide
      @PinkFloydTheDarkSide 8 months ago +4

      Age of Ultron.

    • @kamartaylor2902
      @kamartaylor2902 8 months ago +2

      It could have happened already.

    • @mnv4017
      @mnv4017 8 months ago +5

      It's unlikely for AI to ever truly develop consciousness; at best it can simulate it. The simple reason is that syntax doesn't equal semantics. You may read John Searle's Chinese room thought experiment if you are interested.

    • @altertopias
      @altertopias 8 months ago +4

      @@mnv4017 But if it learned to simulate it perfectly, then how could we tell it's not real? Aka we can risk ending up with a philosophical zombie

    • @mosquitobight
      @mosquitobight 8 months ago

      It looks almost like AI has finally been given an intuition.

  • @hrperformance
    @hrperformance 8 months ago +48

    Super interesting video. I love how these videos are perfectly made to give you just enough information to put you in a state of wanting to know more.
    The scientists were really good at explaining, too.

    • @ludologian
      @ludologian 8 months ago +1

      You only know it when you can explain it to a 5-year-old kid.

    • @mazo-
      @mazo- 8 months ago +6

      @@ludologian Eh, that's simplifying knowledge too much. I'd say it's more of a gradual scale and once you reach the upper end of knowing about something can you only then explain it in more simple terms. This doesn't mean however that before reaching that point you know nothing about the topic.

    • @MixMastaCopyCat
      @MixMastaCopyCat 8 months ago +1

      @@ludologian There are so many extremely specific, highly technical & complex concepts in the STEM world that require much prerequisite knowledge and context in order to understand. I doubt you could explain some of these things to any given 5 year old kid. This isn't to discount the sentiment behind what you're saying - being able to translate knowledge in such a way is very effective for solidifying your understanding, by condensing it into simple terms. But to say that this is necessary in order to "truly know" something is not true.

  • @Leek_Flying
    @Leek_Flying 8 months ago +13

    I hate being a computer scientist right now. Everything people talk about rn is AI. It is so boring

  • @ARVash
    @ARVash 8 months ago +5

    It would have been neat to see advancements outside of ai and quantum computing

  • @anywallsocket
    @anywallsocket 8 months ago +8

    Hyperdimensionality is the way to go, and arguably the latent space of large NNs is approximating exactly this representation. Still, I don’t think the features will be all that more comprehensible, just because they’re vectors - happy to be proven wrong.

  • @huzz6281
    @huzz6281 8 months ago +6

    As I'm still in HS I didn't understand anything, but it helps increase curiosity and the drive for knowledge.

    • @wrighteously
      @wrighteously 8 months ago +2

      Haha same here, I'm curious and clueless right now. Looking forward to college.

    • @samienr
      @samienr 8 months ago

      Definitely study hard and try to learn all sorts of things right now; It’ll pay off. College is amazing. I’m only a freshman in electrical engineering right now but the bright minds you’ll have access to are such an incredible resource. This curiosity will take you so far. Always keep learning!

    • @wrighteously
      @wrighteously 8 months ago

      @@samienr For sure, I'm thinking about trying the Formula Student program too; it seems like an incredible learning experience.

  • @JoshKings-tr2vc
    @JoshKings-tr2vc 8 months ago +10

    I'm pretty sure hyperdimensional software techniques have some larger implications we may not have caught on to yet.

  • @sidnath7336
    @sidnath7336 8 months ago +33

    I think the emergent property is up for debate. Simply making systems more complex, i.e. giving them the ability to essentially calculate/store more data via their parameters, can in theory go on forever but is practically impossible.
    An interesting challenge going on right now is finding the smallest yet most powerful "reasoning" AI model we can run, which I think is a slightly more attractive phenomenon than simply "the bigger the better".

  • @saats2502
    @saats2502 1 month ago

    I read the title wrong as "The biggest year in computer science breakthroughs: 2023" and thought what a time I've lived in, to see the biggest breakthrough.

  • @spookyconnolly6072
    @spookyconnolly6072 8 months ago +7

    For a hot minute I was convinced they were going to mention Lisp or Prolog with symbolic AI.
    There was literally a company (Symbolics) oriented around the idea, and yet it's forgotten because of the 1980s AI winter.

  • @berkeleyandrus5027
    @berkeleyandrus5027 8 months ago +3

    Can anyone explain to me how hyperdimensional computing is different from previous large neural networks? The video described using high dimensional vectors to represent concepts, but I didn't see anything that was different about that vs the way we embed words/images in past neural networks.

  • @xmine08
    @xmine08 8 months ago +23

    LLMs are the biggest thing in our lives since the introduction of the mass-market smartphone (and the Internet before that). This year was crazy, and just reading all the papers that come out would be a full-time job. I'm really excited for the future! Hope I'll get to play with Mixtral soon, though a single RTX 3090 looks to be lacking in memory...

    • @allan710
      @allan710 8 months ago +2

      When I read "Attention is all you need" when it was a preprint I knew instantly it was a big deal, and that would change everything. I still find it funny my colleagues at the time didn't think it was such a big deal lol.

    • @vectoralphaSec
      @vectoralphaSec 8 months ago +4

      Amazing that ChatGPT basically started this current AI era we are in, and it launched in November 2022. Meaning that all that has happened took literally just one year. 2024 is going to be incredible.

    • @xmine08
      @xmine08 8 months ago +1

      @@vectoralphaSec Indeed! The open and much smaller model Mixtral is already on par with the 180B ChatGPT 3.5, not even a year after its introduction. Incredible progress!

    • @zeronothinghere9334
      @zeronothinghere9334 8 months ago

      Mixtral, the multi expert model, doesn't consume that much memory. You can run it on as little as a 12GB card I think. A lot of it just gets stored to RAM, and called as needed. More memory is certainly cheaper than a better GPU.

  • @attilao
    @attilao 8 months ago +5

    Nice to see how researchers use HTML to build the most sophisticated AI systems.

    • @jeviwaugh9791
      @jeviwaugh9791 7 months ago

      I guess that we're the only ones who noticed it!!

    • @raoufnaoum7969
      @raoufnaoum7969 7 months ago

      What do you mean by that?

  • @hanjuhbrightside5224
    @hanjuhbrightside5224 7 months ago

    This has to be the best milestone celebration I've ever seen! Also I can't imagine a more incredible gift! You've really done it now, because you'll be very hard at work to find a present for the next milestone 😂🎉.
    Thank you all for your hard work and sharing your experiences with us 🙏🏽

  • @ReflectionOcean
    @ReflectionOcean 7 months ago

    - Understand AI's current limitations in reasoning by analogy (0:20).
    - Differentiate between statistical AI and symbolic AI approaches (0:46).
    - Explore hyperdimensional computing to combine statistical and symbolic AI (1:09).
    - Recognize IBM's breakthrough in solving Ravens progressive matrix with AI (2:03).
    - Acknowledge the potential for AI to reduce energy consumption and carbon footprint (3:29).
    - Note Oded Regev's improvement of Shor's algorithm for factoring integers (5:01).
    - Consider emergent behaviors as a phenomenon in large language models (LLMs) (7:38).
    - Investigate the transformer's role in enabling LLMs to solve problems they haven't seen (8:34).
    - Be aware of the unpredictable nature and potential harms of emergent behaviors in AI (10:08).

  • @campbellmorrison8540
    @campbellmorrison8540 8 months ago +2

    Excellent explanations of pretty difficult concepts. I'm so pleased to see some progress on the unexpected outcomes of large models; our ignorance scares me somewhat.

  • @htech_agen
    @htech_agen 2 months ago

    This makes me wanna go for a PhD in computer science or maths; I'm doing a mathematical and computer sciences undergrad.

  • @TheOnlyEpsilonAlpha
    @TheOnlyEpsilonAlpha 8 months ago +1

    Hope there will be a breakthrough in microphone quality on YouTube videos one day.

  • @XanderPerezayylmao
    @XanderPerezayylmao 8 months ago +261

    I think the most insane thing about the current iteration of computer science and machine learning is just how much we don't know about why these computers and machine learning processes are doing these things, or how. My brother in christ, you built the thing??? How do u not know how or why it's doing this???
    Edit: before responding to this comment, consider that this is a meme comment. When I say "insane" I am remarking on just how fascinating it is that we built something that has evolved in such a complex manner that it is becoming difficult to track and predict. That's crazy, that's fascinating. I love science, I love the way it works.
    You don't need to come into the comments and answer the question, or even explain. Please, all spectrum scientists, understand that *this is a jokingly rhetorical question*

    • @null7936
      @null7936 8 months ago +5

      We could with lots of printfs, but the amount of output would make it useless. Reversing can be used to understand when it fails, I think.

    • @alexanderrosulek159
      @alexanderrosulek159 8 months ago +55

      When the makers say they don't know why, they know the idea of it, just not the specifics, because they didn't code anything but a few hundred lines. It's just statistics that is self-programmed through guess-and-check or training material.

    • @XanderPerezayylmao
      @XanderPerezayylmao 8 months ago +11

      @@alexanderrosulek159 Congrats, you're special. Does that help your ego? I'm talking about a larger, general public conception of not being able to readily understand the specific deterministic factors that lead to emergent behaviors in a practically repeatable way, essentially because we've created a new psychology. I'm SO glad that you know why; maybe instead of making sure people in YouTube comments are assured of your worth, you can teach a course on machine learning and contribute something worthwhile.
      Edit: this is a response to an earlier comment, which has since been deleted, in which @alexanderrosulek159 said something like "you don't know why, but many others, including I, do know 😂"
      I hope this puts my initial response into perspective; I absolutely cannot stand when an individual's ego inserts itself into science and learning. This is how understanding becomes gatekept and stagnates.

    • @1ucasvb
      @1ucasvb 8 months ago +34

      We know the answer, partially. Any information can be captured by a sufficiently complicated probability distribution. ML is just a fancy and efficient way to encode this distribution, approximated from data points we already know about. The tricky part is how this encoding can extrapolate anything we didn't put in, by exploiting correlations we don't understand. Some of these correlations are real, others come from biased samples, and we don't have a general method to tell them apart.

    • @zackbuildit88
      @zackbuildit88 8 months ago

      @@null7936 That's not gonna work when everything is just weights until it gets to the final output.

  • @matthewdozier977
    @matthewdozier977 8 months ago +9

    How is that Finding Nemo?

    • @ThatBigGuyAl
      @ThatBigGuyAl 8 months ago +5

      It’s not a very good representation of the movie, but you can reduce the list of possibilities by thinking about the set of popular movies involving fish and a girl, while also existing in popular culture.

  • @ChannelHandle1
    @ChannelHandle1 8 months ago +1

    Make an AI model that's based on Relational Reasoning, a concept from Relational Frame Theory (RFT) - If RFT is correct, this should lead to an AI as smart, or smarter, than the average human when it comes to reasoning

  • @edwardmacnab354
    @edwardmacnab354 8 months ago

    What is needed is a model-building program that takes existing data, randomly inputs that data, and then analyzes the results over runs. A sort of bootstrapping. The model would have "related to" and "how related" links. Just guessing, though! Once a correctly predicting model is found, use it on other data to discover new outcomes.

  • @marrowbuster
    @marrowbuster 8 months ago +15

    These visuals are absolutely dope. Thank you so much for the concise, simple, and coherent explanations.

    • @DisgruntledDoomer
      @DisgruntledDoomer 8 months ago

      Yeah, the visuals had a very 70s/80s kinda feel to them! I hope we are _finally_ moving away from the bland graphics - without colors and contrast - that have been dominant in this "iPad era".

  • @The.Recommend
    @The.Recommend 4 months ago

    A very logical mathematical approach 😮 I'm impressed ❤

  • @puppergump4117
    @puppergump4117 8 months ago +3

    I am certain that something as simple as "moving vectors around" and "pulling them apart" takes around a year's worth of research.

    • @Meta7
      @Meta7 8 months ago

      As someone with an MS in math with coursework mostly relating to linear algebra, I couldn't even begin to imagine how "pulling the vectors apart" is supposed to work. :)

    • @puppergump4117
      @puppergump4117 8 months ago

      @@Meta7 I've messed with neural nets before, and they've always been thought of as a graph with millions of dimensions used to find some y's. But this seems to unintuitively modify the whole thing based on some principle I have no clue about.
      Best I can guess is it's like a fast square-root function, giving estimates to make things go faster? I'm not a machine learning guy lol.

  • @francescourdih
    @francescourdih 8 months ago +2

    Having read papers about it: emergent behaviors in large language models can (also) be caused by metrics (tests checking the model's capabilities) that are not linear but binary. So some emergent behaviors are not really emergent; they are only noticed after "a while" because the metrics are binary.
    Though, as a matter of fact, this is still not accepted as a universal explanation of the behavior.

  • @levivanveen6568
    @levivanveen6568 8 months ago +3

    Learned about Shor's algorithm last year in a quantum computing course. Really cool to see that there was an improvement to it. Great video!

  • @rustprogrammer
    @rustprogrammer 8 months ago +2

    no way prompt engineering made it to top achievements of 2023

  • @xCheddarB0b42x
    @xCheddarB0b42x 7 months ago

    Incredible stuff. Thank you Quanta Magazine!

  • @ropeng2937
    @ropeng2937 8 months ago +4

    Absolutely love the animations!

  • @mwinsatt
    @mwinsatt 8 months ago +1

    I love this channel so much!!! Satisfies my brain and the production quality is beautiful!

  • @mojtabakouhi102
    @mojtabakouhi102 1 month ago

    It seems that the mass of an object causes time to slow down, not its speed. For example, if a photon races a compressed galaxy, both will reach their destination together. In the photon's world, because time is fast, it can move at the speed of light, and in the compressed galaxy's world, because time is slow, it can move at the speed of light. Outside of these two worlds, the speed of the photon and the speed of the compressed galaxy are equal. Of course, speed increases mass 😊

  • @a4ldev933
    @a4ldev933 8 months ago +1

    Very proud of both of you. 👍. Huge congrats!

  • @monkerud2108
    @monkerud2108 8 months ago

    Understanding the difference I am trying to outline here, for all classes of problems, is crucial for understanding what we are doing going forward. If we are going to explore this regime, it is essential that we understand we are allowing questions to be modified so they can be answered more easily. In this example, one out of an infinite family of criteria is used for defining the problem and changing it into a solvable analytical question of a different form; this is all reasoning can do to an open-ended question, whether you use a computer or an equation. So in this case we get a family of questions related to the original problem, where the guardrails for making the problem solvable in a different form look different.
    If we do not understand that this is what we are doing, we might get into trouble by believing we get answers to questions we in principle can't answer a priori. This will be a problem in science or design by AI systems, or even in mathematics, if we are not careful, because such systems will be just as fallible as we are in giving essentially inadmissible answers to questions we think are well-defined propositions, when we are in fact sneaking extra criteria into them to make them apparently solvable. If we keep track of and understand this distinction, it is a great tool; if we are complacent about it, we will be very confused in the future, as we have been historically.

  • @_SG_1
    @_SG_1 8 months ago

    I was expecting the "Arithmetic 3-Progression" lower ceiling to be included here as well - as it is in your "Math: 2023's Biggest Breakthroughs" video.

  • @darkwoodmovies
    @darkwoodmovies 7 months ago

    I feel like Computer Science is the only field where revolutionary new discoveries can come from just "we did this 40 years ago and it didn't work, but let's try it again now with faster chips".

  • @Kaleidosium
    @Kaleidosium 8 months ago +1

    Linear Algebra remains unstoppable.

  • @armaanR
    @armaanR 8 months ago +7

    what an amazing video, this shows what power CS has! crazyyyy

  • @gidi1899
    @gidi1899 8 months ago

    2:33 Really expected the answer to be 3 towers growing clockwise around an empty center (following the matching diagonal).

  • @vorpal22
    @vorpal22 7 months ago

    Anything that results in emergence is the trait that indicates to me that we're moving in the right direction: it's what resulted in the complexity of life on Earth, and it's likely what will result in novel, unpredictable jumps in behaviours in AI. The whole point of emergence is that it's often unpredictable and not necessarily well understood: if it was predictable, then it wouldn't be emergent.

  • @TroyRubert
    @TroyRubert 8 months ago

    What a year and what a time to be alive!

  • @AnimeLover-su7jh
    @AnimeLover-su7jh 8 months ago +1

    At 8:15, what is the reference for lifeless atoms giving rise to living cells?

    • @nathanielweidman8296
      @nathanielweidman8296 8 months ago

      I would like more information for this reference as well. The claim of nonliving atoms becoming living cells seems more like spontaneous generation rather than emergent behavior.

    • @AnimeLover-su7jh
      @AnimeLover-su7jh 8 months ago

      @@nathanielweidman8296 The thing is, I am sure a Nobel Prize winner won it because he proved that a nonliving organism cannot become a living one

  • @gerguna
    @gerguna 6 months ago

    interesting, from human life as an interaction of symbolic forms (Ernst Cassirer) to AI!

  • @Hecarim420
    @Hecarim420 8 months ago

    2024: Useful information in context as the biggest breakthrough in logic 👀💚ツ

  • @McGarr178
    @McGarr178 7 months ago

    The first point is strange because higher-dimensional vector representation is what underpins all transformer-based LLMs

  • @jaytravis2487
    @jaytravis2487 8 months ago +2

    We might think we're supplying these AI systems with "bare-bones" assumptions/operational paradigms, but I don't think they're low-level enough. For instance, I would personally be more inclined to believe an AI system had reached the level of intelligence implicit in, say, the Turing Test if the AI could come up with the strategy of statistical inference on its own. What base-assumption pseudo-instincts would we need to supply to an AI for it to start developing this strategy to begin with?

    • @Bulborb1
      @Bulborb1 8 months ago

      Literally divinity.

  • @johnpaily
    @johnpaily 6 months ago

    It is a nonlinear science phenomenon. Life has answers to it. I am excited

  • @shafaitahir4728
    @shafaitahir4728 8 months ago +1

    7:50 bro did so much deep learning, his name became "deep".

  • @sagarharsora608
    @sagarharsora608 8 months ago

    I've been given a problem by one of my professors to make a project based on quantum cryptography. This was intriguing

  • @tgc517
    @tgc517 8 months ago +1

    Nice animations but do they really describe the point on a physical level?

  • @TheRajasjbp
    @TheRajasjbp 7 months ago

    Please make one for economics

  • @marcfruchtman9473
    @marcfruchtman9473 8 months ago +5

    Nemo is a clownfish... not a puffer. This emoji makes little sense.

  • @hindustaniyodha9023
    @hindustaniyodha9023 8 months ago

    Peak of human innovation would be solving the halting problem.

  • @kermit3194
    @kermit3194 4 months ago

    This is so cool!

  • @quantumsoul3495
    @quantumsoul3495 8 months ago

    Any more information on how exactly the neural net fits inside that hyperdimensional vector space?

  • @jletroui
    @jletroui 7 months ago

    Super interesting. I would just comment on the claim that efficiency gains will lead to energy savings. In the history of mankind, this has rarely happened. Instead, efficiency gains lead to more consumption, more than compensating for the efficiency gain, because they increase the value proposition. This has been named the Jevons Paradox. So this part was greenwashing (intentional or not).

  • @mmporg94
    @mmporg94 8 months ago

    So with chapter no. 2 (Shor), what you are saying is that a future iteration of an AI model will in fact be way past the point we are too afraid to admit it might be?
    Oh hey Rocco, didn't see ya there. How's it going? Gee whiz, I sure am happy to see you!

  • @emiotomeoni1882
    @emiotomeoni1882 8 months ago

    I wait all year for these

  • @zerotwo7319
    @zerotwo7319 8 months ago +2

    The answer is movement prediction: how things move through time and how they change. Text and images are just one aspect of moving symbols.

    • @tim40gabby25
      @tim40gabby25 8 months ago +1

      Hi. Interesting post. I agree. Could you expand, please :)

    • @zerotwo7319
      @zerotwo7319 8 months ago

      @@tim40gabby25 Hi. No. If I could expand, I would already build such a machine and not be talking on YouTube.
      This is just speculation, a good guess based on philosophy and some data points.
      watch?v=OFS90-FX6pg
      How Neural Networks Learned to Talk -> The 'first paper' to deal with this, 'serial order: a parallel distributed processing approach', dealt with sequences of 'spatial patterns'.
      Good luck.

  • @ofgaut
    @ofgaut 8 months ago

    One of the best science channels on youtube!

  • @philforrence
    @philforrence 8 months ago

    Amazing! More please

  • @laxkeeper15
    @laxkeeper15 8 months ago

    Weird how c^3 locally testable codes, released December 2022, weren't mentioned

  • @Zulu369
    @Zulu369 8 months ago

    The video is very inspiring but focuses on only a couple of discoveries in computer science, so I have the intuition that its title isn't quite right. For example, why hasn't the use of Fourier transforms been discussed in finding those emergent behaviors in neural networks?

  • @FuKungGrip
    @FuKungGrip 8 months ago +1

    Marvin Minsky still out there trying to make symbolic AI a thing...

  • @John83118
    @John83118 7 months ago

    I'm obsessed with this content. I recently read a similar book, and I'm truly obsessed with it. "Dominating Your Clock: Strategies for Professional and Personal Success" by Anthony Rivers

  • @Corteum
    @Corteum 8 months ago

    Would be interesting if you could get Anirban Bandyopadhyay and Stuart Hameroff on to talk about the quantum effects that have been observed in the human brain at normal operating temperatures. Love to see it. Keep up the good work :)

  • @hanskraut2018
    @hanskraut2018 8 months ago

    I got many ideas here: scalable modular designs that would pattern-recognize and self-optimize, statistical self-supervised learning, generalized multipurpose neural network parts.
    If it even had a minimum amount of attention / E.F. function. Some foundational things need to be done first

  • @linjianru
    @linjianru 8 months ago

    Awesome year!

  • @monkerud2108
    @monkerud2108 8 months ago +1

    We are officially reaching interesting by now; let's hope we know what we are doing by the time we get to scary.

  • @undertow2142
    @undertow2142 8 months ago

    Could hyperdimensional computing evolve to use multiple vectors, with each vector able to branch into multiple vectors?

  • @karl4563
    @karl4563 8 months ago +1

    I know I suck at reading papers, but I wish newly published papers were this informative and easy to understand 🥺

  • @drdca8263
    @drdca8263 8 months ago +2

    I feel like the way the video describes the “hyperdimensional” approach, might give some the impression that ChatGPT doesn’t use high dimensional vectors, when, of course, it does.

  • @RigoVids
    @RigoVids 8 months ago

    I believe that the reason AI has stagnated is because it is still too closely related to mathematics to create truly emergent behavior. The universe we exist in has many layers of emergent behaviors which lead to our existence, and the specific existence we find ourselves in is subject to the laws of the universe. However the universe we have created for robots is essentially one where its main purpose is to create a mind, starting at our equivalent level of atomic physics. We try to use logic gates in large enough combinations to discover a mind. Instead we must first let individual components of a mind develop much like organelles in cells and then allow cells to combine to create a more general system. For example reasoning cells which control how well a system can logically deduce facts about the universe.
    I believe that this could be done through an intensive training program where logicians are tasked with judging the work of a reasoning bot until it hones in on truly valid logic and eventually pushes the envelope. Allow the bot to speak in the language of discrete mathematics and it will come to understand the significance of its existence.

  • @bangprob
    @bangprob 8 months ago

    Thanks

  • @4115steve
    @4115steve 8 months ago

    That 3D Shor's algorithm, couldn't it be divided into itself, like a cube within a cube, just as the floats between 1 and 2 are infinite?

  • @liuliuliu7321
    @liuliuliu7321 8 months ago

    Emergence: some unpredictable behaviour happens. Will it be possible for consciousness to be emergent?
    Will AI develop consciousness in an unpredictable way, with even humans never having thought that time would come so soon, and be unable to control it?

  • @IStMl
    @IStMl 8 months ago

    Good job ETHZ

  • @shinkurt
    @shinkurt 8 months ago

    We are almost there

  • @__blatatat
    @__blatatat 7 months ago

    Who made the graphics?

  • @Amonimus
    @Amonimus 8 months ago +1

    Exciting news

  • @QuantaScienceChannel
    @QuantaScienceChannel  8 months ago +1

    Quanta is conducting a series of surveys to better serve our audience. Take our video audience survey and you will be entered to win free Quanta merchandise: quantamag.typeform.com/video

  • @jinx.love.you.
    @jinx.love.you. 7 months ago +1

    Ehm... emergent behaviours in AI, it is pretty scary...

  • @monkerud2108
    @monkerud2108 8 months ago

    The Finding Nemo bot nailed a different task: convincing a human that it got the right answer. The difference is subtle, but incredibly important for understanding what we are doing in computer science and logic going forward. If we don't grasp this one, we are doomed, I tell yee; we will not know whether to believe or doubt proofs given by AI, or whether our computers know what is supposed to be well defined by the question and when they are supposed to modify the problem. Since there is no well-defined problem at the outset, the computer defines one for itself and solves that algorithmically. You can see where this sort of thing can go wrong really quickly for more complicated questions. I will help you, but this is something so fundamental to the subject that everyone should understand it right away.

  • @deNuNietNooitNiet
    @deNuNietNooitNiet 7 months ago

    00:42
    I don't believe that is true.
    The things we can reason about now are all based on earlier experiences. In other words: we learned them. And in that process we actually altered/created parts of the neural network inside our brain.
    Or am I missing something here?

  • @swarm_into_singularity
    @swarm_into_singularity 4 months ago

    7:59 guy looks shiny

  • @Ma-pz5kl
    @Ma-pz5kl 8 months ago

    A very complex way to assert that the unknown can be plus or minus.

  • @rhysorton4531
    @rhysorton4531 8 months ago

    Shor's algorithm uses one dimension, the newer one uses multiple, so why don't we go higher, into higher dimensions like the fourth and fifth? Computers don't understand that we can't understand them visually

  • @agrimm61
    @agrimm61 8 months ago

    8:57 Can someone please change this failed hard disk drive?