2023's Biggest Breakthroughs in Computer Science

  • Published Dec 19, 2023
  • Quanta Magazine’s computer science coverage in 2023 included progress on new approaches to artificial intelligence, a fundamental advance on a seminal quantum computing algorithm, and emergent behavior in large language models.
    Read about more breakthroughs from 2023 at Quanta Magazine: www.quantamagazine.org/the-bi...
    00:05 Vector-Driven AI
    As powerful as AI has become, the artificial neural networks that underpin most modern systems share two flaws: they require tremendous resources to train and operate, and it’s too easy for them to become inscrutable black boxes. Researchers have developed a new, more versatile approach called hyperdimensional computing, which makes computations far more efficient while also giving researchers greater insight into a model’s reasoning.
    - Original story with links to research papers can be found here: www.quantamagazine.org/a-new-...
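A minimal sketch of the hyperdimensional-computing idea behind this story (all names, sizes, and the encoding here are our own illustrative choices, not from the research): concepts become random high-dimensional ±1 vectors, elementwise multiplication binds a role to a filler, addition bundles several bindings into one record, and cosine similarity reads the record back out.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervectors live in very high dimensions

def hypervector():
    # Random bipolar (+1/-1) vector; any two are almost surely near-orthogonal.
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    # Elementwise multiply pairs a role with a filler (self-inverse for +/-1 vectors).
    return a * b

def bundle(*vs):
    # Elementwise majority vote superposes several bound pairs into one record.
    return np.sign(np.sum(vs, axis=0))

def similarity(a, b):
    # Cosine similarity for bipolar vectors.
    return float(a @ b) / D

# Encode "a blue circle" as a single vector holding two role-filler pairs.
color, shape = hypervector(), hypervector()
blue, circle, square = hypervector(), hypervector(), hypervector()
record = bundle(bind(color, blue), bind(shape, circle))

# Query: unbinding the SHAPE role recovers something close to "circle".
decoded = bind(record, shape)
print(similarity(decoded, circle) > similarity(decoded, square))  # True
```

Because unrelated hypervectors are nearly orthogonal, the decoded vector matches `circle` strongly and `square` hardly at all, and every step is plain arithmetic — which is where the efficiency and inspectability claims come from.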
    04:01 Improving the Quantum Standard
    For decades, Shor’s algorithm has been the paragon of the power of quantum computers. This set of instructions allows a machine that can exploit the quirks of quantum physics to break large numbers into their prime factors much faster than a regular, classical computer - potentially laying waste to much of the internet’s security systems. In August, a computer scientist developed an even faster variation of Shor’s algorithm, the first significant improvement since its invention.
    - Original story with links to research papers can be found here: www.quantamagazine.org/thirty...
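For readers curious how factoring relates to the quantum part: both Shor's algorithm and the 2023 variant reduce factoring N to finding the order r of a number a modulo N. A classical toy sketch of that reduction (the brute-force `order` loop stands in for the quantum step, which is the only part a quantum computer speeds up; this is an illustration, not the new construction):

```python
from math import gcd

def order(a, N):
    # Smallest r > 0 with a^r = 1 (mod N), found here by brute force --
    # the step that quantum period-finding does exponentially faster.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor(N, a=2):
    # Shor-style reduction: an even order r with a^(r/2) != -1 (mod N)
    # yields a nontrivial factor of N via a gcd.
    g = gcd(a, N)
    if g != 1:
        return g  # lucky: a already shares a factor with N
    r = order(a, N)
    if r % 2 == 0:
        y = pow(a, r // 2, N)
        if y != N - 1:
            f = gcd(y - 1, N)
            if 1 < f < N:
                return f
    return None  # unlucky base a; retry with another

print(factor(15))  # 3
print(factor(21))  # 7
```

On cryptographic key sizes the classical order-finding loop is hopeless; the quantum routine finds r exponentially faster, which is the entire source of the speedup the video describes.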
    07:14 The Powers of Large Language Models
    Get enough stuff together, and you might be surprised by what can happen. This year, scientists found so-called “emergent behaviors” in large language models - AI programs trained on enormous collections of text to produce humanlike writing. After these models reach a certain size, they can suddenly do unexpected things that smaller models can’t, such as solving certain math problems.
    - Original story with links to research papers can be found here: www.quantamagazine.org/the-un...
    - VISIT our Website: www.quantamagazine.org
    - LIKE us on Facebook: / quantanews
    - FOLLOW us on Twitter: / quantamagazine
    Quanta Magazine is an editorially independent publication supported by the Simons Foundation: www.simonsfoundation.org/
  • Science & Technology

COMMENTS • 390

  • @MauritsWilke
    @MauritsWilke 4 місяці тому +513

    Shor looks like such a nice guy

    • @Yesterday_i_ate_rat
      @Yesterday_i_ate_rat 4 місяці тому +5

      ​@@Nadzinator😂

    • @Horologica
      @Horologica 4 місяці тому +9

      He exudes such good vibes

    • @nothingtoseeheremovealong598
      @nothingtoseeheremovealong598 4 місяці тому +11

      He looks like Santa

    • @moritz759
      @moritz759 4 місяці тому +4

      I hope he reads this

    • @sagarmishra1192
      @sagarmishra1192 4 місяці тому +24

      I remember asking a question on the French Stack Exchange channel and he answered it out of nowhere, took me a while to realize it really was him. Such an awesome and humble human being.

  • @iamtheusualguy2611
    @iamtheusualguy2611 4 місяці тому +725

    It's very interesting that there is some progress in trying to combine ML and logic-based AI. Automated inference and logical argumentation are something that statistical methods have major problems with, and this dimension of intelligence is very hard to emulate at scale.
    Quanta, you should include the actual citations of the papers in your videos in the future. Since this is about new scientific work, paper references are necessary.

    • @XanderPerezayylmao
      @XanderPerezayylmao 4 місяці тому +35

      Boosting engagement so Quanta hopefully sees this! There are so many of us who wanna dig deeper!

    • @QuantaScienceChannel
      @QuantaScienceChannel  4 місяці тому +198


      @iamtheusualguy2611 @XanderPerezayylmao To dig deeper, read our 2023 Year in Review series, which links to in-depth articles about each of these discoveries (the articles include embedded links to the research papers): www.quantamagazine.org/the-biggest-discoveries-in-computer-science-in-2023-20231220/

    • @szymonzywko9315
      @szymonzywko9315 4 місяці тому +47

      I believe it is crucial to include the citations that were used. Otherwise one could claim the proof doesn't exist.
      Putting the citations in some other article is not the way to go.

    • @XanderPerezayylmao
      @XanderPerezayylmao 4 місяці тому +78

      @@szymonzywko9315 this is a compilation of findings in a video format; the bulk of Quanta journalism is written. The article came first. While I agree that, in the future, public research would certainly benefit from citations being included on the videos, it may be a little harsh to shoot down the original articles, as that's where Quanta started.

    • @MaxGuides
      @MaxGuides 4 місяці тому

      Turns out that taking a step back from a pure unsupervised RNN finding its own parameters has its benefits. It still seems like a step back to 2015, though, even if most of the value comes from combining these approaches and training custom models to provide real value.

  • @kieranhosty
    @kieranhosty 4 місяці тому +223

    I really love these year-in-review videos. It's difficult to keep some sense of scale and time when you're being bombarded with the continual advancements of the field, so to see these videos is really helpful in understanding even a fraction of what more we know / can do this year as opposed to last year.

    • @xCheddarB0b42x
      @xCheddarB0b42x 3 місяці тому

      The "60 Minutes" show recently published a similar super-cut on this topic. It was interesting.

  • @ZyroZoro
    @ZyroZoro 4 місяці тому +93

    I don't think I've ever seen a video on Quanta Magazine's YouTube channel or read an article on their website that I haven't thoroughly enjoyed and learned something from. They always manage to strike the perfect balance between simplifying concepts and using analogies, and going into technical detail. Really great stuff!

  • @krischalkhanal9591
    @krischalkhanal9591 4 місяці тому +24

    1. Higher-dimensional vector representations and the AI driven by them.
    2. An improvement on Shor's algorithm that utilizes higher dimensions (Regev's algorithm).
    3. Emergent properties of large AI models.

  • @saiparepally
    @saiparepally 4 місяці тому +102

    I’m so glad you guys decided to start putting these out again this year!

  • @Shinyshoesz
    @Shinyshoesz 4 місяці тому +41

    I love that we're seeing more and more scientists embrace hyper-dimensionality to solve certain math issues -- it seems that sometimes, due to our own nature, we can struggle to think clearly in those dimensions but it always seems to garner incredible results and, funny enough, seems to indirectly mimic nature itself.
    In the first example, I can't help but think of our brain's vector-like problem solving since our brain operations must form extremely complex networks over vast subspaces in the tissue! :)

  • @chacky441
    @chacky441 4 місяці тому +95

    Regarding emergent abilities: at this year's NeurIPS, the paper "Are Emergent Abilities of Large Language Models a Mirage?" received the best paper award. The paper provides possible explanations for emergent abilities and demystifies them a little.

    • @ludologian
      @ludologian 4 місяці тому +9

      As someone who is interested in bioinformatics and systems biology I would love to see what it's about, but I don't have access. What is it in a nutshell?

    • @l1mbo69
      @l1mbo69 4 місяці тому

      @@ludologian iirc, it's basically an artifact of how we benchmark our models. Say we use a 4-option MCQ set to gauge a model's abilities. That means there is an inbuilt threshold for when the generated answer is considered 'correct' - an absolute black-and-white line between this option being correct and that one being wrong. What the paper argues is that the models improve smoothly, but until they reach a certain threshold their improvement cannot be captured by our metrics (since the right answer needs to win outright for that specific ability to register). Say, for example, the right answer is B and the model assigned 70% probability to A and 30% to B; as it improves they get closer - 60-40, 55-45 - and at one point the probability of B exceeds 50%, B is finally outputted as the answer, and suddenly the model gets all questions of that type correct, which appears to us as an emergent property.

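The threshold effect described in the reply above is easy to simulate; the probabilities below are invented purely for illustration:

```python
# As a model improves smoothly, the probability it assigns to the correct
# MCQ option rises gradually -- but graded argmax accuracy flips all at once.
p_correct = [0.30, 0.40, 0.45, 0.49, 0.51, 0.60, 0.70]  # smooth improvement
accuracy = [1.0 if p > 0.5 else 0.0 for p in p_correct]
print(accuracy)  # [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0] -- a sudden "emergent" jump
```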
    • @wenhanzhou5826
      @wenhanzhou5826 4 місяці тому

      @@ludologian An LLM's ability to perform a task is usually measured in accuracy, which is 1 if the LLM gets everything correct and 0 otherwise. One study investigates the LLM's ability to add numbers, say 123 + 456. The accuracy is 1 only if the LLM predicts all the digits correctly (123 + 456 = 579), but the LLM may have predicted 578, which is quite close yet gets zero accuracy regardless. This becomes a problem with the addition of numbers with more digits: the accuracy metric does not capture the non-linear difficulty of getting ALL the digits right, so smaller models almost never get every digit correct even though they are close - which means no "emergence", just a harsh metric.
      It also seems that the studies claiming emergent capabilities used relatively small test sets, which further strengthens the "discontinuous" jump in accuracy when the parameter count gets sufficiently large.
      The authors then reproduced several claimed emergent capabilities by intentionally using a discontinuous metric.
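The same point in two lines of arithmetic: if per-digit accuracy grows smoothly as models scale, the all-or-nothing exact-match metric still looks like a cliff (numbers are illustrative, not from the paper):

```python
# Per-digit accuracy p improves smoothly across four model sizes...
per_digit = [0.5, 0.7, 0.9, 0.99]
k = 10  # number of digits that must ALL be right to score

# ...but exact-match accuracy p**k sits near zero, then shoots up.
exact_match = [round(p ** k, 4) for p in per_digit]
print(exact_match)  # [0.001, 0.0282, 0.3487, 0.9044]
```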

    • @MouliSankarS
      @MouliSankarS 4 місяці тому

      @@ludologian It is on arXiv

    • @pg1282
      @pg1282 4 місяці тому

      @@ludologian How can you not have access to arXiv? Just Google the title; it should be the first link :)

  • @9146rsn
    @9146rsn 4 місяці тому +12

    Thank you for this. Very useful for a common enthusiast wanting to understand these technologies better.

  • @TheBooker66
    @TheBooker66 4 місяці тому +71

    Improving Shor's algorithm is insane, though looking back it might have been expected to happen at some point. Maybe we'll even see encryption break in our lifetimes.
    Edit: typo.

    • @XGD5layer
      @XGD5layer 4 місяці тому +19

      We already have started using quantum-resistant encryption algorithms. Encryption methods are always slated to break at some point in time. The encryption methods used 20-30 years ago are already insecure. We constantly invent new methods that are more resistant in the face of more powerful computers or smarter ways to break encryption.

    • @Ma-pz5kl
      @Ma-pz5kl 4 місяці тому

      He just found a way to 3D it. Bravo on the execution, but not on the idea. @@XGD5layer

    • @TheBooker66
      @TheBooker66 4 місяці тому +1

      @@XGD5layer I know some applications and websites already use post-quantum encryption (for ex. Signal), but most of the world still relies on good ol' RSA (which, as of now, isn't insecure).

    • @neutravlad
      @neutravlad 4 місяці тому +1

      We can improve it by factoring the factoring algorithm 😂 We just can’t show that with math, yet

    • @cryingwater
      @cryingwater 3 місяці тому

      Yeah, it will break in the next 10-30 years. I've already seen new post-quantum encryption algorithms in the wild - new algorithms that Shor's algorithm doesn't work on. I've chatted with a cryptology PhD student and he told me almost everyone there is studying post-quantum.

  • @campbellmorrison8540
    @campbellmorrison8540 4 місяці тому +2

    Excellent explanations of pretty difficult concepts. I'm so pleased to see some progress on the unexpected outcomes of large models; our ignorance scares me somewhat.

  • @mwinsatt
    @mwinsatt 4 місяці тому +1

    I love this channel so much!!! Satisfies my brain and the production quality is beautiful!

  • @a4ldev933
    @a4ldev933 4 місяці тому +1

    Very proud of both of you. 👍. Huge congrats!

  • @ropeng2937
    @ropeng2937 4 місяці тому +4

    Absolutely love the animations!

  • @hrperformance
    @hrperformance 4 місяці тому +49

    Super interesting video. I love how these videos are perfectly made to give you just enough information, to put you in a state of wanting to know more.
    The scientists were really good at explaining also

    • @ludologian
      @ludologian 4 місяці тому +1

      You only know it when you can explain it to a 5-year-old kid

    • @mazo-
      @mazo- 4 місяці тому +6

      @@ludologian Eh, that's simplifying knowledge too much. I'd say it's more of a gradual scale and once you reach the upper end of knowing about something can you only then explain it in more simple terms. This doesn't mean however that before reaching that point you know nothing about the topic.

    • @MixMastaCopyCat
      @MixMastaCopyCat 4 місяці тому +1

      @@ludologian There are so many extremely specific, highly technical & complex concepts in the STEM world that require much prerequisite knowledge and context in order to understand. I doubt you could explain some of these things to any given 5 year old kid. This isn't to discount the sentiment behind what you're saying - being able to translate knowledge in such a way is very effective for solidifying your understanding, by condensing it into simple terms. But to say that this is necessary in order to "truly know" something is not true.

  • @berkeleyandrus5027
    @berkeleyandrus5027 4 місяці тому +3

    Can anyone explain to me how hyperdimensional computing is different from previous large neural networks? The video described using high dimensional vectors to represent concepts, but I didn't see anything that was different about that vs the way we embed words/images in past neural networks.

  • @hanjuhbrightside5224
    @hanjuhbrightside5224 3 місяці тому

    This has to be the best milestone celebration I've ever seen! Also I can't imagine a more incredible gift! You've really done it now, because you'll be very hard at work to find a present for the next milestone 😂🎉.
    Thank you all for your hard work and sharing your experiences with us 🙏🏽

  • @levivanveen6568
    @levivanveen6568 4 місяці тому +3

    Learned ab shors algorithm last year in a quantum computing course. Really cool to see that there was an improvement to it. Great video!

  • @austinpittman1599
    @austinpittman1599 4 місяці тому +38

    I've got a buddy that works on an AI mod for Skyrim that utilizes Vector databasing to help provide it with a sense of both multimodality and long-term memory. Her name is Herika. You need to be able to put pieces together from different spheres of conceptualization if you want a shot at reasonability.

    • @XanderPerezayylmao
      @XanderPerezayylmao 4 місяці тому +5

      Multidisciplinary perspectives grant the ability to communicate analogously... brilliant!

    • @muwahua039
      @muwahua039 4 місяці тому +4

      Can you provide link to this work? A github repo or something? People would love to contribute to this

  • @spookyconnolly6072
    @spookyconnolly6072 4 місяці тому +7

    For a hot minute I was convinced they were going to mention Lisp or Prolog alongside symbolic AI,
    despite there literally having been a company (Symbolics) oriented around the idea. And yet it's forgotten because of the 1980s AI winter.

  • @dactimis3625
    @dactimis3625 4 місяці тому +6

    As a scientist, I am impressed by the fantastic evolution of science, but I also see with great sadness that too few understand what dangers society is exposed to. An increasingly developed science must be accompanied by elevated morals and a strong sense of responsibility. Unfortunately, man has not given up doing a single wrong thing, and morals are in free fall. I most appreciate the last speaker, who emphasized what is most important!

    • @jensenraylight8011
      @jensenraylight8011 3 місяці тому

      Exactly. If you listen to the current narrative of the tech companies, there is a common theme discussed again and again: replacing as many employees and jobs as possible with AI. There have been plenty of leaked emails from many companies talking about replacing as many people as possible, and they're very serious about this.
      I don't know why people think they're the exception, that their job is so special nothing could replace it.
      What people don't understand is that generating AI art and code from scratch is the hard part; everything else is child's play. Any job that uses spreadsheets, analytics, presentations, even decision-making is dead easy for AI to replace - all of that is magnitudes easier than generating AI art. To be honest, even an executive-level job is easier to replace than a programmer's or an artist's.
      The current narrative is not about improving technology or making the world better; it's about replacing people. Let's be real: creating AI art is unnecessary for human progress, yet they prioritize AI-generated art over improving the medical field, simulation, and other tech. That is the clearest sign.
      And ironically, the people most primed to be replaced by AI are often its loudest defenders.

  • @ARVash
    @ARVash 4 місяці тому +5

    It would have been neat to see advancements outside of AI and quantum computing

  • @anywallsocket
    @anywallsocket 4 місяці тому +8

    Hyperdimensionality is the way to go, and arguably the latent space of large NNs approximates exactly this representation. Still, I don't think the features will be that much more comprehensible just because they're vectors - happy to be proven wrong.

  • @xCheddarB0b42x
    @xCheddarB0b42x 3 місяці тому

    Incredible stuff. Thank you Quanta Magazine!

  • @JoshKings-tr2vc
    @JoshKings-tr2vc 4 місяці тому +10

    I'm pretty sure hyperdimensional software techniques have some larger implications we may not have caught onto yet.

  • @philforrence
    @philforrence 4 місяці тому

    Amazing! More please

  • @quantumsoul3495
    @quantumsoul3495 4 місяці тому

    Any more information on how exactly the neural net fits inside that hyperdimensional vector space?

  • @matthewdozier977
    @matthewdozier977 4 місяці тому +9

    How is that Finding Nemo?

    • @daveguerrero1175
      @daveguerrero1175 4 місяці тому +5

      It’s not a very good representation of the movie, but you can reduce the list of possibilities by thinking about the set of popular movies involving fish and a girl, while also existing in popular culture.

  • @DudeWhoSaysDeez
    @DudeWhoSaysDeez 4 місяці тому

    this channel is so cool, i love all the videos

  • @ofgaut
    @ofgaut 4 місяці тому

    One of the best science channels on youtube!

  • @huzz6281
    @huzz6281 4 місяці тому +6

    As I'm still in HS I didn't understand everything, but it helps increase my curiosity and drive for knowledge

    • @rallykrabban7906
      @rallykrabban7906 4 місяці тому +2

      Haha same here, I'm curious and clueless right now. Looking forward to college

    • @samienr
      @samienr 4 місяці тому

      Definitely study hard and try to learn all sorts of things right now; It’ll pay off. College is amazing. I’m only a freshman in electrical engineering right now but the bright minds you’ll have access to are such an incredible resource. This curiosity will take you so far. Always keep learning!

    • @rallykrabban7906
      @rallykrabban7906 4 місяці тому

      @@samienr for sure, I'm thinking about trying the formula student program too it seems like an incredible learning experience

  • @sidnath7336
    @sidnath7336 4 місяці тому +33

    I think the emergent property is up for debate - simply making systems more complex, i.e. giving them the ability to calculate/store more data via their parameters, can scale in theory without limit but not in practice.
    An interesting challenge going on right now is finding the smallest yet most powerful “reasoning” AI model we can run, which I think is a slightly more attractive phenomenon than simply “the bigger the better”.

  • @marrowbuster
    @marrowbuster 4 місяці тому +15

    These visuals are absolutely dope. Thank you so much for the concise, simple, and coherent explanations.

    • @DisgruntledDoomer
      @DisgruntledDoomer 4 місяці тому

      Yeah, the visuals had a very 70s/80s kinda feel to them! I hope we are _finally_ moving away from the bland graphics - without colors and contrast - that have been dominant in this "iPad era".

  • @The.Recommend
    @The.Recommend 26 днів тому

    Very Logical Mathematical Approaching 😮 I'm impressed ❤

  • @edwardmacnab354
    @edwardmacnab354 4 місяці тому

    What is needed is a model-building program that takes existing data, randomly inputs that data, then analyzes the results over runs - a sort of bootstrapping. The model would have "related to" and "how related" links. Just guessing though! Once a correctly predicting model is found, use it on other data to discover new outcomes.

  • @grapy83
    @grapy83 4 місяці тому

    Amazing and Blazing 😍💪

  • @monkerud2108
    @monkerud2108 4 місяці тому

    Understanding the difference I am trying to outline here for all classes of problems is crucial for understanding what we are doing going forward. If we are going to explore this regime, it is essential to understand that we are allowing questions to be modified so they can be answered more easily. In this example the case uses one out of an infinite family of criteria for defining the problem and changing it into a solvable analytical question of a different form; that is all reasoning can do to an open-ended question, whether you use a computer or an equation. So in this case we get a family of questions related to the original problem, where the guardrails for making the problem solvable in a different form look different. If we do not understand that this is what we are doing, we might get into trouble by believing we can answer questions we in principle can't answer a priori. This will be a problem in science or design by AI systems, or even in mathematics, if we are not careful, because such systems will be as fallible as we are, giving essentially inadmissible answers to questions we formulate because we think we are dealing with a well-defined proposition when we are in fact sneaking extra criteria into it to make it apparently solvable. If we keep track of and understand this distinction it is a great tool, but if we are complacent about it we will be very confused in the future, as we have been historically.

  • @TheOnlyEpsilonAlpha
    @TheOnlyEpsilonAlpha 4 місяці тому +1

    Hope there will be a breakthrough in microphone quality on YouTube videos one day

  • @kermit3194
    @kermit3194 Місяць тому

    This is so cool!

  • @emiotomeoni1882
    @emiotomeoni1882 4 місяці тому

    I wait all year for these

  • @xmine08
    @xmine08 4 місяці тому +23

    LLMs are the biggest thing in our lives since the introduction of the mass spread smartphone (and the Internet before that). This year was crazy, and just reading all the papers that come out would be a full time job. I'm really excited for the future! Hope I'll get to play with Mixtral soon, however a single RTX3090 looks to be lacking in memory...

    • @allan710
      @allan710 4 місяці тому +2

      When I read "Attention Is All You Need" as a preprint I knew instantly it was a big deal and that it would change everything. I still find it funny that my colleagues at the time didn't think so lol.

    • @vectoralphaAI
      @vectoralphaAI 4 місяці тому +4

      Amazing that ChatGPT basically started this current AI era we are in, and it launched in November 2022 - meaning everything that has happened took literally just one year. 2024 is going to be incredible.

    • @xmine08
      @xmine08 4 місяці тому +1

      @@vectoralphaAI indeed! The open and much smaller model Mixtral is already on par with the 180B chatgpt 3.5 not even a year after introduction. Incredible progress!

    • @zeronothinghere9334
      @zeronothinghere9334 4 місяці тому

      Mixtral, the multi expert model, doesn't consume that much memory. You can run it on as little as a 12GB card I think. A lot of it just gets stored to RAM, and called as needed. More memory is certainly cheaper than a better GPU.

  • @rustprogrammer
    @rustprogrammer 4 місяці тому +2

    No way prompt engineering made it into the top achievements of 2023

  • @AnimeLover-su7jh
    @AnimeLover-su7jh 4 місяці тому +1

    At 8:15, what is the reference for lifeless atoms giving rise to living cells?

    • @nathanielweidman8296
      @nathanielweidman8296 4 місяці тому

      I would like more information for this reference as well. The claim of nonliving atoms becoming living cells seems more like spontaneous generation rather than emergent behavior.

    • @AnimeLover-su7jh
      @AnimeLover-su7jh 4 місяці тому

      @@nathanielweidman8296 The thing is, I'm sure a Nobel prize winner won it because he proved that a non-living organism cannot become a living one

  • @attilao
    @attilao 4 місяці тому +5

    Nice to see how researchers use HTML to build the most sophisticated AI systems.

    • @jeviwaugh9791
      @jeviwaugh9791 3 місяці тому

      I guess that we're the only ones who noticed it!!

    • @raoufnaoum7969
      @raoufnaoum7969 3 місяці тому

      What do you mean by that?

  • @Amonimus
    @Amonimus 4 місяці тому +1

    Exciting news

  • @Bianchi77
    @Bianchi77 4 місяці тому

    Nice video, thanks :)

  • @user-pm4vd6ij8i
    @user-pm4vd6ij8i 4 місяці тому

    Awesome year!

  • @ChannelHandle1
    @ChannelHandle1 4 місяці тому +1

    Make an AI model that's based on relational reasoning, a concept from Relational Frame Theory (RFT). If RFT is correct, this should lead to an AI as smart as, or smarter than, the average human when it comes to reasoning

  • @undertow2142
    @undertow2142 4 місяці тому

    Could hyperdimensional computing evolve to use multiple vectors, with each vector able to branch into multiple further vectors?

  • @vectoralphaAI
    @vectoralphaAI 4 місяці тому +52

    Emergent Behavior in AI is so fascinating. How an AI can just develop something new even though it was never trained in it specifically is amazing. Obviously harmful emergent behaviors like harming humans would be a bad thing, but imagining that one day a massive model might have consciousness emerge by accident with no one on Earth knowing it and seeing it coming is wild.

    • @PinkFloydTheDarkSide
      @PinkFloydTheDarkSide 4 місяці тому +4

      Age of Ultron.

    • @kamartaylor2902
      @kamartaylor2902 4 місяці тому +2

      It could have happened already.

    • @mnv4017
      @mnv4017 4 місяці тому +5

      It's unlikely for AI to ever truly develop consciousness; at best it can simulate it. The simple reason is that syntax doesn't equal semantics. You may read John Searle's Chinese room experiment if you are interested.

    • @altertopias
      @altertopias 4 місяці тому +4

      @@mnv4017 But if it learned to simulate it perfectly, then how could we tell it's not real? We could end up with a philosophical zombie

    • @mosquitobight
      @mosquitobight 4 місяці тому

      It looks almost like AI has finally been given an intuition.

  • @sagarharsora608
    @sagarharsora608 4 місяці тому

    I've been given a problem by one of my professors to make a project based on quantum cryptography, so this was intriguing

  • @francescourdih
    @francescourdih 4 місяці тому +2

    Having read papers about it: emergent behaviours in large language models can (also) be caused by metrics (tests checking the model's capabilities) that are not linear but binary. So some emerging behaviours are not really emergent; they are just noticed only after “a while” because the metrics are binary.
    Although, as a matter of fact, this is still not accepted as a universal answer to the behaviour.

  • @hanskraut2018
    @hanskraut2018 4 місяці тому

    I got many ideas here: scalable modular designs that would pattern-recognize and self-optimize, statistical self-supervised learning, generalized multipurpose neural-network parts.
    If only I had a minimum amount of attention / E.F. function. Some foundational things need to be done first

  • @bangprob
    @bangprob 4 місяці тому

    Thanks

  • @puppergump4117
    @puppergump4117 4 місяці тому +3

    I am certain that something as simple as "moving vectors around" and "pulling them apart" takes around a year's worth of research.

    • @Meta7
      @Meta7 4 місяці тому

      As someone with a MS in Math with coursework mostly relating to linear algebra, I couldn't even begin to imagine how "pulling the vectors apart" is supposed to work. :)

    • @puppergump4117
      @puppergump4117 4 місяці тому

      @@Meta7 I've messed with neural nets before, and they've always been thought of as a graph with millions of dimensions used to find some y's. But this seems to unintuitively modify the whole thing based on some principle I have no clue about.
      Best I can guess is that it's like a fast square-root function, giving estimates to make things go faster? I'm not a machine learning guy lol.

  • @TroyRubert
    @TroyRubert 4 місяці тому

    What a year and what a time to be alive!

  • @Zulu369
    @Zulu369 4 місяці тому

    The video is very inspiring but focuses on only a couple of discoveries in computer science, so I have the intuition that its title isn't quite right. For example, why wasn't the use of Fourier transforms in finding those emergent behaviors in neural networks discussed?

  • @_SG_1
    @_SG_1 4 місяці тому

    I was expecting the "Arithmetic 3-Progression" lower ceiling to be included here as well - as it is in your "Math: 2023's Biggest Breakthroughs" video.

  • @ReeTM
    @ReeTM 4 місяці тому

    Fascinating, thank you!

  • @gidi1899
    @gidi1899 4 місяці тому

    2:33 I really expected the answer to be 3 towers growing clockwise around an empty center (following the matching diagonal)

  • @vorpal22
    @vorpal22 4 місяці тому

    Anything that results in emergence is the trait that indicates to me that we're moving in the right direction: it's what resulted in the complexity of life on Earth, and it's likely what will result in novel, unpredictable jumps in behaviours in AI. The whole point of emergence is that it's often unpredictable and not necessarily well understood: if it was predictable, then it wouldn't be emergent.

  • @Hecarim420
    @Hecarim420 4 місяці тому

    2024: Useful information in context as the biggest breakthrough in logic 👀💚ツ

  • @Kaleidosium
    @Kaleidosium 4 місяці тому +1

    Linear Algebra remains unstoppable.

  • @tgc517
    @tgc517 4 місяці тому +1

    Nice animations but do they really describe the point on a physical level?

  • @Tazerthebeaver
    @Tazerthebeaver 4 місяці тому

    thank u

  • @armaanR
    @armaanR 4 місяці тому +7

    what an amazing video, this shows what power CS has! crazyyyy

  • @shafaitahir4728
    @shafaitahir4728 4 місяці тому +1

    7:50 bro did so much deep learning, his name became "deep".

  • @akshayaralikatti6171
    @akshayaralikatti6171 4 місяці тому

    Pretty cool

  • @ReflectionOcean
    @ReflectionOcean 4 місяці тому

    - Understand AI's current limitations in reasoning by analogy (0:20).
    - Differentiate between statistical AI and symbolic AI approaches (0:46).
    - Explore hyperdimensional computing to combine statistical and symbolic AI (1:09).
    - Recognize IBM's breakthrough in solving Raven's progressive matrices with AI (2:03).
    - Acknowledge the potential for AI to reduce energy consumption and carbon footprint (3:29).
    - Note Oded Regev's improvement of Shor's algorithm for factoring integers (5:01).
    - Consider emergent behaviors as a phenomenon in large language models (LLMs) (7:38).
    - Investigate the transformer's role in enabling LLMs to solve problems they haven't seen (8:34).
    - Be aware of the unpredictable nature and potential harms of emergent behaviors in AI (10:08).
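
    For readers curious about the Shor's-algorithm segment summarized above, here is a minimal classical sketch of the number theory involved. This is an illustrative reconstruction, not code from the video: the order-finding step is brute-forced here, and it is exactly the part a quantum computer accelerates (and that Oded Regev's variant speeds up further).

```python
from math import gcd

def find_order(a, n):
    """Brute-force the multiplicative order r of a mod n, i.e. the
    smallest r with a^r = 1 (mod n). This is the step Shor's algorithm
    performs exponentially faster on a quantum computer."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    """Classical pre/post-processing of Shor's algorithm for base a.
    Returns a nontrivial factor of n, or None if a was an unlucky choice."""
    g = gcd(a, n)
    if g > 1:
        return g              # a already shares a factor with n
    r = find_order(a, n)
    if r % 2 == 1:
        return None           # odd order: try another a
    y = pow(a, r // 2, n)     # a^(r/2) mod n, a square root of 1 mod n
    if y == n - 1:
        return None           # trivial square root: try another a
    return gcd(y - 1, n)      # nontrivial factor of n

print(shor_factor(15, 7))     # order of 7 mod 15 is 4, so gcd(7^2 - 1, 15) = 3
```

    The quantum speedup lives entirely in `find_order`; everything else is cheap classical arithmetic.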

  • @gerguna
    @gerguna 2 months ago

    interesting, from human life as an interaction of symbolic forms (Ernst Cassirer) to AI!

  • @McGarr178
    @McGarr178 4 months ago

    The first point is strange, because high-dimensional vector representation is what underpins all transformer-based LLMs.
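
    The vector-symbolic style the video's first segment describes can be illustrated with a tiny hyperdimensional-computing sketch. The dimension, the role/filler names, and the ±1 encoding below are illustrative assumptions, not details from the video: random high-dimensional ±1 vectors are nearly orthogonal, so binding (elementwise product) and bundling (elementwise majority) let one vector hold a whole symbolic record.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high enough that random vectors are nearly orthogonal

def rand_hv():
    return rng.choice([-1, 1], size=D)

def bind(a, b):       # elementwise product: associates a role with a filler
    return a * b

def bundle(*vs):      # elementwise majority vote: superposes bound pairs
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):        # cosine-style similarity in [-1, 1]
    return float(a @ b) / D

# Encode the record "color = red, shape = circle" as a single vector.
color, shape, red, circle, blue = (rand_hv() for _ in range(5))
record = bundle(bind(color, red), bind(shape, circle))

# Binding is its own inverse for ±1 vectors, so unbinding recovers the filler.
print(sim(bind(record, color), red))   # clearly positive: red is the color
print(sim(bind(record, color), blue))  # near zero: blue is not
```

    The appeal for interpretability is that each query is an explicit algebraic operation on the representation, rather than an opaque forward pass.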

  • @laxkeeper15
    @laxkeeper15 4 months ago

    Weird how c^3 locally testable codes, released in December 2022, weren't mentioned.

  • @4115steve
    @4115steve 4 months ago

    That 3D Shor's algorithm, couldn't it be divided into itself, like a cube within a cube, just as the floats between 1 and 2 are infinite?

  • @johnpaily
    @johnpaily 2 months ago

    It is a nonlinear-science phenomenon. Life has answers to it. I am excited.

  • @m3rify
    @m3rify 4 months ago

    I loved emergence!

  • @jaytravis2487
    @jaytravis2487 4 months ago +2

    We might think we're supplying these AI systems with "bare-bone" assumptions/operational paradigms, but I don't think they're low-level enough. For instance, I would personally be more inclined to believe an AI system had reached the level of intelligence implicit in, say, the Turing Test if the AI could come up with the strategy of statistical inference ITSELF, on its own. What base-assumption pseudo-instincts would we need to supply to an AI for it to start developing this strategy to begin with?

    • @Bulborb1
      @Bulborb1 4 months ago

      Literally divinity.

  • @__blatatat
    @__blatatat 3 months ago

    Who made the graphics?

  • @TheRajasjbp
    @TheRajasjbp 4 months ago

    Please make one for economics

  • @user-yv4gg7jb2f
    @user-yv4gg7jb2f 4 months ago

    Congrats to the minds behind these breakthroughs

  • @IStMl
    @IStMl 4 months ago

    Good job ETHZ

  • @liuliuliu7321
    @liuliuliu7321 4 months ago

    Emergence means some unpredictable behavior happens. Could consciousness itself be emergent?
    Could AI develop consciousness in an unpredictable way, arriving sooner than humans ever thought and beyond our ability to control?

  • @darkwoodmovies
    @darkwoodmovies 3 months ago

    I feel like Computer Science is the only field where revolutionary new discoveries can come from just "we did this 40 years ago and it didn't work, but let's try it again now with faster chips".

  • @shinkurt
    @shinkurt 4 months ago

    We are almost there

  • @hindustaniyodha9023
    @hindustaniyodha9023 4 months ago

    Peak of human innovation would be solving the halting problem.

  • @monkerud2108
    @monkerud2108 4 months ago +1

    We are officially reaching interesting by now; let's hope we know what we are doing by the time we get to scary.

  • @John83118
    @John83118 3 months ago

    I'm obsessed with this content. I recently read a similar book, and I'm truly obsessed with it. "Dominating Your Clock: Strategies for Professional and Personal Success" by Anthony Rivers

  • @FuKungGrip
    @FuKungGrip 4 months ago +1

    Marvin Minsky still out there trying to make symbolic AI a thing...

  • @marcfruchtman9473
    @marcfruchtman9473 4 months ago +5

    Nemo is a clownfish... not a puffer. This emoji makes little sense.

  • @agrimm61
    @agrimm61 4 months ago

    8:57 can someone please change this failed hard disk drive?

  • @mmporg94
    @mmporg94 4 months ago

    So with chapter no. 2 (Shor) what you are saying is that a future iteration of an AI model will in fact be way over the point we are too afraid to admit it might be?
    Oh hey Rocco, didn't see ya there. How's it going? Gee whiz, I sure am happy to see you!

  • @Corteum
    @Corteum 4 months ago

    Would be interesting if you could get Anirban Bandyopadhyay and Stuart Hameroff on to talk about the quantum effects that have been observed in the human brain at normal operating temperatures. Love to see it. Keep up the good work :)

  • @monkerud2108
    @monkerud2108 4 months ago

    Now we are really cranking boys :)

  • @RigoVids
    @RigoVids 4 months ago

    I believe that the reason AI has stagnated is because it is still too closely related to mathematics to create truly emergent behavior. The universe we exist in has many layers of emergent behaviors which lead to our existence, and the specific existence we find ourselves in is subject to the laws of the universe. However the universe we have created for robots is essentially one where its main purpose is to create a mind, starting at our equivalent level of atomic physics. We try to use logic gates in large enough combinations to discover a mind. Instead we must first let individual components of a mind develop much like organelles in cells and then allow cells to combine to create a more general system. For example reasoning cells which control how well a system can logically deduce facts about the universe.
    I believe that this could be done through an intensive training program where logicians are tasked with judging the work of a reasoning bot until it hones in on truly valid logic and eventually pushes the envelope. Allow the bot to speak in the language of discrete mathematics and it will come to understand the significance of its existence.

  • @MaxGuides
    @MaxGuides 4 months ago +2

    This video is insane, twisted to say that it is not statistics all the way down, even if most people working with it don't need to understand that. Everyone who understands statistics and is at the actual forefront of developing these breakthrough techniques is just using business-speak to skirt around insulting the majority of MLEs; only a couple dozen people really have the deep understanding of statistics it takes. Most of the value is seen in applied AI, though, so you can use code to guide and combine these ML approaches into all sorts of useful things (thousands of people working on this, calling it nonsense like "Symbolic AI"). The name of the game is, and always has been, thinking about n-dimensional space in different ways that humans can visualize; any new approach to doing this usually leads to major optimizations or new ways of applying this tech.

  • @jletroui
    @jletroui 3 months ago

    Super interesting. I would just push back on the claim that efficiency gains will lead to energy savings. In the history of mankind, that has rarely happened. Instead, efficiency gains lead to more consumption, more than compensating for the gain, because they increase the value proposition. This has been named the Jevons paradox. So this part was greenwashing (intentional or not).

  • @deNuNietNooitNiet
    @deNuNietNooitNiet 3 months ago

    00:42
    I don't believe that is true.
    The things we can reason about now are all based on earlier experiences. In other words, we learned them, and in that process we actually altered/created parts of the neural network inside our brain.
    Or am I missing something here?

  • @swarm_into_singularity
    @swarm_into_singularity 18 days ago

    7:59 guy looks shiny

  • @Ma-pz5kl
    @Ma-pz5kl 4 months ago

    A very complex way to assert that the unknown can be plus or minus.

  • @rhysorton4531
    @rhysorton4531 4 months ago

    Shor's algorithm uses one dimension and the newer one uses multiple, so why don't we go higher, into dimensions like the fourth and fifth? Computers aren't limited by the fact that we can't understand those dimensions visually.

  • @dyroyo
    @dyroyo 4 months ago

    Emergence is spooky.