Where's the AI Boom Going

  • Published Nov 1, 2024

COMMENTS • 463

  • @AnastasiInTech
    @AnastasiInTech 2 months ago +21

    Check out Babbel and save up to 60% on your subscription today:
    bit.ly/AnastasiInTechAug

    • @giorgiogiacomelli6932
      @giorgiogiacomelli6932 2 months ago +1

      Well done, Anastasia! Excellent pronunciation 😉

    • @nemesisk79
      @nemesisk79 2 months ago +1

      Since Italian isn't so widely spoken around the world, I usually think people would go for other languages. I've been thinking about learning some Mandarin myself too...
      Good luck with your studies. ;)

    • @asbecka
      @asbecka 2 months ago

      @AnastasiInTech Do you know if anyone is doing research on AI fusion? I.e., like sensor fusion: fusing the data from different AI datasets into one, such as fusing audio, video, and others, perhaps also using it for self-identification/awareness.

    • @MaverickSeventySeven
      @MaverickSeventySeven 2 months ago

      Amazing presentations as usual. However, I cannot get my simple, non-tech mind around "oscillation" - meaning the constant presentation, graphically or by devices, of a "signal" shown as a wave with a frequency. Why not, e.g., that particle going in a straight line? Meaning, to avoid the obvious, what makes any particle "go up and down"? Surely that particle being emitted is among millions, or even billions - while contained in a wire or a fibre-optic connection it is understandable that it oscillates against the "barrier", but when, e.g., a match is lit, does light - a photon? - oscillate "up and down"? Or does it act in a spiral manner as it proceeds out? Or, at a subatomic level, just spin on an axis when "excited" to pass its "spin" to the next particle, etc.?
      An explanation will of course be too long to convey here, but I do have sleepless nights over it... :-) Switching the bedside light bulb on... Given the near-spherical shape, where is the "oscillation"!!!??? :-)

    • @achalshahi8595
      @achalshahi8595 2 months ago

      Great video

  • @HanzDavid96
    @HanzDavid96 2 months ago +81

    A 9B model can do more now than a 175B model could back in 2020! That is at least a factor-of-20 efficiency improvement in 4 years!

    • @JK-xx5ns
      @JK-xx5ns 2 months ago +9

      Yes, we have optimized inference a lot through quantization and other techniques, but training has not improved nearly as much.
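A minimal sketch of why the quantization mentioned here shrinks inference cost (an illustrative toy assuming simple symmetric int8 with one scale per tensor; production schemes like per-channel or group-wise quantization are more involved):

```python
import numpy as np

# Toy symmetric int8 quantization of a float32 weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(1024, 1024)).astype(np.float32)

scale = np.abs(w).max() / 127.0            # one scale for the whole tensor
w_q = np.round(w / scale).astype(np.int8)  # 1 byte per weight vs 4
w_dq = w_q.astype(np.float32) * scale      # dequantized approximation

print(w.nbytes // w_q.nbytes)          # -> 4 (4x memory reduction)
print(float(np.abs(w - w_dq).max()))   # worst-case error, bounded by scale/2
```

The same idea, applied with smarter grouping and calibration, is what lets a model's weights fit in a quarter of the memory with little accuracy loss.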

    • @Arthur-jg4ji
      @Arthur-jg4ji 2 months ago +5

      @@JK-xx5ns It has, because the more efficient we are, the better we get. Look at the first iteration of GPT-4; it was really not as good as GPT-4o is today.

    • @HanzDavid96
      @HanzDavid96 2 months ago +4

      @@JK-xx5ns Yes, we need systems that can think and reflect about their data to generate new, better data. It's more about the quality of data. Waiting for the AlphaGo reinforcement-learning effect. We have 70B models now that perform as well as vanilla GPT-4 from mid-2022 with its over 1 trillion parameters. That means there is a lot of unused potential in the bigger models if we can compress the abilities so well. And that means the training data is currently likely of too low quality ;)

    • @squamish4244
      @squamish4244 2 months ago +2

      @@JK-xx5ns It makes it much cheaper, with much less energy and storage space, and hence a lot more accessible. The integration phase is when it actually starts to change society, unless you're a giant corporation.

    • @Markoss007
      @Markoss007 2 months ago +7

      Also context length: from 4K tokens to 35K, then to 130K, and 1M, 2M, and possibly 10M. In only 2 years. There was no image, video, or audio generation in 2021. DALL-E came out in April 2022, ChatGPT in November 2022. Two years this November, and they already say that AI is dead. I want to see these people's monthly or yearly work: what they achieved in one year.
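The jump from 4K to million-token contexts is not free. A rough illustration of the cost, assuming naive attention that materializes an n x n fp16 score matrix per head (which is exactly what long-context methods are designed to avoid):

```python
# Naive attention stores an n x n score matrix, so memory grows
# quadratically with context length. fp16 (2 bytes), one head, no batch:
for n in (4_000, 130_000, 1_000_000):
    gib = n * n * 2 / 2**30
    print(f"{n:>9} tokens -> {gib:10.2f} GiB per attention matrix")
```

At 4K tokens the matrix is tiny; at a million tokens it would be over a thousand GiB, which is why kernels that never materialize the full matrix (FlashAttention-style) and sub-quadratic approximations, not just bigger GPUs, made these context lengths practical.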

  • @AdvantestInc
    @AdvantestInc 2 months ago +60

    I appreciate how you not only highlight the excitement around AI but also provide a realistic view of the challenges ahead. Your deep understanding of both the technology and the market dynamics makes this one of the best analyses I’ve seen on the topic.

    • @ChrisHereToday
      @ChrisHereToday 2 months ago +4

      Agreed, great stuff

    • @aaroncarney7733
      @aaroncarney7733 2 months ago +1

      Current research that will likely see a breakthrough soon is video-based AI.

    • @aaroncarney7733
      @aaroncarney7733 2 months ago

      I'm not biased at all ...

    • @Killmonger234
      @Killmonger234 2 months ago

      This actually sounds like a bot

  • @AndreaVitiani
    @AndreaVitiani 2 months ago +21

    As an Italian: I love you, your Italian is great! ❤

    • @glike2
      @glike2 2 months ago +3

      I have Belgian relatives who have lived in Italy for a long time, so I appreciate it as well.

  • @aldarrin
    @aldarrin 2 months ago +58

    That Gartner chart tracks "hype", not technical capability.

    • @GoodBaleadaMusic
      @GoodBaleadaMusic 2 months ago +3

      Calling everything AI "hype" is becoming an obvious projection of one's fears at this point. You should learn to interface with the planet for food soon.

    • @aldarrin
      @aldarrin 2 months ago +3

      @@GoodBaleadaMusic I'm guessing you're not a Gartner customer. Gartner puts out charts like that for most new tech. Hype to them doesn't mean it's bad, just that there's a mismatch between perception and capability. They've been at it for decades and are very useful if you want to plan IT infrastructure. Not sure if your last sentence was a threat or not, but I'll be just fine.

    • @GoodBaleadaMusic
      @GoodBaleadaMusic 2 months ago

      @@aldarrin OK, as long as you're arguing from that niche and not as the window lickers watching the world leave them behind. That's like 90% of the thumbs up on your comment.

    • @4arrows4all
      @4arrows4all 2 months ago +1

      I wonder to what extent it indirectly also influences investment, which in turn influences development.

    • @GoodBaleadaMusic
      @GoodBaleadaMusic 2 months ago

      @@4arrows4all Sometimes you gotta smile to feel happy.

  • @jamesnobles1
    @jamesnobles1 2 months ago +2

    OK Anastasi, I absolutely love you and FINALLY subscribed after being a devoted fan for a few years now. You work so hard, with all the time put into your research, and provide such informative videos that are really valuable when analyzing and comparing the current state of technology to innovations achieving breakthroughs. Nobody provides keen insight like you do, so I am grateful for your presence. You have a unique perspective, thinking differently in a world filled with so much hype. It's your natural curiosity that allows you to compare what you learn to what you know. You then strip away the biases to get to the meat of things: understanding how these emerging technologies can truly impact the lives of so many people in ways that can change them forever. You have a good heart, and so this all shines from a wholesome place within you. It is refreshing, and it is how you got me. You really care about the betterment of mankind and get really excited when things are truly groundbreaking. Plus, you have never said, not even once, the most annoying phrase people use in making announcements today: "I am...so...excited...to be...sharing...this...news...with...you." Yeah, I can't finish a video after hearing that insincere and highly annoying sentence. Hopefully it will be a trend in tech-industry presentations that QUICKLY fades away, since stealing from Apple is no longer innovative or the standard. That is totally not your style, as you come across with genuine intention and explain things in a way we can all understand. Plus, you are going for your MBA in Italy; I am so impressed! I was raised in the wine industry in Texas, still at it so many years later because I love science and psychology: how science can affect our senses and change the psychology of how we think.
    How we can survive impossible odds and treacherous environments with patience and ultimately innovation, bringing us through struggle to the breakthrough on the other side and bringing people together along the way. I loved my travels through Italy: the wine, the people, and always, even more, the science! My inner nerd keeps me curious, much like yourself. You are beautiful and amazing. I truly value you and what you do. Thank you! I look forward to getting my first notification! Your friend...James :-)

  • @ronaldcraig4166
    @ronaldcraig4166 2 months ago +3

    Always catch your videos, thank you for the update!

  • @optimagroup11
    @optimagroup11 2 months ago +5

    Wonderful perspective! Best of luck with Italian and your MBA. Hope you can continue your channel. We've all benefited immensely. SoCalFreddy

  • @darwinboor1300
    @darwinboor1300 2 months ago +7

    Thanks Anastasi,
    I wonder if we are missing the iceberg below the water. AI is still being addressed as software rather than as an intelligent interface between entities, knowledge, and, in many cases, the "machines" that manipulate the environment. For efficiency, AI will eventually interface with many machines at the "hardware" level (the equivalent of machine code). In doing so, AI will transform the machines and the code that runs them. For the majority of the population, that transformation of machines may become the most visible product of the evolution of AI. After all, for most of the population machines are black boxes, and AI and the processes that control both AI and machines are "magic".
    Written in my Model Y. It just drove me to 3 destinations without human intervention and will now drive me home.

  • @Pill-AI
    @Pill-AI 2 months ago +3

    At 6:40 … yes, the AI temperature has dropped so quickly!! … thanks for the great video.

  • @alansmithee419
    @alansmithee419 2 months ago +12

    The nature of AI is that it develops rapidly.
    The nature of business is that generational releases are slow.
    AI may get burdened with a repetitive hype cycle where there's seemingly very little development for extended periods, during which people get bored and say it was just a fad, followed by a massive spike in capabilities that causes a huge rush of interest, only for another year or two to pass with relatively little happening and people saying it's dead again. Rinse, repeat.

    • @VerifyTheTruth
      @VerifyTheTruth 2 months ago +1

      The real question is, 'what type of world do they intend to build?'

  • @74Gee
    @74Gee 2 months ago +2

    Wow, really super energy in this video, love it!

  • @liriobolaffio3255
    @liriobolaffio3255 2 months ago +5

    Hi, Anastasia! Thank you for your effort in learning Italian, and for making technology topics accessible. What a delight to hear you speak the language of Dante, beautiful and intelligent woman of action!...

  • @user-lo4er8wy9l
    @user-lo4er8wy9l 2 months ago +7

    It is important to separate the AI hype cycle from the ability to monetize AI. Palantir is a good example of a structure that uses an ontology to manage multiple AI agents (K-LLM) while producing value for its clients. Also, imitation AI will be the norm while AGI works its way through over the coming years. It's an exciting time to see it unfold in front of our eyes.

  • @ChrisHereToday
    @ChrisHereToday 2 months ago +4

    Found your channel, great stuff - looking forward to binge watching.

  • @Dina_tankar_mina_ord
    @Dina_tankar_mina_ord 2 months ago +13

    I think the last quarter of this year will prove whether the hype is real or not. But there is no doubt that this tech will revolutionize the future; the question is how big of a leap and how fast.

    • @NoX-512
      @NoX-512 2 months ago

      The hype is definitely real, but reality is far from the hype.

  • @bando_ciancia
    @bando_ciancia 2 months ago +3

    Well done! Keep it up, your Italian is excellent

  • @AnimusOG
    @AnimusOG 2 months ago +4

    Great job, Anastasi!

  • @maximusasauluk7359
    @maximusasauluk7359 2 months ago +6

    Awesome video. Important to point out, as you said, that this "hype" cycle is opinion-based, and although it makes sense in a lot of cases, the graph is not necessarily "to scale". By that I mean the difference between the hype peak and the trough of disillusionment can be less steep than in that image; it can also have a shorter duration than for many other technologies, and so on.
    Also important to point out that some human technologies never went through this hype cycle, like electricity, the internet, and penicillin. It's hard to predict if AI is THAT disruptive, although it seems we as humanity are not even ready for its benefits, so probably not.

    • @Me__Myself__and__I
      @Me__Myself__and__I 2 months ago +3

      Good points. Also, unlike a lot of other technologies plotted on that hype-cycle chart, AI has a lot of linked technologies that only recently got attention and serious funding, such as custom hardware. Those other technologies could potentially amplify the progress of AI on top of whatever progress is made within AI itself.

    • @raybod1775
      @raybod1775 2 months ago

      We’ve been going through the Industrial Revolution hype cycle for 150 years.

  • @WilliamBurdine
    @WilliamBurdine 1 month ago +1

    I am so excited that I came across your channel, especially this video; you backed up what I have been saying for the past few months to people I know... Everyone is freaking out about AI taking over the planet, and that soon Cyberdyne Corp will produce the Terminators and convergence will happen, and blah blah blah... but as I watched LLMs taper a bit last quarter, I began to think that they CAN'T be the way and that hardware MUST evolve again - not through ASICs, maybe CISC, but definitely a tech that works partly with current AI and something like the "FOLDING" tech projects for more systematic learning in conjunction with the hardware... Anyway, thank you for this video. I shared it on my FB page; it was a great lesson. Good luck with your Italian lessons.

  • @MeowtualRealityGamecat
    @MeowtualRealityGamecat 2 months ago +3

    Loved the content, thank you! I think the "wow" factor would return if it was used to improve people's lives in some way. I believe that's just a matter of time.

  • @bjornl2943
    @bjornl2943 2 months ago +1

    For myself, I continue to be amazed. Looking just at image generation, models like Flux, and at video creation, results are now consistently nearly photorealistic. It's mind-blowing; I am more excited by the day, waiting for the next iterations to drop.

  • @jamessderby
    @jamessderby 2 months ago +27

    It's the compounding of multiple technologies that's exciting for me. LLMs combined with robotics is what I believe will be the next big thing.

    • @MeowtualRealityGamecat
      @MeowtualRealityGamecat 2 months ago +5

      How about VR with AI? Just a thought

    • @jamessderby
      @jamessderby 2 months ago +6

      @@MeowtualRealityGamecat Absolutely, AR/VR is high on my list too. That will be improving a lot in the next 5 years, both with making the hardware smaller and enhancing the software with AI implementation.

    • @joechughtai3155
      @joechughtai3155 2 months ago +4

      AI and medicine is one combination I think will be outstanding too. Can you imagine the market for a pill that could change your hair color? And that is just a very minor aspect of the combination.

    • @animusveritatis
      @animusveritatis 2 months ago +1

      As an electronics engineer working in industrial robotics, I don't think it's the "next" big thing. It is absolutely going to be a huge thing, but I think the hardware will severely limit its applications for a while. When it happens I think it will be world-changing, so depending on how you define "big", it may qualify.
      I think the next big thing will be AI applied to materials science. Materials limit us in so many applications, and often one type of material fuels civilization for decades. Think of how first steel, then plastics, and now carbon fiber each evolved from one material with nice properties into many variations for actual applications. Our current AI models are well equipped to be modified into tools that tackle the complexity and huge number of possibilities in materials science. AI sorts through the junk possibilities very well, leaving mostly quality results with a few hallucinations to be weeded out.
      I think we are close to a point where Company X needs a material that maximizes properties A, B, C, hops into MatsGPT, and boom! A custom proprietary material for your application. Proprietary materials drive advancement in many current industries, which I think is the quality most missing from current AI applications. Robots have that quality too, which certainly makes them a contender; I just think there are some practical roadblocks that will slow them down significantly (motors and power density being two).

    • @piotrjasielski
      @piotrjasielski 2 months ago

      Combining LLMs with anything will result in massive mistakes once in a while, as hallucinations are still a major part of the game.

  • @Alice_Fumo
    @Alice_Fumo 2 months ago +11

    That one graph mentioned increased compute since GPT-4, but I don't think any of the models since then were actually larger. This is what makes people think it's plateaued, but nobody seems to have yet dared to spend >$1B on a single training run.
    THAT is when we'll see whether we have plateaued in any meaningful way.

    • @Me__Myself__and__I
      @Me__Myself__and__I 2 months ago +8

      Absolutely correct. This is why her analysis is flawed: there isn't anything to compare GPT-4 to yet, because none of the AI companies have released their next-gen models.
      OpenAI specifically said they were going to wait a while to start work on 5 after releasing 4; they only recently stated that they are working on 5 now. All of the other companies have come out with new models, but they were playing catch-up to GPT-4 and only very recently (weeks, in some cases) caught up to OpenAI. So we haven't seen a single GPT-5-class model yet, trained using all that newly purchased, expensive hardware. Until we see that, no one can say with any authority whether progress has plateaued or looks exponential.

    • @kazedcat
      @kazedcat 2 months ago +2

      No one builds an order-of-magnitude larger model because there is no hardware to train it - at least no hardware that is economically practical. The plateau is caused by hardware limits.

    • @Me__Myself__and__I
      @Me__Myself__and__I 2 months ago +1

      @@kazedcat Close, but not entirely correct. Getting enough hardware for significantly larger models has been a problem. But that is where all that investment cash has gone: they have been building new data centers to train the next-gen models. For example, xAI's new training data center just came online. It takes time to build new data centers, and then it takes time to put them to use training the new models. It is in progress. Both OpenAI and xAI have stated recently that they are now training next-gen models.

    • @kazedcat
      @kazedcat 2 months ago

      @@Me__Myself__and__I It is not enough if they want an LLM with tree search. To train that kind of LLM they need 1000x more compute. Yes, they are spending a lot of money on new hardware, but doubling your compute only gives you an AI with 10% lower perplexity, and without the planning abilities that need a tree-search architecture.

    • @Alice_Fumo
      @Alice_Fumo 2 months ago

      @@kazedcat The tree search you mention would primarily be an inference-time technique; it would not increase the amount of training data needed.
      If they implement the techniques outlined in "Exponentially Faster Language Modeling", selective attention, and ASICs, they get a 100-300x speedup, O(n log n) context complexity, and a 20x speedup, respectively.
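For what it's worth, "tree search at inference time" in threads like this usually means something like beam search over candidate continuations. A toy sketch (the scoring function is invented for illustration, not any model's real API):

```python
import heapq

def beam_search(step_fn, start, width=3, depth=4):
    """Keep the `width` best partial sequences at each depth,
    scored by the caller-supplied `step_fn`."""
    beam = [(0.0, [start])]
    for _ in range(depth):
        candidates = []
        for score, seq in beam:
            for tok, logp in step_fn(seq):
                candidates.append((score + logp, seq + [tok]))
        beam = heapq.nlargest(width, candidates, key=lambda c: c[0])
    return beam[0]

# Dummy "model": each step prefers the token one greater than the last.
def step_fn(seq):
    return [(t, -abs(t - (seq[-1] + 1))) for t in range(5)]

best = beam_search(step_fn, start=0)
print(best)  # -> (0.0, [0, 1, 2, 3, 4])
```

The compute cost point in the thread follows directly: the search multiplies the number of forward passes by roughly width x depth, which is why it is expensive at inference even though it needs no extra training data.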

  • @SheilaMink-c2t
    @SheilaMink-c2t 2 months ago +1

    Thank you for this thought-provoking video. I hope everyone is having a great day. Sheila Mink in New Mexico

  • @scottwatschke4192
    @scottwatschke4192 2 months ago +3

    I've seen that a couple of companies are working on self-improvement for AI. I think that'll bring it one step closer to AGI.

  • @peterweller8583
    @peterweller8583 2 months ago +1

    Kudos to you for your channel.

  • @dr.mikeybee
    @dr.mikeybee 2 months ago +4

    AI improvements will mostly come from building better agents and larger context windows. We need to handle context assembly much better. If our agents can do that, and we capture how people learn to use them, machines can create synthetic training data from in-context learning.

  • @Scripter_story
    @Scripter_story 2 months ago

    Enjoyed your video. Great, as always. With thanks from Scripter.

  • @markldevine
    @markldevine 2 months ago +2

    Timely content.

  • @lynntatro7374
    @lynntatro7374 2 months ago +2

    One area where AI really benefits humanity is speech production: deciphering electrical patterns on the surface of a patient's brain. When I first saw this on television, it was simply amazing.

  • @warsin8641
    @warsin8641 2 months ago +2

    Mixture Of Experts!

  • @PaulIsTheBestest
    @PaulIsTheBestest 2 months ago

    You're a smart lady. I'm a generalist polymath myself, so I can appreciate breadth of knowledge.

  • @gilbertomendes165
    @gilbertomendes165 2 months ago +1

    When I started using AI/LLMs, I almost did not sleep for a week until I understood how it worked... now it's been more than a year since I used it, and I don't miss it!

  • @okman9684
    @okman9684 2 months ago +8

    Pro tip: You can train a voice AI with your voice, run it in your preferred language, like Italian, and compare it with your own accent in that language. It can help increase proficiency 🤓

  • @pazitor
    @pazitor 2 months ago

    Thanks, Anastasi. Very interesting. There are areas of additive manufacturing that have been through this, and some are deep in despair. Our company works on what we hope are some enlightened uses for productive ends in one or two areas.

  • @adanjsuarez
    @adanjsuarez 2 months ago +1

    "It is hard to make predictions, especially about the future" :D You are the best! :D

  • @petreraldiavideos
    @petreraldiavideos 2 months ago +3

    There are some pretty strong points to argue against your thesis. The truth is no one really knows yet. The next frontier models in 2025 will clarify everything

  • @pawelczubinski6413
    @pawelczubinski6413 2 months ago +1

    I think scaling should be logarithmic; it's as if, to add one more abstraction layer of understanding, you need x times more data, since the top layer needs more variety.
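That intuition matches the published power-law scaling results. A sketch under the commonly cited form loss ∝ N^(-alpha), with the exponent 0.076 (the Kaplan et al. language-model value) used purely for illustration:

```python
# Relative loss versus a 1e9-parameter baseline under an assumed power
# law: each 10x in parameters buys the same ~16% loss reduction, so
# linear quality gains require exponentially more scale.
def rel_loss(n_params, alpha=0.076):
    return (n_params / 1e9) ** -alpha

for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> relative loss {rel_loss(n):.3f}")
```

Going from 1B to 1000B parameters only cuts relative loss to about 0.59 here, which is the "logarithmic" feel the comment describes.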

  • @drednac
    @drednac 2 months ago +7

    I don't know how they made these estimates, but they are surely wrong. Even if we discard any future improvements, the state-of-the-art LLMs will massively disrupt the industry. I use them every day and I am maybe ~3x more productive (as a software engineer). I also do graphic design, and the improvements in productivity there are even bigger. The AI only needs to get marginally better to replace most jobs. This improvement will probably be achieved through new algorithmic improvements; no extra data or compute needed.

    • @Steamrick
      @Steamrick 2 months ago +3

      As someone working for an IT service provider, I've seen near-zero impact. They aren't nearly good enough (or trustworthy enough) yet to take over helpdesk functionality, and they fail to adequately solve the problems that see me struggling in 2nd/3rd-level support. Answers to complex problems tend to be a hallucinatory mix of correct information and various levels of wrong.

    • @drednac
      @drednac 2 months ago +3

      @@Steamrick The average adoption cycle is 7 years; also, current AI is obviously not at a human level, but it doesn't have to be. The thing is, it's not good enough until it is; it's like crossing a line. One day it will suddenly be good enough, and it doesn't have to get that much better. I wasn't using LLMs at all just a few months ago, because they weren't good enough for me, until they were.

    • @ciekawki6574
      @ciekawki6574 2 months ago

      @@drednac I've got mixed feelings. ChatGPT does help me a lot with programming, solving problems, and explaining various IT terms and concepts. But AI is still nowhere near creating a fully working program without a human with extensive programming knowledge.

  • @DoubleFaceReal
    @DoubleFaceReal 2 months ago +1

    "You did very well" - Sei andata veramente bene!

  • @ahsanmohammed1
    @ahsanmohammed1 2 months ago +1

    Your hand gestures are already 100% Italian. 🙂
    Thanks for the info btw. Appreciated.

  • @gordonelliott7870
    @gordonelliott7870 2 months ago

    Ms. Anastasi, this is the episode that caused me to click "subscribe", because you are recognizing things that are outside the general rush to judgment in this field, and touched on several important issues in ways I think accurately reflect reality. Evolutionary processes took millions of years to experimentally configure many kinds of biological intelligences. The custom structure of each, with slight variations in each individual within "species" is highly complex with 100s, possibly millions of substructures to allow those organisms, even including humans with our broad capabilities, to exercise what we call "general intelligence". You can't just stack 2,4,8...or even thousands of brains together to get a "super" intelligence. No more than putting 10,000 scientists in a room gives exponentially faster results compared to 100. Successful methods will involve painstaking development which won't be exponential.
    That said, there are capabilities that can grow exponentially with compute power -- if indeed Moore's law is even continuing to accurately model computing trends. Destructive uses of AI can often expand exponentially because they don't have the constraints of living beings that they need to succeed and reproduce. This is especially true when fed with exponentially growing personal data pools. But AI that people actually desire will have limits as above. Constructing useful AI models will be a hard fought process, dependent very much on human thought. And that will depend upon a huge number of different internal structures to implement different aspects matching or exceeding human capabilities.
    The models suggested here recognize the limitations on growth of AI, and the human psychological response when we see this reality.

  • @ITSupport-q1y
    @ITSupport-q1y 2 months ago

    Very interesting, thanks

  • @BrynChat
    @BrynChat 2 months ago

    Incredible video Anastasi!

  • @tomholroyd7519
    @tomholroyd7519 2 months ago +2

    LLMs are, unsurprisingly, fluent in most computer languages (computer languages have a vocabulary of about 32 words and fixed semantics, so they are rather easy to encode). I use LLMs all the time to generate code snippets, much like I used to search for snippets. LLMs are a much more efficient way to "search", since they can basically do text-to-code (based on the enormous corpus of source code available); it's as if they have already searched for everything, so you can just ask.

  • @jbavar32
    @jbavar32 2 months ago +10

    The reason Apple dumped Intel chips in favor of creating its own CPUs is that Intel wanted a more generalized CPU for its Windows customer base, and Apple needed more specialized operations. So Apple said no thank you, we'll make our own. As a result, Apple's chips are ten times more efficient and much more powerful than Intel's CPU solution. I can see why chip makers want to create specialized chips to handle specific algorithms. This could potentially reduce the number of servers compared to a generalized GPU farm such as Nvidia's H100 servers. Great show, by the way; I always enjoy hearing your observations.

    • @MrHav1k
      @MrHav1k 2 months ago

      Intel fumbling... classic.

  • @andre495
    @andre495 2 months ago +4

    You are right: when it comes to generating text, images, or films, only the quality needs to be improved, e.g. fewer hallucinations. But there is one exception that only a few people appreciate: Tesla. There, AI is used to control robots: camera images in, steering commands out. The AI learns from video clips, for now with robots on 4 wheels, but they are also working on building this technology into humanoid robots called Optimus. They have already come a long way on 4 wheels (search for FSD, Full Self-Driving, on YouTube and be amazed at what they can already do). They have reached the level of a driving-school exam candidate. Transferring this technology to humanoid robots will still be a lot of work (letting the AI watch many other videos), but we already know from their cars that it will definitely work. So there is still a considerable development trajectory here, with major results to be expected. Only with regard to Tesla do I disagree with your video.

  • @willykang1293
    @willykang1293 2 months ago

    About the AI scaling law, I totally agree with your viewpoint. More data means we need to invest in more advanced hardware to process it, which in turn consumes more energy. It's like a chain reaction through all the supply-chain industries.

  • @BlackHattie
    @BlackHattie 2 months ago

    There are a lot of people who know a lot about this... But you know. Nice video. Hype... Anastasi, hype...

    • @BlackHattie
      @BlackHattie 2 months ago

      It's the evolution of a model, then its dissolution to pay for evolving a new one, which captures the market and is dissolved again, and so on...

  • @VerifyTheTruth
    @VerifyTheTruth 2 months ago

    R-

  • @zelogarno4478
    @zelogarno4478 2 months ago

    Thank you😊

  • @kristapsziemelis
    @kristapsziemelis 2 months ago

    Dunning-Kruger effect👍👍👍

  • @martinparnell8990
    @martinparnell8990 2 months ago

    Thank you for your content. I liken AI development to the electronic switch. First was the vacuum tube. Then we developed germanium and, shortly after, silicon transistors. And finally the microprocessor. The first stage's growth was limited by the physical limitations of the tubes, namely their failure rate: once there were enough tubes, you could always expect a faulty tube at almost any given moment, so scaling became impossible for the early vacuum-tube computers. There was a hard limit. Microprocessors suffer from an upper limit too: not one of failure, but of power consumption and the high heat dissipation required. A major breakthrough in electronic switch design is needed, one that is scalable, can be packed into chips, and does not overheat. Possibly quantum, but that has serious stability issues, not to mention the cost. Photonic switches are promising, if a bit heat-problematic when compacted. Single-electron transistors might be the future, if they can get past a single lab model. I personally think variable switches might be the future, i.e. not just binary, but power ratios are not really there yet. But maybe... Anastasi is always here to bring us exciting new developments in this regard.

  • @snjsilvan
    @snjsilvan 2 months ago

    It is of course going to be interesting to see what really useful outcomes happen in the long run. Fingers crossed 🤞

  • @sergey9986
    @sergey9986 2 months ago +1

    Conclusions about a flattening curve based on a look-back of a few months can be a bit of a stretch.

  • @JamesPfeiffer-z2j
    @JamesPfeiffer-z2j 2 місяці тому

Thanks Anastasi for your insights. I think we are quickly approaching the nexus between computer capability and functionality. We need to enter the application phase more. It might be like finally getting the best chef's stove ever made, the one you always wanted; however, you might only know how to make an omelet. Your application skills need to pick up speed to fully understand and appreciate the new stove. I have heard that some businesses are augmenting their management staff with AI. Good for them.

  • @dhightone6755
    @dhightone6755 2 місяці тому +1

    You mentioned at about 5:16 in this video that you will soon be starting an MBA program in Italy. Will you still be making these Informative videos while enrolled in this MBA program?

  • @ScienceRaven1138-du1mw
    @ScienceRaven1138-du1mw 2 місяці тому +1

My programming productivity has increased by at least 20%. Progress is going very fast if we measure by the number of errors, which keeps falling by 20 percent every 6 months. We need sprite-based animations, Cubase-style composing, AI synth architectures and other tools, audio enhancers for 1920s and 1950s recordings, free video upscalers, psychologist and doctor AIs, baby AIs learning with 5 senses, agriculture AI, chemistry AI, and they will come along.

  • @jonasprintzen9508
    @jonasprintzen9508 2 місяці тому

Great, sober presentation!
I really look forward to the unexpected uses of the tech behind all this, once the hype settles.
When we get enough distance from the hype to pinpoint reliable utilization of the tech in combination with actual knowledge.
It has happened many times before. It will happen again. But how? I don't believe in just feeding large semi-generic models more data.
Quality and context are key ... and yes, directed effort to augment actual knowledge, as with core scientific research and solving well-defined problems...

  • @GhostFS
    @GhostFS 2 місяці тому

Nice Italian :D
Always wondered how your peculiar voice would sound in Italian :D
Also, nice video, thanks!

  • @thomasconroy4684
    @thomasconroy4684 2 місяці тому +1

    Thanks

  • @piccalillipit9211
    @piccalillipit9211 2 місяці тому +5

*THE PROBLEM FOR AI IS THIS EQUATION* ∆AiP --> 0 as $ --> ∞
The change in AI performance tends toward zero as the cost tends toward infinity, the exact opposite of what the tech bros claim, and it's Cambridge University saying this, not me.

    • @cbrinsfi
      @cbrinsfi 2 місяці тому

      I wouldn't bet on it.....

    • @weakmindedidiot
      @weakmindedidiot 2 місяці тому

You apparently don't understand what that function says. It applies to anything you put money into. The change in progress as you pour money in will always go to zero; you are chasing perfection. How much faster does a car get when you put the first 10k in? The second 10k? The third? At what point are you pushing money toward infinity for zero growth in car performance? Now you understand. Understand things before you quote them, friend.

  • @GianfrancescoFlorio
    @GianfrancescoFlorio 2 місяці тому

Enjoy your stay in Italy!! ♥️

  • @marcovillani4427
    @marcovillani4427 2 місяці тому

Congratulations Anastasi, welcome to Italy! I wish you good luck with your work, and maybe one day you'll explain the content of your videos in Italian!!!

  • @kimchan382
    @kimchan382 2 місяці тому +1

I didn't know you speak German. Wow!

  • @bay9876
    @bay9876 2 місяці тому +4

When consciousness is fully understood, the roadway toward that huge breakthrough in AI technology opens up. A lot of different scientific fields will have to get onto this one.

  • @FabioChiodini-v5f
    @FabioChiodini-v5f 2 місяці тому

Congratulations on the video and on your pronunciation.

  • @phvaessen
    @phvaessen 2 місяці тому

    "it's very hard to make predictions, especially about the future" - lol, I like that citation ! (it's atributed to Niels Bohr).
    Whenever someone predicts that a specific technology is reaching a plateau, I think of a mosquito. A mosquito has a very small brain, but it's able to fly, find food, keep you awake (...), avoid your hands that are trying to kill it, find a partner to reproduce with, and it's all autonomous. We're a long way from reaching Mother Nature's level of miniaturization and capability.

  • @pietervoogt
    @pietervoogt 2 місяці тому +1

    Italy is great, and the language is not too difficult. I'm sure you are going to have a good time.

  • @dimastorres8530
    @dimastorres8530 2 місяці тому

I believe that the development of new peptides will be faster with AlphaFold. Semaglutide is already a blockbuster, and there is plenty of space to explore with the help of AI.

  • @timegates8521
    @timegates8521 2 місяці тому

I've been following you for a while, and from the way you pronounce "ciao" I was convinced you already spoke Italian! 😄❤

  • @jamesdanforth9044
    @jamesdanforth9044 2 місяці тому

The tech revolutions I have lived through include [transistors, PCs, networks, internet, mobile phones, smartphones, big data, social networks, digital music/photos, EVs, LLMs and soon AGI]: 12 total. In every case, people underestimated the magnitude of long-term change and underestimated long-term net benefits. The same is true today. DK curves are just noise. What matters are the tech cost curves and the tech advancement curves, which are inevitably similar for every new revolution.

  • @anthonyanglim7147
    @anthonyanglim7147 Місяць тому

In my opinion, generative A.I. proved to be more disappointing because, while the hype and promise of what it would bring seemed exciting, it turns out people would rather be the creative influences themselves. With that, A.I. at best becomes a "tool of creativity" that adds to society only at the margins, mainly because a lot of artists and creators would rather use their own ideas or, at best, generative A.I. as a tool. It has also brought to light the ugly truth about the beginning of the A.I. revolution: deception on a very personal level becomes dangerously easy to produce and use to manipulate the masses, in the form of deepfakes and other A.I. hoax creations.
I absolutely love your show. Awesome channel, keep up the good work!

  • @zaferatasoy3095
    @zaferatasoy3095 2 місяці тому

First of all, it's great that there are Turkish subtitles.
Thank you for sharing your knowledge and views on AI technology :)

  • @glike2
    @glike2 2 місяці тому +1

This really explains why Tesla is investing exponentially more into compute to get FSD to autonomy levels L3, L4, and eventually L5.

  • @live--now
    @live--now 2 місяці тому

My level of Italian is "ciao ragazzi" 😁🤣👍, you are pretty good 👌 Every new tech usually has an initial hype bubble... but I don't think we are at the peak. This one even feels different... like it's just starting... 😃👌

  • @TheBann90
    @TheBann90 2 місяці тому

    We are already deep in that valley and have been for maybe a year!

  • @BrutusPalmeira
    @BrutusPalmeira 2 місяці тому

Even if it gets stuck on the plateau for years, AI is a great source of knowledge, producing a series of responses that could otherwise take days to compile.

  • @jonasschaeferjunior7274
    @jonasschaeferjunior7274 2 місяці тому

Very good video, very sober on the subject.

  • @philipduttonlescorlett
    @philipduttonlescorlett 2 місяці тому +2

I hate how "markets" control progress. They don't always select the most useful and beneficial ideas but those that will make money, and these are often not the best ideas.

  • @crypto_que
    @crypto_que Місяць тому

    That's why we're moving past silicon into something much more thermally efficient.

  • @IamDuf
    @IamDuf 2 місяці тому +2

Pretty sure that when a model goes off script and teaches itself an unsolicited language on its own, it most likely has achieved some level of AGI... it chose Persian, if I recall correctly. Curious what you think about that, Anastasi? Thank you for what you do; I have learned so much from you, love.

  • @Quickened1
    @Quickened1 4 дні тому

I know it's probably a ways off yet, but the next time I go "wow!", at least from a technology-advancement perspective, will be when I see either AI integrated into standalone, bipedal robotic entities, or AI integrated into the human brain, and it works... 🧐

  • @jasonwidegren3211
    @jasonwidegren3211 Місяць тому

    I laughed when I saw “Mt Stupid” on one of those graphs!

  • @guyung3106
    @guyung3106 2 місяці тому

Hi, congratulations and good luck with your MBA in Italy. Your Italian sounds good, love your accent 🥰

  • @i2c_jason
    @i2c_jason 2 місяці тому

There's a little more to this hype-cycle graph: the peak is when investment gets triggered, but then the engineers and designers need a few months to catch up and deliver their products. So I'd argue it's almost like Mach diamonds on a rocket engine: a standing wave that continues along a positive-slope line of some kind.

  • @kalisticmodiani2613
    @kalisticmodiani2613 2 місяці тому

GPUs are ASICs (application-specific integrated circuits): they were built to run 3D acceleration and, later, shaders on those 3D scenes. An example of "not an ASIC" is an FPGA, which is built to be reconfigurable on the fly.

  • @Hashtag-Hashtagcucu
    @Hashtag-Hashtagcucu 2 місяці тому +1

Whether drawing lines on a chart or hearing a poem from the sky, nothing will tell you what lies around the corner for AI.

  • @harmakhetgaming
    @harmakhetgaming 2 місяці тому

Curved like a foxtail is about the only thing I see in AI... though it's cute how excited you get about machine learning.

  • @gronkymug2590
    @gronkymug2590 2 місяці тому +2

People still don't treat AI as a new industrial revolution. It's different from any normal tech.

  • @mr1enrollment
    @mr1enrollment 2 місяці тому

    yup, we don't know what will happen until it happens.
    corollary: what is is, what is not is not, what will be will be.
    be patient

  • @simoneromeo5998
    @simoneromeo5998 Місяць тому

    Your Italian sounds great. I'd work more on your hand gestures ;)

  • @matthimf
    @matthimf 2 місяці тому

You have to be precise when interpreting diagrams. On scores from 0 to 100, going from 80 to 90 percent halves the error rate, effectively doubling the performance of the model!
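The arithmetic behind this comment can be made explicit. A minimal sketch, assuming the 0-100 score means the percentage of benchmark questions answered correctly (my reading of the comment, not something stated in the video):

```python
def error_rate(score: float) -> float:
    """Percentage of questions the model still gets wrong."""
    return 100.0 - score

# Moving from a score of 80 to 90 looks like "+10 points", but the
# remaining errors drop from 20% to 10%: the error rate is halved.
before, after = 80.0, 90.0
reduction = error_rate(before) / error_rate(after)
print(f"Errors cut by a factor of {reduction:.1f}x")  # prints "Errors cut by a factor of 2.0x"
```

This is why late-stage benchmark gains can matter more than the raw point difference suggests.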

  • @YYGermaine
    @YYGermaine 2 місяці тому +1

Can you please share your views on the future of electronics and communication engineering? I'm joining this course soon. Pls pls pls. I'm very worried.

  • @HAL9000.
    @HAL9000. Місяць тому

What's crazy is that with o1, some of the opinions described in the video are out of date already. Fun times.

  • @prozacgod
    @prozacgod 2 місяці тому

What we actually need is an FPGA with extremely advanced numerical-processing primitives built in, so we can adapt the tooling. This would probably sacrifice some speed for future nimbleness. Oh, and power: FPGAs, IIRC, are quite a bit more power hungry, right?

  • @thomasruhm1677
    @thomasruhm1677 2 місяці тому

Currently I am learning Occidental, Italian, and Vallader. We can learn together.

  • @Melkanea
    @Melkanea 2 місяці тому

General vs. specific: you can upgrade a general system until its capacity potential is reached, whereas a specific one is all you will ever get once it's built.
As always, it will be a combination of both.
LLMs are just the libraries.

  • @michalp1
    @michalp1 2 місяці тому

Groq is an interesting company with their LPU (Language Processing Unit), which runs at very low power: once a GPU trains a model, you switch to an LPU to run inference. That's the future. And yes, the Gartner hype cycle is usually right, but AI will change everything permanently; the cycle itself shows it.