AI Hardware, Explained.

  • Published Nov 26, 2024

COMMENTS • 42

  • @a16z
    @a16z  1 year ago +4

    For a sneak peek into parts 2 and 3, they're already live on our podcast feed! Animated explainers coming soon.
    a16z.simplecast.com/

    • @cmichael981
      @cmichael981 11 months ago

      Doesn't look like parts 2/3 are up on the podcast feed (anymore, at least). Any chance those video explainers are still coming out?

  • @jack_fischer
    @jack_fischer 1 year ago +8

    The music is very distracting. Please tone it down in the future.

  • @a16z
    @a16z  1 year ago +6

    Timestamps:
    00:00 - AI terminology and technology
    03:54 - Chips, semiconductors, servers, and compute
    05:07 - CPUs vs GPUs
    06:16 - Future architecture and performance
    07:12 - The hardware ecosystem
    09:20 - Software optimizations
    11:45 - What do we expect for the future?
    14:25 - Sneak peek into the series

  • @Inclinant
    @Inclinant 9 months ago

    Since floating-point numbers are usually represented at 32-bit, is this why quantization of LLM models can go so much smaller, to around 4-bit with ExLlama, making it so much easier to fit models into the limited VRAM that consumer GPUs have?
    Incredible video; the interviewer asks really thought-provoking and relevant questions, and the interviewee is extremely knowledgeable as well. It's broken down so well too!
    Also, extremely grateful to a16z for supporting TheBloke's work in LLM quantization! High-quality quantization and simplified instructions make LLMs so much easier to use for the average Joe.
    Thanks for creating this video.

    • @msclrhd
      @msclrhd 6 months ago

      It's a trade-off between accuracy and space/performance (i.e. being able to fit the model on local hardware). A 1-bit number can represent only 2 values, e.g. (0, 1) or (0, 0.5). With 2 bits you can store 4 values, so you could represent (0, 1, 2, 3), signed values (-2, -1, 0, 1), floats between 0 and 1 (0, 0.25, 0.50, 0.75), etc., depending on the representation. The more bits you have, the better the range (minimum, maximum) of values you can store and the finer the precision (the gap, or distance, between adjacent values).
      Ideally you want enough bits to keep the weights of the model close to their trained values, so you don't significantly alter the behaviour of the network. Generally a quantization of 6-8 bits offers accuracy (perplexity score) comparable to the original; below that you get an exponential degradation in accuracy, with anything below 4 bits being far worse.

  • @NarsingRaoschoolknot
    @NarsingRaoschoolknot 8 months ago +1

    Well done, very clean and clear. Love your simplicity

  • @Matrix1Gamer
    @Matrix1Gamer 9 months ago

    Guido Appenzeller is speaking my language. Chip lithography keeps shrinking while chips consume lots of power. Parallel computing is definitely going to be widely adopted going forward. RISC-V might replace the x86 architecture.

  • @lnebres
    @lnebres 1 year ago +1

    An excellent primer for beginners in the field.

  • @AlexHirschMusic
    @AlexHirschMusic 10 months ago +2

    This is highly informative and easy to understand. As an idiot, I really appreciate that a lot.

  • @TINTUHD
    @TINTUHD 1 year ago +2

    Great video. Just the tip of the computation innovation iceberg.

  • @AnthatiKhasim-i1e
    @AnthatiKhasim-i1e 3 months ago

    "To remain competitive, large companies must integrate AI into their supply chain management, optimizing logistics, reducing costs, and minimizing waste."

  • @lerwenliu9263
    @lerwenliu9263 9 months ago

    Love this channel! Could we also look at the hunger for energy and the impact on climate change?

  • @IAMNOTRANA
    @IAMNOTRANA 1 year ago +3

    No wonder Nvidia doesn't care about consumer GPUs anymore.

  • @kymtoobe
    @kymtoobe 5 months ago +1

    This is a good video.

  • @nvr1618
    @nvr1618 1 year ago

    Excellent video. Thank you and well done

  • @Doggieluv25
    @Doggieluv25 1 year ago +1

    Really helpful thank you!

  • @adithyan_ai
    @adithyan_ai 1 year ago

    Incredibly useful!! Thanks.

  • @dinoscheidt
    @dinoscheidt 1 year ago

    1:24 Ehm… I would like to know what camera and lens/focal length you use to match the boom arm and background bokeh so perfectly 🤐

    • @StephSmithio
      @StephSmithio 1 year ago +3

      I use the Sony a7iv camera with a Sony FE 35mm F1.4 lens! I should note that good lighting and painting the background dark do wonders too, though.

  • @MegaVin99
    @MegaVin99 11 months ago +1

    Thanks for the video, but 4 minutes before getting to any details in a 15-minute video?

  • @stachowi
    @stachowi 1 year ago

    This was very good

  • @billp37abq
    @billp37abq 3 months ago

    This video makes clear WHY DSP [digital signal processing] chips were implementing sum{a[i]*b[i]} in hardware!
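
    For anyone unfamiliar with the notation: sum{a[i]*b[i]} is a dot product, built from repeated multiply-accumulate (MAC) steps, and the same kernel dominates the matrix multiplications at the heart of neural-network inference. A minimal plain-Python sketch of the idea (illustrative only, not tied to any particular DSP API):

        def dot(a, b):
            """Dot product via repeated multiply-accumulate (MAC) steps."""
            acc = 0.0
            for x, y in zip(a, b):
                acc += x * y      # one MAC per element: the operation DSPs hard-wire
            return acc

        print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 1*4 + 2*5 + 3*6 = 32.0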

  • @vai47
    @vai47 1 year ago

    Older Vox style animations FTW!

  • @chenellson489
    @chenellson489 1 year ago

    See you at NY Tech Week

  • @SynthoidSounds
    @SynthoidSounds 1 year ago

    A slightly different way of looking at Moore's Law is that it's not about being "dead", but rather becoming irrelevant. Quantum computing operates very differently from binary digital computation; it's irrelevant to compare these two separate domains in terms of "how many transistors" can fit into a 2D region of space, or FLOPS performance. Aside from the extreme parallelism available in QC, the next stage from "here" is optical computing, utilizing photons instead of electrons as the computational mechanism. Also, scalable analog computing ICs (for AI engines) are being developed (IBM, for example) . . . Moore's Law isn't relevant in any of these.

  • @LeveragedFinance
    @LeveragedFinance 1 year ago

    Great job

  • @thirukaruna7469
    @thirukaruna7469 1 year ago

    Good one, Thx.!

  • @joshuatruong2001
    @joshuatruong2001 1 year ago +1

    The Render network token solves this

  • @shwiftymemelord261
    @shwiftymemelord261 4 months ago

    It would be so cool if this main speaker were a clone

  • @LeveragedFinance
    @LeveragedFinance 1 year ago +2

    Huang's law

  • @billp37abq
    @billp37abq 3 months ago

    Do AI and cloud computing face the same power supply issues as cryptocurrencies?
    "Cryptocurrency mining, mostly for Bitcoin, draws up to 2,600 megawatts from the regional power grid, about the same as the city of Austin."

  • @gracekim2863
    @gracekim2863 1 year ago

    Back to School Giveaway

  • @RambleStorm
    @RambleStorm 1 month ago

    The GeForce 256, aka GeForce 1, wasn't even Nvidia's first GPU, let alone the first ever PC GPU... 😅😂

  • @antt8550
    @antt8550 1 year ago

    The future

  • @billp37abq
    @billp37abq 3 months ago

    Has AI power consumption doomed it to failure before it has even started?
    ua-cam.com/video/lRy5Sy9Elbw/v-deo.html

  • @mr.wrongthink.1325
    @mr.wrongthink.1325 1 month ago

    The music is unnecessary and actually annoying.