Is the Future of Linear Algebra.. Random?

  • Published 24 Nov 2024

COMMENTS • 528

  • @charilaosmylonas5046
    @charilaosmylonas5046 7 months ago +405

    Great video! I want to add a couple of references to what you mentioned in the video related to neural networks:
    1. Ali Rahimi got the NeurIPS 2017 "test of time" award for a method called random kitchen sinks (a kernel method with random features).
    2. Choromanski (from Google) made a variation of this idea to alleviate the quadratic memory cost of self-attention in transformers (which also works like a charm - I tried it myself, and I'm still perplexed that it didn't become one of the main efficiency improvements for transformers). Check "Rethinking Attention with Performers".
    Thank you for the great work on the video - keep them coming please! :)
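The random-features idea mentioned above is easy to demo. Below is a minimal numpy sketch of random Fourier features approximating an RBF kernel, in the spirit of Rahimi & Recht; all names, sizes, and constants are my own illustration, not code from the video or the papers:

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(X, n_features=2000, gamma=1.0, rng=rng):
    """Map X so that z(x) @ z(y) approximates exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))  # spectral samples
    b = rng.uniform(0, 2 * np.pi, size=n_features)                  # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(5, 3))
Z = rff(X)
K_exact = np.exp(-np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))  # exact RBF kernel
K_approx = Z @ Z.T                                                  # random-feature estimate
print(np.max(np.abs(K_exact - K_approx)))  # small; shrinks as n_features grows
```

The point of the trick: an explicit finite-dimensional feature map replaces the kernel, so downstream models become linear in `Z`.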

    • @howuhh8960
      @howuhh8960 7 months ago +16

      it didn't because all efficient variations have significantly worse performance on retrieval tasks (associative recall for example), as all recent papers demonstrated

    • @Arithryka
      @Arithryka 7 months ago +4

      The Quadratic Memory Cost of Self-Attention in Transformers is my new band name

    • @theo1103
      @theo1103 6 months ago

      Is this a similar idea compared with the latent space in the transformer?

    • @hyperplano
      @hyperplano 6 months ago +2

      Rahimi got the award for the "Random Features for Large-Scale Kernel Machines" paper, not the random kitchen sinks one

    • @rileyjohnmurray7568
      @rileyjohnmurray7568 6 months ago +1

      @@howuhh8960 do you have specific references for this claim? I'm not doubting you, I'm just really interested in learning more, and the literature is vast.

  • @BJ52091
    @BJ52091 7 months ago +741

    As a mathematician specializing in probability and random processes, I approve this message. N thumbs up where N ranges between 2.01 and 1.99 with 99% confidence!

    • @Mutual_Information
      @Mutual_Information 7 months ago +54

      Great to have you here!

    • @purungo
      @purungo 7 months ago +75

      So you're saying there's a 1 chance in roughly 10^16300 that you're giving him 3 thumbs up...

    • @frankjohnson123
      @frankjohnson123 7 months ago +12

      My brother in Christ, use a discrete probability distribution.

    • @nile6076
      @nile6076 7 months ago +25

      Only if you assume a normal distribution! ​@@purungo

    • @sylv512
      @sylv512 7 months ago +2

      Is this just one big late april fool's? What the hell

  • @octavianova1300
    @octavianova1300 7 months ago +1234

    reminds me of that episode of veggie tales when larry was like "in the future, linear algebra will be randomly generated!"

    • @NoNameAtAll2
      @NoNameAtAll2 7 months ago +82

      W E E D E A T E R

    • @rileymurray7437
      @rileymurray7437 7 months ago +22

      Reminds you of what???

    • @jedediahjehoshaphat
      @jedediahjehoshaphat 7 months ago +8

      xD

    • @vyrsh0
      @vyrsh0 7 months ago +9

      I thought it would be some nice science show, but it turns out to be some kids show : (

    • @notsojharedtroll23
      @notsojharedtroll23 7 months ago

      ​@@rileymurray7437 he means this video: ua-cam.com/video/j4Ph02gzqmY/v-deo.htmlsi=wb2atwfoSQaefrjL

  • @laurenwrubleski7204
    @laurenwrubleski7204 7 months ago +456

    As a developer at AMD I feel somewhat obligated to note we have an equivalent to cuBLAS called rocBLAS, as well as an interface layer hipBLAS designed to compile code to make use of either AMD or NVIDIA GPUs.

    • @sucim
      @sucim 7 months ago +46

      but can your cards train imagenet without crashing?

    • @389martijn
      @389martijn 7 months ago +30

      @@sucim sheeeeeeeeesh

    • @johnisdoe
      @johnisdoe 7 months ago +1

      Are you guys hiring?

    • @Zoragna
      @Zoragna 7 months ago +1

      OP forgot that BLAS is a standard, so most implementations have been forgotten; it's weird to point at Nvidia

    • @cannaroe1213
      @cannaroe1213 7 months ago +21

      As an AMD customer who recently bought a 6950XT for €600, I am disappointed to learn rocBLAS is not supported on my outdated 2 year old hardware.

  • @TimL_
    @TimL_ 7 months ago +164

    The part about matrix multiplication reminded me of studying cache hit and miss patterns in university. Interesting video.

    • @piedepew
      @piedepew 3 months ago +1

      Dynamic programming question

  • @charlesloeffler333
    @charlesloeffler333 7 months ago +97

    Another tidbit about LINPACK: one of its major strengths at the time it was written was that all of its double precision algorithms were truly double precision. At that time, other packages often had double precision calculations hidden within the single precision routines, whereas their double precision counterparts did not have quad-precision parts anywhere inside. The LINPACK folks were extraordinarily concerned about numerical precision in all routines. It was a great package.
    It also provided the basis for MATLAB.

    • @ILoveTinfoilHats
      @ILoveTinfoilHats 3 months ago +1

      And it's so good at using CPU resources as optimally as possible that Intel used it for stress and stability testing their CPUs for years (and still do to some degree AFAIK)

  • @scottmiller2591
    @scottmiller2591 7 months ago +110

    Brunton, Kutz et al., in the paper you mentioned here, "Randomized Matrix Decompositions Using R," recommended using Nathan Halko's algo, developed at the CU math department. B&K give some timing data, but the time and memory complexity were already computed by Halko, and he had implemented it in MATLAB for his paper - B&K ported it to R. Halko's 2009 paper "Finding Structure with Randomness: Stochastic Algorithms for Constructing Approximate Matrix Decompositions" laid this all out 7 years before the first draft of the B&K paper you referenced. Halko's office was a mile down the road from me at that time, and I implemented Python and R code based on his work (it was used in medical products, and my employer didn't let us publish). It does work quite well.
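For readers curious what a Halko-style randomized SVD looks like in practice, here is a minimal numpy sketch (random range finder, then a small exact SVD); parameter choices and names are illustrative, not the paper's reference code:

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_svd(A, k, oversample=10, n_iter=2, rng=rng):
    """Halko-style randomized SVD: find a basis for the range of A by
    random sampling, then do an exact SVD in that small subspace."""
    m, n = A.shape
    Omega = rng.normal(size=(n, k + oversample))  # random test matrix
    Y = A @ Omega                                 # samples of the range of A
    for _ in range(n_iter):                       # power iterations sharpen the basis
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                        # orthonormal basis for the range
    B = Q.T @ A                                   # small (k + oversample) x n problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# exactly rank-20 test matrix
A = rng.normal(size=(200, 20)) @ rng.normal(size=(20, 150))
U, s, Vt = randomized_svd(A, k=20)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(err)  # near machine precision: the sketch captures the full rank-20 range
```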

    • @Mutual_Information
      @Mutual_Information 7 months ago +21

      Very cool! The more I researched this, the more I realized the subject was deeper (and older) than I had realized from the first few papers I read. It's interesting to hear your on-the-ground experience with it, and I'm glad the video got your attention.

    • @ajarivas72
      @ajarivas72 7 months ago +1

      @@Mutual_Information
      Has anyone tried genetic algorithms instead of purely random approaches?
      In my experience, genetic algorithms are 100x faster than Monte Carlo simulations at reaching an optimum.

    • @skn123
      @skn123 6 months ago +1

      Halko's algorithm helped me start my understanding of Laplacian eigenmaps and other dimensionality reduction methods.

  • @richardyim8914
    @richardyim8914 7 months ago +34

    Golub and Van Loan’s textbook is goated. I loved studying and learning numerical linear algebra for the first time in undergrad.

  • @danielsantiagoaguilatorres9973
    @danielsantiagoaguilatorres9973 7 months ago +47

    I'm writing a paper on a related topic. Didn't know about many of these papers, thanks for sharing! I really enjoyed your video

  • @makapaka8247
    @makapaka8247 7 months ago +78

    I'm finally far enough in education to see how well made your stuff is. Super excited to see a new one from you. Thanks for expanding people's horizons!

  • @pietheijn-vo1gt
    @pietheijn-vo1gt 7 months ago +59

    I have seen a very similar idea in compressed sensing. In compressed sensing we also use a randomized sampling matrix, because the errors can be treated as white noise. We can then use a denoising algorithm to recover the original data. In fact, I know Philips MRI machines use this technique to speed up scans, because you have to take fewer pictures. Fascinating

    • @tamineabderrahmane248
      @tamineabderrahmane248 7 months ago

      random sampling to reconstruct the signal

    • @pietheijn-vo1gt
      @pietheijn-vo1gt 7 months ago

      @@tamineabderrahmane248... what?

    • @MrLonelyrager
      @MrLonelyrager 7 months ago +5

      Compressed sensing is also useful for wireless communications. I studied its usage for sampling ultra-wideband signals and for indoor positioning. It only works accurately under certain sparsity assumptions. In MRI scans, the "Fourier transform" can be considered sparse, so we can use l1 denoising algorithms to recover the original signal.

    • @pietheijn-vo1gt
      @pietheijn-vo1gt 7 months ago +1

      @@MrLonelyrager yes, correct, that's exactly what I used, in the form of ISTA (iterative shrinkage-thresholding) algorithms and their many (deep-learning) derivatives
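For the curious, a bare-bones ISTA iteration for sparse recovery from random Gaussian measurements might look like the toy sketch below (my own illustration under noiseless assumptions, not Philips' or anyone's production code; all names and sizes are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def ista(A, y, lam=0.05, n_iter=500):
    """ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L      # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

n, m, k = 200, 80, 5                          # 80 random measurements of a 200-dim signal
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # 5-sparse signal
A = rng.normal(size=(m, n)) / np.sqrt(m)      # random Gaussian sensing matrix
y = A @ x_true
x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```

Despite having far fewer measurements than unknowns (80 vs. 200), the l1 penalty recovers the sparse signal, which is the compressed-sensing point made above.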

  • @noahgsolomon
    @noahgsolomon 7 months ago +8

    You discussed all the priors incredibly well. I didn’t even understand the premise of random in this context and now I leave with a lot more.
    Keep it up man ur videos are the bomb

  • @deltaranged
    @deltaranged 7 months ago +29

    It feels like this video was made to match my exact interests LOL
    I've been interested in NLA for a while now, and I've recently studied more "traditional" randomized algorithms in uni for combinatorial tasks (e.g. Karger's Min-cut). It's interesting to see how they've recently made ways to combine the 2 paradigms. I'm excited to see where this field goes. Thanks for the video and for introducing me to the topic!

    • @Rockyzach88
      @Rockyzach88 7 months ago +1

      YouTube has you in its palms. _laughs maniacally_

    • @Sino12
      @Sino12 7 months ago

      where do you study?

  • @mgostIH
    @mgostIH 7 months ago +8

    I started reading this paper when you mentioned it on Twitter, forgot it was you who I got it from and was now so happy to see a video about it!

  • @pr0crastinatr
    @pr0crastinatr 7 months ago +6

    Another neat explanation for why the randomized least-squares problem works is the Johnson-Lindenstrauss lemma. That lemma states that most vectors don't change length a lot when you multiply them by a random gaussian matrix, so the norm of S(Ax - b) is within (1-eps) to (1+eps) of the norm of Ax-b with high probability.
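The claim above is easy to check numerically. A toy sketch (sizes and names are my own) showing that a random Gaussian sketch roughly preserves the residual norm, and that sketch-and-solve then gets a near-optimal least-squares residual:

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, s = 10_000, 20, 200
A = rng.normal(size=(m, n))
b = rng.normal(size=m)
S = rng.normal(size=(s, m)) / np.sqrt(s)   # random Gaussian sketch, 10,000 -> 200 rows

x = rng.normal(size=n)                     # an arbitrary candidate solution
r = A @ x - b
r_sketch = S @ r
print(np.linalg.norm(r_sketch) / np.linalg.norm(r))   # close to 1 (JL-style concentration)

# sketch-and-solve: solve the 200-row problem instead of the 10,000-row one
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
x_sk, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
print(np.linalg.norm(A @ x_sk - b) / np.linalg.norm(A @ x_full - b))  # slightly above 1
```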

  • @grewech
    @grewech 3 months ago +4

    Was looking for a nerdy video to fall asleep to, but couldn’t take my eyes off the screen. Excellent presentation and very well done video!

  • @charlesity
    @charlesity 7 months ago +9

    As always this is BRILLIANT. I started following your videos since I saw the GP regression video. Great content! Thank you very much.

  • @marcegger7411
    @marcegger7411 7 months ago +7

    Damn... your videos are getting beyond excellent!

  • @bluearctik3980
    @bluearctik3980 7 months ago +4

    My first thought was "this is like journal club with DJ"! Great stuff - well researched and crisply delivered. More of this, if you please.

  • @aleksszukovskis2074
    @aleksszukovskis2074 7 months ago +6

    its always a pleasure to watch this channel

  • @zyansheep
    @zyansheep 7 months ago +19

    Dang, I absolutely love videos and articles that summarize the latest in a field of research and explain the concepts well!

  • @ernestoherreralegorreta137
    @ernestoherreralegorreta137 7 months ago +6

    Amazing explanation of a complex topic! You've got yourself a new subscriber.

  • @AntonTkachuk-s1s
    @AntonTkachuk-s1s 6 months ago +3

    I once used random matrices for eigenvalue counts on intervals, and it was amazing!
    Di Napoli, E., Polizzi, E., & Saad, Y. (2016). Efficient estimation of eigenvalue counts in an interval. Numerical Linear Algebra with Applications, 23(4), 674-692.

  • @firefoxmetzger9063
    @firefoxmetzger9063 6 months ago +11

    I realize that YT comments are not the best place to explain "complex" ideas, but here it goes anyway:
    The head-bending "relative difference" reply is "just" a coordinate transformation. At 29:45, you lay ellipses atop each other and show the absolute approximation difference between the full sample and the sketch. The "trick" is to realize that this happens in the common (base) coordinate system and that nothing stops you from changing this coordinate system. For example, you can (a) move the origin to the centroid of the sketch, (b) rotate so that X and Y align with the semi-axes of the sketch, and (c) scale (asymmetrically) so that the sketch's semi-axes have length 1. What happens to the ellipsoid of the full sample in this "sketch space"?
    Two things happen when plotting in the new coordinate system: (1) the ellipsoid of the sketch becomes a circle around the origin (semi-axes are all 1) by construction. (2) the ellipsoid of the full sample becomes an "almost" circle, depending on the quality of the approximation of the full sample by the sketch. As sample size increases, the centroids converge, the semi-axes start aligning, and (importantly) the semi-axes get stretched/squashed until they reach length 1. Again, this is for the full sample - the sketch is already "a perfect circle by construction". In other words, as we increase the sample size of the sketch, the full sample looks more and more like a unit circle in "sketch space".
    We can now quantify the quality of the approximation using the ratio of the full sample's semi-axes in "sketch space". If there are no relative errors (perfect approximation), this becomes the ratio of radii of a circle, which is always 1. Any other number is due to (relative) approximation error; lower is better, and it can't be less than 1. The claim now is that, even for small samples, this ratio is already low enough for practical use, i.e., sketches with just 10 points already yield good results.
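The coordinate change described above can be played with numerically. A small sketch of my own construction (assuming the "sketch" is a 10-point subsample): whiten with the sketch's covariance, then measure how circular the full sample looks in that space:

```python
import numpy as np

rng = np.random.default_rng(0)

# anisotropic 2D data: the "full sample"
X_full = rng.normal(size=(100_000, 2)) @ np.array([[2.0, 0.3], [0.3, 0.5]])
X_sketch = X_full[rng.choice(len(X_full), 10, replace=False)]  # tiny subsample

# whitening transform that maps the sketch's covariance ellipse to the unit circle
C_sketch = np.cov(X_sketch, rowvar=False)
W = np.linalg.cholesky(np.linalg.inv(C_sketch))  # W.T @ C_sketch @ W == identity

# the full sample's covariance ellipse, viewed in "sketch space"
C_full_in_sketch_space = W.T @ np.cov(X_full, rowvar=False) @ W
semi_axes = np.sqrt(np.linalg.eigvalsh(C_full_in_sketch_space))
print(semi_axes.max() / semi_axes.min())  # 1.0 would be a perfect approximation
```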

    • @firefoxmetzger9063
      @firefoxmetzger9063 6 months ago +6

      If you understand the above, then the high-dimensional part becomes clear as well: in N dimensions a "hyper-ellipsoid" has N semi-axes, and the claim is that for real (a.k.a. sparse) problems some of these semi-axes are really large and some are really small when measured in "problem space". Applied to the 2D ellipse you show at 29:45, this means that the primary axis becomes really large (stretches beyond the screen size) and the secondary axis becomes really small (squished until the border lines touch each other due to line thickness). This will make the ellipse plot "degenerate", and it will look like a line - which is boring to visualize.

    • @technoguyx
      @technoguyx 3 months ago +1

      Thanks for taking the time to type this, it's clear(er) to me now.

  • @scottmiller2591
    @scottmiller2591 7 months ago +13

    This was a nice walk down memory lane for me, and a good introduction to the beginner. It's nice to see SWE getting interested in these techniques, which have a very long history (like solving finite elements with diffusion decades ago, and compressed sensing). I enjoyed your video.
    A few notes:
    It's useful to note that "random" projections started out as Gaussian, but it turns out very simple, in-memory, transformations let you use binary random numbers at high speed with little to no loss of accuracy. I think you had this in mind when talking about the random matrix S in sketch-and-solve.
    BLAS sounds like blast, but without the t. I'm sure there's people who pronounce it like blahs. Software engineers mangle the pronunciation of everything, including other SWE packages, looking at you, Ubuntu users. However the first pronunciation is the pronunciation I have always heard in the applied linear algebra field.
    FORTRAN doesn't end like "fortune," but rather ends with "tran," but maybe people pronounce "fortran" (uncapitalized) that way these days - IDK (see note above re: mangling; FORTRAN has been decapitalized since I started working with it).
    Cholesky starts with a hard "K" sound, which is the only pronunciation you'll ever hear in NLA and linear algebra. It certainly is the way Cholesky pronounced it.
    Me, I always pronounce Numpy to sound like lumpy just to tweak people, even though I know better ☺. I've always pronounced CQRRPT as "corrupt," too, but because that's what the acronym looks like (my eyes are bad).
    One way to explain how these work intuitively is to look at a PCA, similar to what you touched on with the illustration of covariance. If you know the rank is low, then there will be, say, k large PCA directions, and the rest will be small. If you perform random projection on the data, those large directions will almost certainly show up in your projections, with the remaining PCA directions certainly being no bigger than they were originally (projection is always non-expanding). This means the random projections will still contain large components of the strong PCA directions, and you only need to make sure you took enough random projections to avoid being unlucky enough to accidentally be very nearly normal to the strong PCA directions every time. The odds of you being unlucky go down with every random projection you add. You'd have to be very unlucky to take a photo of a stick from random directions, and have every photo of the stick be taken end-on. In most photos, it will look like a stick, not a point. Similarly, taking a photo of a piece of paper from random directions will look like a distorted rectangle, not a line segment. It's one case where the curse of dimensionality is actually working in your favor - several random projections almost guarantee they won't all be projections to an object that's the thickness of the paper.
    I've been writing randomized algos for a long time (I have had arguments w engineers about how random SVD couldn't possibly work!), and love seeing random linear algebra libraries that are open and unit tested.
    I agree with your summary - a good algorithm is worth far more than good hardware. Looking forward to you tracking new developments in the future.
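On the point above about binary random numbers replacing Gaussians: a quick numerical check (my own toy example, not from the video) that a ±1 (Rademacher) projection preserves norms about as well as a Gaussian one:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 10_000, 400
x = rng.normal(size=d)

S_gauss = rng.normal(size=(k, d)) / np.sqrt(k)              # Gaussian projection
S_sign = rng.choice([-1.0, 1.0], size=(k, d)) / np.sqrt(k)  # ±1 entries only

# both ratios concentrate near 1; the sign matrix needs no Gaussian sampling
# and supports fast in-memory generation and multiplication
ratios = [np.linalg.norm(S @ x) / np.linalg.norm(x) for S in (S_gauss, S_sign)]
print(ratios)
```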

    • @Mutual_Information
      @Mutual_Information 7 months ago +6

      This is the real test of a video: when an expert watches it and, with some small corrections, agrees that it gets the bulk of the message right. It's a reason I try to roll in a subject matter expert where I can. So I'm quite happy to have covered the topic appropriately in your view. (It's also a relief!)
      And I also wish I had thought of the analogy: "You'd have to be very unlucky to take a photo of a stick from random directions, and have every photo of the stick be taken end-on. In most photos, it will look like a stick, not a point." I would have included that if I had thought of it!

    • @scottmiller2591
      @scottmiller2591 7 months ago

      @@Mutual_Information Agree absolutely!

    • @rileyjohnmurray7568
      @rileyjohnmurray7568 7 months ago +3

      Jim Demmel and Jack Dongarra pronounced it "blahs" the last time I spoke with each of them. (~This morning and one month ago, respectively.) 😉

    • @Mutual_Information
      @Mutual_Information 7 months ago +1

      @@rileyjohnmurray7568 lol

    • @scottmiller2591
      @scottmiller2591 7 months ago +1

      @@rileyjohnmurray7568 I hope they perk up ☺

  • @hozaifas4811
    @hozaifas4811 7 months ago +23

    We need more content creators like you ❤

    • @Mutual_Information
      @Mutual_Information 7 months ago +4

      Thank you. These videos take a while, so I wish I could upload more. But I'm confident I'll be doing YouTube for a long time.

    • @hozaifas4811
      @hozaifas4811 7 months ago +2

      @@Mutual_Information Well, this news made my day!

  • @Dagobah359
    @Dagobah359 7 months ago +38

    3:03 Linear algebra professor, here. Please stop teaching that it's the rows of matrices which are vectors. Yes, both rows and columns of matrices correspond to vectors in separate vector spaces, but when they don't have the full picture yet, beginning students should be thinking of the columns of the matrix as 'the' vectors. I've had to spend so much work fixing the perspective of engineers in their masters program who only think of the rows as vectors. It's much easier to broaden a student's perspective from columns to also rows, than it is to broaden their perspective from rows to also columns.

    • @rileyjohnmurray7568
      @rileyjohnmurray7568 6 months ago +7

      Thanks for sharing this perspective! I've heard something similar from a professor when I did my PhD, and I generally agree with it.
      That said, I think introducing row-wise is not so bad *in the specific context of this video.* It seems like the natural thing to do if we want to compare scalar-valued nonlinear functions to scalar-valued linear functions. So if you're in a time crunch and you need to explain the concept of linearity in one minute (and with few equations), then this approach seems not so bad.

    • @glad_asg
      @glad_asg 5 months ago

      idk, seems like skill issue.

  • @JoeBurnett
    @JoeBurnett 7 months ago +2

    You are an amazing teacher! Thank you for explaining the topic in this manner. It really motivates me to continue learning about all things linear algebra!

  • @wiktorzdrojewski890
    @wiktorzdrojewski890 7 months ago +2

    this feels like a good presentation topic for numerical methods seminar

  • @lbgstzockt8493
    @lbgstzockt8493 7 months ago +5

    Very good video on a very interesting topic. Who would have thought that there is this much to gain in such a commonly used piece of mathematics.

  • @Apophlegmatis
    @Apophlegmatis 6 months ago +10

    The nice thing is, with continuous systems (and everything in experienced life is continuous) the question is not "is it linear," but "on what scale is it functionally linear," which makes calculations of highly complex situations much simpler.

    • @Mutual_Information
      @Mutual_Information 6 months ago

      YES!

    • @woosix7735
      @woosix7735 5 months ago +2

      what about the Weierstrass function that isn't linear on any scale?

    • @Apophlegmatis
      @Apophlegmatis 5 months ago

      That is an excellent example as to why engineering uses approximations - we can closely model on a given scale any system using known functions.
      I did not know about that function before though, that's super interesting!

    • @pavlopanasiuk7297
      @pavlopanasiuk7297 1 month ago

      ​@@woosix7735 that isn't practical so far. What is practical though is phase transition evaluation, there you cannot approximate linearly

  • @GeorgeDole
    @GeorgeDole 1 month ago +1

    Bravo! ! As an LA math teacher and Linear Algebra student in college, you confirmed why children need to learn Algebra-1 with the infamous Quadratic formula to understand how Linear Algebra works and is necessary to understand A.I. Please do more Linear Algebra videos for high school and college students and other interested lay people. Many Thanks.

  • @Otakutaru
    @Otakutaru 7 months ago +1

    Adequate density of new information, and sublime narrative. Also, on point visuals

  • @braineaterzombie3981
    @braineaterzombie3981 7 months ago +1

    This is exactly what i needed. Subscribed

  • @Duviann
    @Duviann 7 months ago +5

    the quality of this editing is top notch, congratulations!!!

  • @AjaniTea
    @AjaniTea 7 months ago +1

    This is a world class video. Thanks for posting this and keep it up!

  • @chazhovnanian6897
    @chazhovnanian6897 5 months ago

    you've GOT to post more, this stuff is amazing, im still in high school but learning about so-called 'mature' processes which become completely revolutionised really inspires me, thanks for this :)

  • @aminelahlou1272
    @aminelahlou1272 7 months ago +2

    As a hobbyist mathematician, please don't say that f(x) is a function or, worse, a linear function.
    f(x) is a number in most of the cases you described. f, on the other hand, is a function, and f can be linear.

    • @aminelahlou1272
      @aminelahlou1272 7 months ago +1

      f(x) can be a matrix or even a function (what we call in computer programming a higher-order function), but I don't think that was the intended message

  • @yeetmaster8050
    @yeetmaster8050 28 days ago +1

    As a published computer scientist, your videos are awesome. Appreciate the honesty that you caveat details on topics you're not 100% sure on

    • @Mutual_Information
      @Mutual_Information 27 days ago

      Getting the experts to approve is the standard I aim for, so it means a lot to hear this from you - thank you!

  • @jondor654
    @jondor654 7 months ago +2

    Lovely type, great clarity.

  • @mohamedalmahditantaoui8422
    @mohamedalmahditantaoui8422 5 months ago

    I think you made the best numerical linear algebra video in the world; we really need more content like this. Keep up the good work.

  • @Francis-gg4rn
    @Francis-gg4rn 4 months ago

    this channel is GOLD, please keep it up we love you

  • @jliby1708
    @jliby1708 4 months ago +2

    Masterful explanation, going through the math and providing high-level abstractions of the concepts. It really helps to see how someone could arrive at a major discovery. Thanks!

  • @StratosFair
    @StratosFair 7 months ago

    As a grad student in theoretical machine learning, I have to say i'm blown away by the quality of your content, please keep videos like these coming !

  • @oceannuclear
    @oceannuclear 7 months ago

    Oh my god, this forms a small part of my PhD thesis, where I've been trying to understand LAPACK's advantages/disadvantages when it comes to inverting matrices. Having this video really helps me put things into context! Thank you very much for making this!

  • @damondanieli
    @damondanieli 7 months ago +7

    Great video! One thing: “processor registers” not “registries”

  • @the_master_of_cramp
    @the_master_of_cramp 7 months ago +2

    Great and clear video!
    Makes me wanna study more numerical LA... combined with probability theory,
    because it shows how inefficient many algorithms currently in use are, and that randomized algorithms are usually insanely faster while being approximately correct.
    So those randomized algorithms can basically be used anywhere we don't need to be 100% sure about the result (which is basically always, because our mathematical models are only approximations of what's going on in the world and thus inaccurate anyway, and, as you mentioned, if data is used, it's noisy).

  • @robharwood3538
    @robharwood3538 7 months ago

    Even just the history section of this video is *incredibly* valuable, IMHO. Thank you so much!

  • @bn8ws
    @bn8ws 7 months ago +2

    Outstanding content, instant sub. Keep up the good work!

  • @nikita_x44
    @nikita_x44 7 months ago +9

    linearity @ 4:43 is a different linearity. linear functions in the sense of linear algebra must always pass through (0,0)

    • @sufyanali3992
      @sufyanali3992 7 months ago +1

      I thought so too, the 2D line shown on the right is an affine function, not a linear function in the rigorous sense.

    • @KepleroGT
      @KepleroGT 7 months ago +1

      Yep, otherwise the linearity of addition and multiplication which he just skipped over wouldn't apply and thus wouldn't be linear functions, or rather the correct term is linear map/transformation. Example: F(x,y,z) = (2x+y, 3y, z+5), (0,0,0) = F(0,0,0) is incorrect because F(0,0,0) = (0,0,5). The intent is to preserve the linearity of these operations so they can be applied similarly. If I want 2+2 or 2*2 I can't have 5

  • @AlexGarel-xr9ri
    @AlexGarel-xr9ri 7 months ago

    Incredible video with very good animations and script. Thank you !

  • @moisesbessalle
    @moisesbessalle 7 months ago +6

    Amazing video!

  • @user-vr3ex1fj7c
    @user-vr3ex1fj7c 1 month ago

    Being a comp sci undergrad, I felt this vid was exceptionally well explained. It really gives me interest in diving deeper into the topic. Thanks!

  • @h.b.1285
    @h.b.1285 7 months ago +1

    Excellent video! This topic is not easy for the layperson (admittedly, the layperson that likes Linear Algebra), but it was clearly and very well structured.

  • @DawnOfTheComputer
    @DawnOfTheComputer 7 months ago +1

    The math presentation and explanation alone was worth a sub, let alone the interesting topic.

  • @gaussology
    @gaussology 7 months ago

    Wow, so much research went into this! It makes me even more motivated to read papers and produce videos 😀

  • @ihatephysixs
    @ihatephysixs 7 months ago +2

    Awesome video

  • @piyushkumbhare5969
    @piyushkumbhare5969 7 months ago +1

    This is a really well made video, nice!

  • @Stephen_Kelley
    @Stephen_Kelley 7 months ago +1

    Excellent video, really well paced.

  • @mohammadalaaelghamry8010
    @mohammadalaaelghamry8010 5 months ago +2

    Great video, very useful and very well presented. Thank you.

  • @tiwiatg2186
    @tiwiatg2186 7 months ago +1

    Loving it loving it loving it!! Amazing video, amazing topic 👏

  • @billbez7465
    @billbez7465 7 months ago +1

    Amazing video with great presentation. Thank you

  • @mohammedbelgoumri
    @mohammedbelgoumri 7 months ago +4

    No better way to start the day than with an MI upload 🥳

  • @iamr0b0tx
    @iamr0b0tx 7 months ago +4

    This is a really good video 💯

  • @from_my_desk
    @from_my_desk 7 months ago +1

    thanks a ton! this was eye-opening 😊

  • @razeo7068
    @razeo7068 5 months ago

    Amazing video. Had me hooked from start to finish. You gained a new subscriber

  • @baptiste-genest
    @baptiste-genest 7 months ago +4

    Great video! I had a compressive sensing class this semester; it sure is a very interesting and promising topic of research!
    But I'm not sure that video games would benefit a lot from it? If I understood correctly, the gains are mostly in high dimensions, while video game linear algebra is basically only 3D. Do you have examples? Thanks again!

    • @Mutual_Information
      @Mutual_Information 7 months ago +3

      Thank you! My take is that that’s only in a certain representation. E.g. when a dimension refers to a specific pixel, the dimensions are quite high.

  • @DocM221
    @DocM221 7 months ago +2

    I've been through some basic linear algebra courses, but the covariance problem struck me as obvious to a statistician. A statistician would never go and sample everybody; they would first determine how accurate they needed to be in their certainty, and then go about sampling exactly the number of people that satisfies that equation. I actually had to do this in my job! I can totally see how this will be a great tool used with data prediction and maybe hardware accelerators to make MASSIVE gains. We are in for a huge wild ride! Thanks for the video!
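The sample-size calculation described above is standard statistics. A tiny sketch (the function name and the known-sigma assumption are mine) for estimating a mean to a chosen margin of error:

```python
import math

def sample_size(sigma, margin, confidence=0.95):
    """Smallest n giving a confidence interval half-width <= margin,
    for a mean with known standard deviation sigma: n = (z*sigma/margin)^2."""
    # two-sided normal quantiles for a few common confidence levels
    z = {0.90: 1.6449, 0.95: 1.9600, 0.99: 2.5758}[confidence]
    return math.ceil((z * sigma / margin) ** 2)

# e.g. sigma = 15, want the mean within ±2 with 95% confidence
print(sample_size(sigma=15, margin=2))  # 217
```

The same logic is the spirit of sketching: decide the accuracy you need first, then sample only enough to hit it.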

  • @Alexander-pk1tu
    @Alexander-pk1tu 1 month ago

    you are very talented. I like your videos a lot. Keep up the hard work man!

  • @antiguarocks
    @antiguarocks 6 months ago +1

    Reminds me of what my high school maths teacher said about being able to assess product quality on a production line with high accuracy by only sampling a few percent of the product items.

  • @MachineLearningStreetTalk
    @MachineLearningStreetTalk 7 months ago +5

    Great video brother! 😍

    • @Mutual_Information
      @Mutual_Information 7 months ago

      Thank you MLST! You're among a rare bunch providing non-hyped or otherwise crazy takes on AI/ML, so it means a lot coming from you.

  • @KipIngram
    @KipIngram 7 months ago +6

    Fascinating. Thanks very much for filling us in on this.

  • @cupidonihack
    @cupidonihack 1 month ago

    Why did I not find your channel before! Some channels steal my viewing time, but yours makes me 10 years younger, discovering a new domain :) I feel I'm going to buy some books and papers again! Thank you!

  • @peterhall6656
    @peterhall6656 1 month ago

    It is decades since I delved into this stuff. You have excited me to crawl out from under the rock and read the papers.

  • @Ohmriginal722
    @Ohmriginal722 7 months ago +1

    Whenever randomness is involved, you've got me wanting to use analogue processors for fast and low-power processing.

  • @austincarter2177
    @austincarter2177 3 months ago

    I love how you explain things with no assumptions, but also don’t assume we know nothing. You let us walk in the field without getting lost in the weeds🤙

  • @novadea1643
    @novadea1643 2 months ago +1

    Very nice video, and it's indeed a very promising direction for the many applications where it doesn't matter that the answer is inexact, as long as it's correct to an acceptable error margin. I especially like the start with UE5, because games, and graphics in particular, have always been an area where using randomness and shortcuts to get "close enough, fast enough" is a priority. It'd be absolutely amazing to have a RandNLA library with basically a "Speed vs. Accuracy" slider.

    • @Mutual_Information
      @Mutual_Information  2 months ago

      I'm rooting for it. It may be a while, but considering the gains, I suspect it must arrive eventually.

  • @AMA_RILDO
    @AMA_RILDO 1 month ago

    I don’t know how long it took you to create this video, with its fancy slides and examples, but everything is so well explained that I wish I had you as my teacher

  • @nonamehere9658
    @nonamehere9658 7 months ago +5

    The trick of multiplying by random S in argmin (SAx-Sb)^2 reminds me of the similar trick in the Freivalds' algorithm: instead of verifying matrix multiplication A*B==C we check A*B*x==C*x for a random vector x.
    Random projections FTW???
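The same trick can be sketched in a few lines of NumPy; this is a toy illustration of Freivalds' check (the matrix sizes and 10-round budget are arbitrary choices):

```python
import numpy as np

def freivalds(A, B, C, rounds=10, seed=0):
    """Probabilistically verify that A @ B == C.

    Each round draws a random 0/1 vector x and compares A @ (B @ x)
    with C @ x: O(n^2) work per round instead of an O(n^3) multiply.
    An incorrect C survives one round with probability at most 1/2,
    so it survives all rounds with probability at most 2**-rounds.
    """
    rng = np.random.default_rng(seed)
    for _ in range(rounds):
        x = rng.integers(0, 2, size=B.shape[1])
        if not np.allclose(A @ (B @ x), C @ x):
            return False  # mismatch found: definitely A @ B != C
    return True  # no mismatch found: A @ B == C with high probability

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 10))
B = rng.standard_normal((10, 10))
ok = freivalds(A, B, A @ B)         # accepts the true product
bad = freivalds(A, B, A @ B + 1.0)  # rejects a corrupted product
```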

  • @plfreeman111
    @plfreeman111 6 months ago +1

    "And if you aren't, you're probably doing something wrong." So very very true. Don't roll your own NLA code. You won't get it right and it certainly won't be faster. The corollary is "If you're inverting a matrix, you're probably doing something wrong." But that's a different problem I have to solve with newbies.

  • @dylanwattles7303
    @dylanwattles7303 5 months ago +1

    Just found this channel, great work

  • @wafikiri_
    @wafikiri_ 7 months ago

    The first program I fed a computer was one I wrote in FORTRAN IV. It almost exhausted the memory capacity of the IBM machine, which was about 30 KB for the user (it used memory overlays, which we'd call banked memory today, in order not to exceed the memory available for programs).

  • @0x4849
    @0x4849 6 months ago

    Some small corrections:
    At 4:50, assuming the plotted values follow y = f(x), f is actually not linear, since the graph shows f(0) ≠ 0 (it is affine).
    At 8:22, you incorrectly refer to the computer's registers as "registries", but more importantly, data-access speed depends much more on cache size than on register size, since a register generally holds only 1-4 values (e.g. 32-bit floats in a 128-bit register), which allows SIMD but is very restrictive. A computer's cache is an intermediate between the CPU and main memory which, if used efficiently, can indeed greatly reduce runtime.

  • @johannguentherprzewalski
    @johannguentherprzewalski 7 months ago

    Very interesting content! I did find that the video felt longer than expected. I was intrigued by the thumbnail and the promise of at least 10x speed improvement. However, it took quite a while to get to the papers and even longer to get to the explanation. The history definitely deserves its own video and most chapters could be much shorter.

  • @WhiteGandalfs
    @WhiteGandalfs 7 months ago

    Let me try to phrase it for people with an engineering-level rather than a math-degree education: you effectively select the best-fitting equations of the linear problem, which is originally highly overdetermined, so that a small subset of the original equations sufficiently represents the complete system for your x vector. Correct? That's not so much "inducing random noise" as simplification by omitting probably-irrelevant equations.
    This reminds me of how we did such a scheme for a "bundle block adjustment" application: we used the drastic performance boost from simplification to run multiple simple BBAs within each reaction step of the system on different, drastically simplified subsets of the data, compared the results against the expected outcome (low residual error, good alignment with the continuation of the coordinates of our x vector from the previous step), performed a final selection based on those outcomes, and then did a final error-minimizing solve with the selected equations. That gives the best of both worlds: speed-up without sacrificing correctness.
    And there is no magic at all (and no "introduced random noise") - just a "try simple" first iteration, then a selected final iteration based on it. Basically engineering optimization built on working standard linear-algebra systems.
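A toy version of that two-stage idea (a cheap first pass on a random subset of equations, then a final error-minimizing solve over the full system) might look like the following; the sizes, noise level, and use of np.linalg.lstsq are illustrative assumptions, not the bundle-block-adjustment pipeline described above:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5000, 10          # heavily overdetermined: 5000 equations, 10 unknowns
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.1 * rng.standard_normal(m)

# Stage 1: fast approximate solve using only 200 randomly chosen equations.
idx = rng.choice(m, size=200, replace=False)
x0, *_ = np.linalg.lstsq(A[idx], b[idx], rcond=None)

# Stage 2: refine the stage-1 estimate against the *full* system,
# so the final answer minimizes the residual over all m equations.
dx, *_ = np.linalg.lstsq(A, b - A @ x0, rcond=None)
x = x0 + dx
```

As written, stage 2 is still a full least-squares solve, so the result matches a direct solve on all m equations; the gains in a real pipeline come from making that final step cheaper, e.g. by restricting it to the selected equations.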

  • @JonathanPlasse
    @JonathanPlasse 7 months ago +1

    Awesome presentation, thank you!

  • @cannaroe1213
    @cannaroe1213 7 months ago +1

    Nearly 7 years ago when I was still a practicing geneticist, sequenced DNA would usually only be a few nucleotides long, maybe 50, and it would have to get mapped to a genome with billions of possible locations to test. The fastest algorithms ended up being used in the most published papers, so competition was pretty fierce to be the fastest.
    The gold standard was a deterministic program called BWA/Bowtie, but just before I left the field a new breed of non-deterministic aligners with mapping times orders of magnitude faster were developed, and it really split opinions. Different deterministic programs would give different results (i.e. they had noise/error too, even if they're consistent about it), so in many ways who cared if a program gave different results every time you ran it, particularly if you only intend to run it once...
    But there were other problems. You couldn't create definitive analyses anymore, you couldn't retrace someone else's steps, you couldn't rely on checksums, total nightmare.
    The "hidden structures" aspect of the paper was interesting, the structures are in the data, and how the algorithm interacts with the data, which as the programmer you don't have access to by definition - but you also kinda know all you need to know about it too. It feels very similar to making a good meme.

  • @psl_schaefer
    @psl_schaefer 7 months ago +2

    As always great (very educative) content. I very much appreciate all the work you put into those videos!

  • @ЕрмаханСабыржан
    @ЕрмаханСабыржан 7 months ago +1

    It's really mind-blowing how random numbers can make something so fast.

  • @JHillMD
    @JHillMD 6 months ago

    What a terrific video and channel. Great work! Subbed.

  • @pythonguytube
    @pythonguytube 7 months ago

    Worth pointing out that there is a modern sparse linear algebra package called GraphBLAS, which can be used not just for graphs (which generalize to sparse matrices) but for any sparse matrix multiplication operation.

  • @Geenimetsuri
    @Geenimetsuri 6 months ago +1

    I understood this. Thank you, great education!

  • @tildarusso
    @tildarusso 7 months ago +2

    Does it mean randomly sampling rows from the tall matrix, with a low least-squares error as the loss indicator (a Monte Carlo method?), and then working in later stages with the newly constructed "short matrix" - e.g. a full-rank square matrix with the dimension of the original columns?

    • @Mutual_Information
      @Mutual_Information  7 months ago

      I see what you're going for, and that's in the style of some of the approaches taken when the field was younger. 'row-sampling' or 'column-sampling'. The problem is, the relevant information isn't uniformly distributed (a row of all zeros isn't the same as a row with wildly different numbers). So you have to mix the information from rows or columns together, and that's what is essentially done with the matrix multiplication SA.

    • @tildarusso
      @tildarusso 7 months ago

      @@Mutual_Information Agreed. Distribution shift is the bad guy here. But uniform sampling is the best choice, since you will never know the real distribution.
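The row-mixing idea in this thread can be illustrated with a plain Gaussian sketching matrix S, which blends every row of A into each of the s compressed equations; the dimensions are arbitrary, and np.linalg.lstsq stands in for a production RandNLA solver:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, s = 20_000, 20, 400             # tall system, sketch size s << m
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.01 * rng.standard_normal(m)

# Each of the s sketched equations is a random mix of *all* m original
# rows, so no row's information is simply discarded.
S = rng.standard_normal((s, m)) / np.sqrt(s)

x_full, *_ = np.linalg.lstsq(A, b, rcond=None)            # all m equations
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)  # s mixed equations

rel_err = np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full)
```

With a sketch size of roughly 20 times the number of unknowns, the sketched solution here lands very close to the full solution at a fraction of the row count.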

  • @maxheadrom3088
    @maxheadrom3088 6 months ago

    Nice video! Nice channel! The complicated part isn't multiplying ... it's inverting!

  • @General12th
    @General12th 7 months ago +3

    Hi DJ!
    I love improvements in algorithmic efficiency.

  • @HelloWorlds__JTS
    @HelloWorlds__JTS 7 months ago

    Phenomenal! But I have one correction for (25:33): Full rank isn't restricted to square [invertible] matrices, it just means rank = min(m,n) rather than rank = k < min(m,n).

  • @pygmalionsrobot1896
    @pygmalionsrobot1896 7 months ago +2

    Whoa - very cool stuff !!

  • @vNCAwizard
    @vNCAwizard 7 months ago +1

    An excellent presentation.

  • @EvanMildenberger
    @EvanMildenberger 1 month ago +1

    Steve Brunton mentioned! 🎉 His channel is awesome

  • @TheBojda
    @TheBojda 7 months ago +1

    Do you know LightOn's OPU (Optical Processing Unit)? It can do one simple thing, multiplying by a random matrix, but it can do it lightning fast (because it uses light). It seems like ideal hardware for this.