Mojo - the BLAZINGLY FAST new AI Language? | Prime Reacts

  • Published May 8, 2023
  • Recorded live on twitch, GET IN
    / theprimeagen
    Demo
    • Jeremy Howard demo for...
    Fireship
    • Mojo Lang… a fast futu...
    MY MAIN YT CHANNEL: Has well edited engineering videos
    / theprimeagen
    Discord
    / discord
    Have something for me to read or react to?: / theprimeagenreact
  • Science & Technology

COMMENTS • 433

  • @jaysistar2711
    @jaysistar2711 1 year ago +992

    If you want any language to have good performance numbers, then compare it to Python.

    • @ThePrimeTimeagen
      @ThePrimeTimeagen 1 year ago +256

      classic w

    • @anon-fz2bo
      @anon-fz2bo 1 year ago +16

      Lol facts

    • @JakobKenda
      @JakobKenda 1 year ago +54

      MY language (slj) is 4x faster than python (really)
      (it's 18x slower than C tho)

    • @suarezlifestyle
      @suarezlifestyle 1 year ago +4

      Python is not fast lol, do better research

    • @rjandonirahmana4326
      @rjandonirahmana4326 1 year ago +68

      @@suarezlifestyle exactly, that's why you gotta compare it with python

  • @yeahmanitsmurph
    @yeahmanitsmurph 1 year ago +272

    It's not about the size of your SIMD, it's what you do with it

    • @ThePrimeTimeagen
      @ThePrimeTimeagen 1 year ago +75

      facts

    • @SpaceChicken
      @SpaceChicken 1 year ago +35

      As long as it can measure dicts, I’m happy.

    • @RickGladwin
      @RickGladwin 1 year ago +15

      @@SpaceChicken Look let’s not turn this comparison video into a dict measuring contest.

    • @MrR8686
      @MrR8686 1 year ago

      If it performs better in the worst case, like for loops, isn't that a benefit?

    • @tornado3842
      @tornado3842 1 year ago

      @@SpaceChicken Salutations, extraterrestrial avian

  • @farqueueman
    @farqueueman 1 year ago +284

    I'm looking forward to mojo. Anything Lattner touches turns to gold.

    • @kebman
      @kebman 1 year ago +4

      Mhm how does it shake you?

  • @catcatcatcatcatcatcatcatcatca
    @catcatcatcatcatcatcatcatcatca 1 year ago +274

    Can’t wait for python programmers to evolve into Mojo programmers who just never use any of the new stuff, but now can say the language they write in is using modern process optimisations and cache-efficient data structures.
    Kinda like C++

    • @aleph0540
      @aleph0540 1 year ago +5

      LMAO why violate them like that!

    • @aoeu256
      @aoeu256 1 year ago +9

      Did people forget about Julia & LISP? You could easily use macros (compilers) to turn one high-level S-expression format into an intermediate S-expression format and then into C S-expressions. Even in Python today you have @numba.jit(nopython=True) to take static-looking Python that reads like C code and compile it.

    • @aoeu256
      @aoeu256 1 year ago

      Why don't people write their code in terms of relational constraints? Like: I need this and this and this, then use chatbots and solvers to generate code and custom hardware for your application. A good way of transforming your constraints into code is Lisp's s-expressions and quasiquoting, which sounds like Julia.

    • @aleph0540
      @aleph0540 1 year ago +2

      @@aoeu256 Too complex; high-level scripting is the closest you'll get to that. You're talking about something that exceeds semantic language and symbolic language: conceptual language. Doable, but it will likely involve probability. Don't think LLMs are it though.

    • @vectoralphaAI
      @vectoralphaAI 11 months ago +2

      From Pythonistas -> Mojicians.

  • @mateusvmv
    @mateusvmv 1 year ago +96

    This new watchmojo language is looking really cool, wish I could use it to compile rust

  • @catcatcatcatcatcatcatcatcatca
    @catcatcatcatcatcatcatcatcatca 1 year ago +117

    Cache-line sized vectors being their own type is a pretty brilliant idea. It probably allows even better performance than doing it manually, and it also reduces typing.

    • @ChamplooMusashi
      @ChamplooMusashi 1 year ago +25

      Aligning types to cache sizes is a great optimization in general. People most often think of optimizations in terms of big-O, algorithmic order-of-magnitude things, but the reality is that making something 8x faster is a substantial speedup. Even making something 10 or 30% faster shaves minutes off a job that takes an hour.

  • @markusmachel397
    @markusmachel397 1 year ago +80

    Really cool that prime uploads these gems. I can't watch the stream since twitch is blocked by my work computer.

    • @ThePrimeTimeagen
      @ThePrimeTimeagen 1 year ago +23

      yayaya

    • @thingsiplay
      @thingsiplay 1 year ago +6

      Me too. While Twitch is not blocked for me, I choose not to watch anything on Twitch. And I prefer downloadable, dedicated video snippets with proper titles anyway.

    • @SmirkInvestigator
      @SmirkInvestigator 1 year ago +1

      Yeah, my parasocial circuitry is probably broken. I just want the info nuggets. I appreciate the personality of a non-scripted live event though.

  • @copper280z
    @copper280z 1 year ago +43

    Numba, a JIT compiler package for python, seems to do a good portion of what Mojo promises. I regularly get big speedups over numpy using it, particularly because it can auto-parallelize both native python loops and many numpy function calls.

    • @ckmichael8
      @ckmichael8 10 months ago +5

      That is basically Cython with some vectorization steroids, which could be implemented in Cython given engineering resources.

    • @yeetdeets
      @yeetdeets 9 months ago +5

      @@ckmichael8 But Numba doesn't require as high an IQ. If you can use numpy you can get C-ish performance in a single function with just a decorator. It's finicky with any argument that isn't a boolean, a number, or a numpy vector thereof, though.

    • @ckmichael8
      @ckmichael8 9 months ago +3

      @@yeetdeets Yes, you are right. I think the use case for Cython and Mojo alike is things that Numpy does not support yet, like new algorithms that cannot be efficiently expressed in existing Numpy functions. If there is a numpy way of doing it, then Numba is certainly the better way. But for research things like new ML algorithms there may be no existing implementation available at all, so a Cython/Mojo implementation would be required.

  • @rybavlouzi
    @rybavlouzi 1 year ago +27

    Good stuff dude, I find your content very unique in the land of devs on YouTube. Keep it up!

  • @BlueCodesYep
    @BlueCodesYep 1 year ago +1

    Can't wait for this, sounds awesome, and appreciate your videos dude, always a fun watch.

  • @zuma206
    @zuma206 1 year ago +17

    Great content as always, keep up the good work man!

  • @Idlecodex
    @Idlecodex 1 year ago +42

    Hey, on tiling: this is necessary to keep the processor cache hot. The classical example is inverting the index of the two loops in a matrix-vector multiplication. The parallel algorithms for the same operation can be tuned by sizing the chunk of the matrix you are operating on. This becomes even more critical when you add another level of locality by using an accelerator like a GPU or when working in an MPI cluster.
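A minimal pure-Python sketch of the loop-order point above (illustrative only; the cache effect is far more dramatic in compiled code like Mojo or C, where interpreter overhead doesn't dominate):

```python
def matvec_row_major(A, x):
    """y = A @ x, outer loop over rows: the inner loop walks contiguous memory."""
    n, m = len(A), len(x)
    y = [0.0] * n
    for i in range(n):          # outer loop over rows
        row = A[i]
        s = 0.0
        for j in range(m):      # sequential, cache-friendly access
            s += row[j] * x[j]
        y[i] = s
    return y

def matvec_col_major(A, x):
    """Same result, but the outer loop over columns strides across rows."""
    n, m = len(A), len(x)
    y = [0.0] * n
    for j in range(m):          # outer loop over columns
        xj = x[j]
        for i in range(n):      # each step jumps to a different row
            y[i] += A[i][j] * xj
    return y
```

Both orderings compute the same result; on a compiled target the first one is noticeably faster for large matrices because each cache line is fully consumed before it is evicted, which is exactly what tiling generalizes.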

    • @spicybaguette7706
      @spicybaguette7706 1 year ago +2

      You gotta feed the beast, especially when you're extracting all the juice out of your CPU with SIMD

  • @spazioLVGA
    @spazioLVGA 1 year ago +28

    Amazing stuff. Still I wonder... how is it fair to compare Mojo with plain python when numpy is basically a part of python itself at this point? Numpy often outperforms even Julia (for large arrays).

    • @Navhkrin
      @Navhkrin 5 months ago +2

      They also have comparisons to optimized numpy implementations and still achieve 2.5x over numpy. Also note that Mojo is being built as a heterogeneous language, meaning it should be pretty straightforward to utilize a GPU or other accelerators. Having all this in one single coherent package is a very big deal.

  • @jaysistar2711
    @jaysistar2711 1 year ago +20

    f32 is directly supported in almost all SIMD ISAs. f64 halves the number of lanes (in 128 bits you can fit 4 32-bit floats, but only 2 64-bit floats).
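The lane arithmetic above is easy to check with the standard-library array module (the 128-bit register width is an assumption, matching SSE/NEON; AVX2 would be 256 bits and AVX-512 512):

```python
from array import array

REGISTER_BITS = 128  # assumed SIMD register width (SSE/NEON)

f32 = array('f')  # IEEE-754 single precision: 4 bytes in CPython
f64 = array('d')  # IEEE-754 double precision: 8 bytes

lanes_f32 = REGISTER_BITS // (f32.itemsize * 8)
lanes_f64 = REGISTER_BITS // (f64.itemsize * 8)

print(lanes_f32, lanes_f64)  # 4 2 -> twice the lanes, twice the work per instruction
```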

    • @ThePrimeTimeagen
      @ThePrimeTimeagen 1 year ago +10

      ah, very interesting

    • @fakenameforgoogle9168
      @fakenameforgoogle9168 1 year ago +8

      @@ThePrimeTimeagen a lot of ML programs use F16 as well, but that might be more related to memory savings than speed

    • @jaysistar2711
      @jaysistar2711 1 year ago +1

      @@fakenameforgoogle9168 While it's obvious that it's smaller, the real savings are in speed. GPUs commonly use F16 for both reasons.

    • @isodoubIet
      @isodoubIet 1 year ago +1

      @@fakenameforgoogle9168 Even f8 recently

  • @samhughes1747
    @samhughes1747 1 year ago +32

    In Rust, you can pin a shared buffer, and dispatch slices from it to each core. That’s basically what I’m expecting that Mojo code to actually be doing.

  • @Imaltont
    @Imaltont 1 year ago +8

    Kind of reminds me of Common Lisp with one of the several approaches to integrating python. For integrating python into a faster environment ofc, not the syntax. SBCL even has a lot of nice SIMD stuff, native threads and green threads, and nice interactive development and debug tools. You can also optionally declare types, which does impact performance. It could use a better package manager and some better project tools.

  • @djixi98
    @djixi98 1 year ago +12

    Did some 5-min optimizations using numpy and got it to be 1400-1800x faster than the example he provided. Still, if I can continue to code in python, make it faster, and have strong types, then I see this as an absolute win lol
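One way to reproduce that kind of quick numpy win (a sketch, assuming numpy is installed; the exact speedup depends on matrix size and hardware):

```python
import numpy as np

def matmul_loops(A, B):
    """Naive triple loop - the interpreted style the Mojo demo benchmarks against."""
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                C[i, j] += A[i, p] * B[p, j]
    return C

rng = np.random.default_rng(0)
A = rng.random((64, 64))
B = rng.random((64, 64))

C_fast = A @ B               # vectorized: dispatches to compiled BLAS
C_slow = matmul_loops(A, B)  # pure-Python loops over the same data
assert np.allclose(C_fast, C_slow)  # same result, wildly different speed
```

Timing the two with `timeit` puts them orders of magnitude apart on typical hardware, which is where numbers like 1400-1800x start to come from as the matrices grow.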

  • @ryanfav
    @ryanfav 1 year ago +18

    If it lowers the difficulty of making code that can run on GPUs and mixed use cases, I'm all for it. Still, it being signup-only feels very weird right now.

    • @SpaceChicken
      @SpaceChicken 1 year ago +1

      You sign up for the playground right now, if I read the site correctly, not the language. It made more sense to me after trying to jump in myself.

    • @zeyadkenawi8268
      @zeyadkenawi8268 1 year ago

      hopefully it's not gonna be proprietary

  • @ruanpingshan
    @ruanpingshan 1 year ago +6

    Anyone else notice that their Python performance benchmarks are for Python 3.10? Python 3.11 is supposed to have some major speed improvements.

    • @aoeu256
      @aoeu256 1 year ago

      For Python 2 you could have used psyco; there's numba, and julia, and stuff

  • @user-jm5is4fd3q
    @user-jm5is4fd3q 1 year ago +28

    My team works on Tammy AI. Does Mojo have an API we can test?

  • @Chalisque
    @Chalisque 1 year ago +8

    Basically, if you exclude a chunk of what Python can express, what remains can be made very efficient. So add a little syntax to allow you to ring-fence stuff that you want optimised. Makes a lot of sense.

  • @fanshaw
    @fanshaw 1 year ago +9

    Python is chosen because of its ease of use and libraries which take care of things for us. If we add all these specialist language constructs back into it, have we just undone that ease of use; is it still easily understandable; or does it provide a reasonable pathway from noob to expert?

    • @Loanshark753
      @Loanshark753 4 months ago

      Probably the idea is that it can be used by those creating the language and libraries. Currently many python libraries are implemented in C, C++ and Fortran. If it is possible to write fast Mojo code, the libraries could just be written in that, reducing the hurdle of linking different programming languages.

  • @oeerturk
    @oeerturk 1 year ago +13

    i think mojo is basically a proof of concept/best showcase for MLIR, and what better accessible lang to superset, to be honest. very exciting project, and also very curious and excited about what mlir can accomplish for other languages.

  • @olafbaeyens8955
    @olafbaeyens8955 1 year ago +5

    It is only very fast if you have very fast hardware for it.
    Auto-tune may work in the bootstrap code that measures which settings give you the fastest result. And maybe this could be changed dynamically while your project is running.

  • @kennethbeal
    @kennethbeal 1 year ago

    You are fun. Thank you for bringing a smile to my face.

  • @MohamedElzahed89
    @MohamedElzahed89 1 year ago +26

    He's a well-known person in the deep learning community, but I would say that to compare, you could benchmark numpy vs mojo for matrix multiplications, dot products, etc.

    • @EwanMarshall
      @EwanMarshall 1 year ago +7

      And try comparing to a cython compile too.

    • @MohamedElzahed89
      @MohamedElzahed89 1 year ago +1

      @@EwanMarshall yea, it's always easier said than done, but let's hope that works

    • @aoeu256
      @aoeu256 1 year ago +1

      Also @numba.jit(nopython=True), @rpython, @julia (if it exists)...

  • @magfal
    @magfal 7 months ago +1

    7:20 I have used it a lot, through a custom Postgres extension written in Rust using the awesome PGRX framework.
    When you've got a good fit for SIMD and the tools to easily apply it, the performance improvement is like going from Python to C#.

  • @u9vata
    @u9vata 1 year ago +26

    No, you are wrong about f32 and f64. On the CPU side of things (which the example is on), float is basically always faster, and games, CAD, and high-perf code all use float instead of double unless precision errors crop up. This is true even without SIMD, but with SIMD it's even more pronounced, because you can do 2x as many operations on floats as on doubles, so doubles literally halve the speed.
    Also, in the past there were cases where 8-bit loads were slower than 64-bit ones, because certain (mostly RISC) CPUs could not address smaller units of memory. But even there this has usually stopped at 32 bits, so you can manipulate 32-bit integers just like 64-bit ones, even on ARM. So even for integers it is worthwhile to use 32 bits instead of 64 - for example, there are builds of 64-bit Linux kernels that enable 32-bit pointers, and those are much better if you have no more than 4GB of memory but otherwise want 64-bit ops. Also, in many of my high-perf codebases I tend to store indices instead of pointers, because indices fit in 32 bits, so the "pointer-ish" part of the data eats literally half the data cache - and yes, memory is plentiful, but caches are very limited.
    What you say had some merit in the past, but in current state-of-the-art high-perf optimized codebases it is actually bad advice to use double - unless, of course, float errors kill your algorithm.
    The story also differs a lot on GPUs, but traditionally GPUs also massively favor float over double, so in most cases it's faster there too. I don't follow every GPU architecture for CUDA and such, but sometimes the bigger type is better - like on GPUs that do not support float16, where it gets emulated with float32, which is bad. If the GPU does support float16 and such formats, however, those can be immensely faster for machine learning if float errors allow it, so code usually just asks the API whether 16-bit floats are supported and uses them if so.
    It's good to have this language, because python is extraordinarily slow... Extremely... Only good for glue-code kind of fast hacking, but sometimes even the glue part is slow, so this is a good development. I don't know what the results would be, though, if he compared to, let's say, Cython or something that is compiled ahead of time...
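The indices-instead-of-pointers trick described above can be sized with the standard-library array module (itemsize values are CPython's on mainstream 64-bit platforms):

```python
from array import array

N = 100_000  # number of links in some pointer-heavy structure

refs64 = array('q', range(N))  # 'q': signed 64-bit - what a real pointer costs per link
refs32 = array('i', range(N))  # 'i': C int, 4 bytes - a 32-bit index into a flat table

bytes64 = refs64.itemsize * len(refs64)
bytes32 = refs32.itemsize * len(refs32)
print(bytes64 // bytes32)  # 2 - the "pointer-ish" data halves, so twice as much fits in cache
```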

    • @Ty4ons
      @Ty4ons 1 year ago +3

      From what I know about GPUs, FP32 is the main focus in gaming while FP64 is for compute applications. Gaming cards often lock down FP64 performance, so you need to buy a workstation card to get the full performance. Sometimes different architectures are used: right now for AMD, RDNA is optimized for gaming with slow FP64 while CDNA is optimized for compute with very fast FP64. I think some consumer GPUs like Intel's have no hardware FP64, so it isn't used much in client applications.
      It is my understanding that lower precision is becoming more and more important thanks to its use in machine learning, with architectures getting improved performance for FP16, INT8 and even FP8 on Nvidia Hopper.

  • @samhughes1747
    @samhughes1747 1 year ago +1

    I’m sure it’s already been pointed out, but SIMD instructions are sized specific to the registers they can handle, and some architectures aren’t actually flexible-if you don’t have data that fills the register when issuing on a GPU, then you pad with 0’s.

  • @nekomakhea9440
    @nekomakhea9440 1 year ago +1

    25:00 "will work on exciting projects like Excel spreadsheets, data entry, and *building hyper-intelligent armed robots* "

  • @VivBrodock
    @VivBrodock 2 months ago +1

    ngl as someone learning python as part of my topology degree mojo looks really tempting
    especially once they busted out the Mandelbrot
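For reference, the Mandelbrot benchmark from the demo boils down to an escape-time loop like this (a plain-Python sketch, not the demo's actual code) - exactly the kind of tight numeric loop where Mojo's speedups show up:

```python
def mandelbrot_escape(c, max_iter=100):
    """Return how many iterations z = z*z + c takes to escape |z| > 2."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter  # never escaped: c is (probably) in the set

# Points inside the set exhaust the budget; points outside bail out fast.
print(mandelbrot_escape(0j), mandelbrot_escape(2 + 0j))  # 100 1
```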

  • @jackalph.d801
    @jackalph.d801 9 months ago +1

    This is the first I'm hearing about mojo. I wonder how it compares in performance to julia, which is really supposed to target the same audience (in most ways) and has been around for a little bit longer. I have used julia for a while and it is very easy to write incredibly performant code, often the compiler is good enough to do some basic simd on the arrays you pass in. I would love to see how they go toe to toe.

  • @Kknewkles
    @Kknewkles 1 year ago +14

    Hey Prime, serious question (forgive me for asking before watching the video): I've heard Mojo's aiming to replace Python as the AI "vehicle" language - but what's the point if the heavy lifting is done by the CUDA/GPU stuff? How much can you realistically speed things up (5, maybe 10%) by replacing the non-GPU-related parts?

    • @TCH534
      @TCH534 1 year ago +1

      CUDA is going to be for the AI work in python.

    • @Kknewkles
      @Kknewkles 1 year ago +2

      @@TCH534 yes, the "heavy lifting" I referenced. All the big bulky matrix multiplication stuff is done on GPUs, and is the vast majority of any workload. Python is there just as a high-level script for ease of use.

    • @jereziah
      @jereziah 1 year ago +4

      it's not free to call other languages, especially from python, nor are the type conversions (which is why the 'serious' python libraries force you to commit to types). The beauty of mojo will be for researchers to set fire to fewer trees, with less effort.

    • @Kknewkles
      @Kknewkles 1 year ago +2

      @@jereziah that's not gonna make the whole thing thousands or even hundreds of times faster.
      It's gonna make 5-10% tens, maybe hundreds of times faster.

    • @markusmachel397
      @markusmachel397 1 year ago +2

      @@Kknewkles isn't like 10% a huge improvement when you are dealing with gigantic models?

  • @pif5023
    @pif5023 1 year ago

    Is there a book that explains these higher-level concepts like vectorization, SIMD, unrolling, ... or are they just the result of experience and mathematical reasoning?

  • @mario7501
    @mario7501 1 year ago +7

    This is awesome and I'm really looking forward to when it gets released. But it is a marketing stunt. You should compare it to something like numpy with multithreading. Probably still 10-50x faster, but no one who has the slightest idea about numerical calculations in python uses for loops.

  • @RipazX
    @RipazX 1 year ago +2

    I can't wait for the next stage of new programming languages, like Writescript, a superset of Typescript; C+++, a superset of C++; and let's not forget GoGo, a superset of Go.

  • @some1and297
    @some1and297 9 months ago +1

    5:45 most operations you do with just floats (if you are actually writing low-level fast code) are probably going to be memory-bottlenecked, so even if in theory an operation takes a few extra machine cycles, from my understanding it could still be faster to use f32s, because they take up half the memory bandwidth of f64s.

  • @jesse9999999
    @jesse9999999 1 year ago +19

    at the beginning of this video i'm really hoping this makes me want to use my mojo playground access, but there is fear in my heart

  • @grantwilliams630
    @grantwilliams630 1 year ago +2

    I also wrote a ton of heuristic optimization algorithms like 8 years ago, but mine were in Matlab...

  • @mohammedalmahdiasad6832
    @mohammedalmahdiasad6832 1 year ago

    yes, Austin Powers keeps coming back to my head in the last week or so

  • @nexovec
    @nexovec 1 year ago +1

    That's a VERY sexy product they got there. I really need my manual memory management and the syntax is still stupid, but it's a step in the right direction I think.

  • @petereriksson6760
    @petereriksson6760 1 year ago

    Can you compile it to something shippable, or do you have to ship the source or run it in the cloud as with Python?

  • @PurpleDaemon_
    @PurpleDaemon_ 1 year ago +1

    22:53 just to note, python has __slots__ for static classes.
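For anyone unfamiliar, `__slots__` is the standard Python feature being referred to - it fixes a class's attribute layout and drops the per-instance `__dict__`:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

class SlottedPoint:
    __slots__ = ("x", "y")  # fixed attribute layout, no per-instance __dict__
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)
s = SlottedPoint(1, 2)

assert hasattr(p, "__dict__")      # regular instances carry a dict
assert not hasattr(s, "__dict__")  # slotted instances don't: smaller, faster lookups

try:
    s.z = 3                        # and new attributes can't be bolted on
except AttributeError:
    print("no dynamic attributes on slotted classes")
```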

  • @di4352
    @di4352 1 year ago +2

    Did I miss something? Where in the code examples did it imply a 35,000x speedup? I'm only seeing ~4,000x at most. Not a dig, but just where is it? Also, does the 14x speedup imply that the differences in hardware between Fireship's and the other devs' computers affected the outcome of their code tests?

    • @ParanoicoBR
      @ParanoicoBR 1 year ago

      it's kinda fishy, but the 35,000x speedup was for the mandelbrot set algorithm, which they glossed over

  • @henriquemarques6196
    @henriquemarques6196 1 year ago +3

    as a web developer, this video seemed like Egyptian hieroglyphs to me ngl

  • @yyy5523
    @yyy5523 1 year ago +3

    We can use numpy and do that matrix multiplication without using any for loops. Also numpy arrays are faster than normal lists.

    • @DataPastor
      @DataPastor 1 year ago +4

      True, but numpy is actually written in C, C++, Cython and Fortran, and this is the point: how can you author fast Python libraries/code without using these languages?

    • @yyy5523
      @yyy5523 1 year ago +1

      @@DataPastor yes

  • @Rin-qj7zt
    @Rin-qj7zt 10 months ago

    The autotune feature was low-hanging fruit. I'm actually stunned it's a new concept to people, because it's such an old one for me. I just assumed the compiler was already doing it.

  • @LukeDickerson1993
    @LukeDickerson1993 1 year ago +1

    would it be possible for an AI to track your eye movements to see which live-comment a streamer is reading, and hold it in view as unread comments scroll by?

    • @spell105
      @spell105 6 months ago +1

      You don't need AI for something like that. Eye tracking has been around forever.

  • @NathanaCentauri
    @NathanaCentauri 18 days ago +1

    @13:00
    I felt like there was a code that just entered your being. And you had that revelation of macro expanding life altering script that subtly changes ones life. Not all in one call but gently nudgingly like a hash or crc or plane like CUDA where the code of the Jedi master is shared with everyone everywhere with everything .. EVERYTIME

  • @Chalisque
    @Chalisque 1 year ago +1

    I'm not a CPU expert either. But with regards to float32, the FP is done in the SIMD registers. Most likely Mojo will convert things to use packed SIMD where possible, and you can fit twice as many FP32s in a SIMD register as FP64s. Loading a single FP32 or FP64 is likely memory bound, so using FP32s means you can have more in cache at the same time. I guess an expert (i.e. not me) doing proper benchmarks will give a better picture.

  • @fg786
    @fg786 1 year ago +5

    I'm a simple one; I just wonder, will we ever run out of names for new programming languages?

    • @sandworm9528
      @sandworm9528 1 year ago +4

      cat ~/Politics/new_genders.txt >> ~/Programming/language_names.txt

    • @tetsuoiiii
      @tetsuoiiii 1 year ago

      Just recycle them, as these junk languages will never catch on.

  • @quachhengtony7651
    @quachhengtony7651 1 year ago +2

    You know the saying, "if it's too good to be true..."

  • @9SMTM6
    @9SMTM6 1 year ago

    Maybe "MLIR" is something else, but I would guess this is already happening with most languages, and ESPECIALLY it's already happening - and unavoidable - on GPUs.
    Rust, like clang and most other modern compiled languages I know of, has its own intermediate representation. It does some optimizations on that, and then it goes to LLVM IR, another intermediate representation, which LLVM then compiles to machine code.
    That's 2 intermediate representations already.
    Then, when you want to run stuff on a GPU, modern APIs like Vulkan, Metal or DX12 are targets of some default shader language with a first-party compiler (often a C dialect), but there are also other frontends, like e.g. Rust (an experimental community project). Those get compiled to an intermediate language that is part of the API - for Vulkan that is SPIR-V - and that then usually gets compiled again to a vendor-specific representation that actually gets executed.

  • @savorsauce
    @savorsauce 11 months ago

    If Mojo can sustain these speedups, even just the base 8x gains, it could save lots of money on heavy computations when training big models.

  • @l0ad1
    @l0ad1 1 year ago

    float32 and float64 are computed on amd64 in the same FPU with 80 bits of precision, so the time (latency) to compute is the same; but, if aligned properly, the cpu can parallelize internally, so you can always do twice as many float32s as float64s. Thus float32 gives a higher throughput generally, but both have the same latency.

    • @nakedsquirtle
      @nakedsquirtle 1 year ago

      plus the point of SIMD is to do identical operations on multiple values, and if you are using an f64 rather than an f32, it takes up twice as many slots in the SIMD operation

  • @arcanernz
    @arcanernz 1 year ago

    I would like a Typescript version of mojo please, where the types actually do something at runtime.

  • @TankorSmash
    @TankorSmash 1 year ago +1

    The Fireship video just came out a few days ago, and we're already reuploading it :(

  • @codecubix
    @codecubix 1 year ago

    but can you minify python code in case it replaces JS in the browser?

  • @anon-fz2bo
    @anon-fz2bo 1 year ago

    its pretty cool tbh, both swift and llvm are huge & cool in their own right too.

  • @mvargasmoran
    @mvargasmoran 1 year ago +2

    "What kind of BS measurement are they doing?" best question ever.

  • @crides0
    @crides0 1 year ago +3

    If it has a good type system, is reasonably fast (compared to C) and doesn't have a bunch of features crammed in, then it should be a fine language

  • @Dev-Siri
    @Dev-Siri 1 year ago +1

    Since mojo is just python,
    Python devs can safely put 10+ years of Mojo experience on their resume.

  • @AsbestosSoup
    @AsbestosSoup 7 months ago

    Has anybody found a mojo vs rust performance benchmark testing rust's worst-case to a mojo implementation and vice versa?

  • @FirstNameLastName-eo2pq
    @FirstNameLastName-eo2pq 1 year ago +2

    I think for the 32/64-bit operations... it all depends what the compiler does. On a 64-bit machine the compiler may simply use a 64-bit float underneath and be done with it. I remember being surprised at work that going from 16 to 32 bits was little or no problem, but when moving to 64-bit machines *EVERYTHING* had to be on a 64-bit word boundary - and when it wasn't... BAM! EXCEPTION!!!! So the easiest solution was to tell the compiler to automatically align everything internally on 64-bit boundaries, and most everything then worked. If I'm remembering correctly, there were some places where auto-align didn't work, more often when doing bit operations on structs/unions/classes - this was in c/c++ at the time. I would think MOJO and other things these days treat this as a hidden detail under the covers that they simply fix up for you automagically.
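The alignment behaviour described above can be poked at with the standard-library struct module: '@' (the default) uses native alignment and inserts padding, while '=' packs with standard sizes and none:

```python
import struct

# Native alignment ('@') pads a char+double pair so the double sits on its
# natural boundary; '=' uses standard sizes with no padding at all.
aligned = struct.calcsize("@cd")  # typically 16 on x86-64: 1-byte char + 7 padding + 8-byte double
packed = struct.calcsize("=cd")   # always 9: 1 + 8, as if everything were byte-packed

print(packed, aligned)
```

C compilers apply the same padding to structs, which is why the commenter's unaligned 64-bit accesses faulted until the compiler was told to align everything.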

  • @playea123
    @playea123 1 year ago +23

    I don't get the criticism that they picked a bad feature of python to compare against (i.e. the for loop). In my mind, it's fantastic that it improves on what python does badly. I don't see why anyone would use it if it just improved on libraries that already use C or C++ under the hood, like numpy or pandas. The whole idea for me is that, by using mojo, you get a better version of python in all cases, and especially the most basic ones (e.g. just a simple for loop), without having to learn any new syntax. I think mojo will be great for already good python developers who already use type hints. Although I am a bit salty that mojo doesn't use the same case for types as Python (e.g. Int vs int). I don't think mojo is trying to replace Rust or C++. The jobs current python users mostly do simply aren't the same as what Rust and C++ users do (unless for some reason you work at a company that uses python for backend or game engine development). Mojo is supposed to make data engineering, data analysis, data science and ML work better. No one was really using Rust for that.

    • @fueledbycoffee583
      @fueledbycoffee583 1 year ago +1

      I actually work on my backend in python! Our web server uses flask and is pretty good! The dx of making a backend with python is amazing, and IMO a lot better than javascript. Also the production web servers, if I am correct, are written in c++, so you don't get as much of a performance penalty.

    • @playea123
      @playea123 1 year ago

      @@fueledbycoffee583 I would prefer a backend in python over JS too, but I know that doesn't scale well and the dynamic typing is problematic. Not saying your company is bad or anything. It's just that massive companies aren't likely to extensively use Python on the backend.

    • @theairaccumulator7144
      @theairaccumulator7144 1 year ago

      It's closed source

    • @fueledbycoffee583
      @fueledbycoffee583 1 year ago +1

      @@playea123 we do have a rule: all python must be written in a typed way. We extensively use data classes, enums and validators so we don't shoot ourselves in the foot as much as possible. Since our backend is a big thing, we must do it that way - without it, it would be a tangled mess.

    • @fueledbycoffee583
      @fueledbycoffee583 1 year ago

      Ironically we don't use typescript, because we arrived at the conclusion that typescript is not a good type system for js. It's highly subjective, but we don't enjoy the type system of TS. We get along pretty ok with vanilla js.

  • @user-lg7td1he3s
    @user-lg7td1he3s 1 year ago

    which language do you think is the fastest, the most performant: rust, c, c++, mojo ("it's definitely not mojo"), or another language entirely?

  • @Septumsempra8818
    @Septumsempra8818 11 months ago

    "Why are the robots looking at the keyboard?"

  • @chrisroberts1773
    @chrisroberts1773 1 year ago

    Loved 'Mojo programmer, must have 10 years experience'.

  • @sorry4all
    @sorry4all 11 months ago

    I've never felt such craving for a language. Python with structs, but also without ; and {.

  • @SmirkInvestigator
    @SmirkInvestigator 1 year ago +1

    O wow, I remember looking at neural-js 10 or 8 years ago. It was beyond me so I put it back.

  • @vitluk
    @vitluk 1 year ago +1

    This is really cool! I might finally have a reason to use "python" again

  • @pieter5466
    @pieter5466 7 months ago

    13:144 Would love to know why you don't use the term "developer experience" - as well as perhaps certain other terms.

  • @chrisochs7112
    @chrisochs7112 1 year ago

    Unity does something similar with their burst compiler. In both cases these solutions are solving within the constraint of being a superset/subset of another language.
    The approach has pros and cons. You are carrying all the baggage of the language you are extending with you. And you lose some breadth in random places vs not just building something like this on top of a good general purpose language that is already performant like Rust/C++.
    But it's not like they had a lot of choices either. Build your own vs what? It's really C++ or Rust and from there you run out of good options really fast.

    • @nakedsquirtle
      @nakedsquirtle 1 year ago

      Most Python jobs and C++/Rust jobs do not have as much overlap. The data scientists and such that are using Python will benefit greatly by integrating Mojo into their system.

  • @kebman
    @kebman 1 year ago

    OMg fireship really gets my mojo on!1

  • @thekwoka4707
    @thekwoka4707 1 year ago

    I think you're wrong on the float32 thing.
    At least in general: a compiler can actually recognize that the numbers are 32-bit and pack two into a single 64-bit register, which can improve performance.
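A quick way to see the packing this comment describes, assuming NumPy is installed: float32 elements are half the size of float64, so twice as many values fit in any fixed-width register, cache line, or SIMD lane.

```python
import numpy as np

# Same element count, different widths: a64 occupies exactly
# twice the bytes of a32, so half as many values fit per
# 64-bit register / cache line / SIMD lane.
a32 = np.ones(1_000_000, dtype=np.float32)
a64 = np.ones(1_000_000, dtype=np.float64)

print(a32.itemsize, a64.itemsize)   # 4 8 (bytes per element)
print(a64.nbytes // a32.nbytes)     # 2
```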

  • @JonitoFischer
    @JonitoFischer 11 months ago

    float32 and float64 are handled by the FPU; no masking operations required

  • @theyashbhutoria
    @theyashbhutoria 3 months ago

    16:15 Yeah robots can just relay the text wirelessly from their "brain" to any other system.

  • @wld-ph
    @wld-ph 5 months ago

    Mojo is compiled Python; at the moment it's WSL-only if you have Windows. Waiting for a Windoze version, but RW read rights twins... now have X as a buddy (what a Twatter MuskRat?)

    • @wld-ph
      @wld-ph 5 months ago

      (Scrolling) HelloWorld runs so much faster... lol...

  • @eriklundstedt9469
    @eriklundstedt9469 1 year ago

    What happened to using Lisp for AI programming?
    It's easier to read, write and learn, AND it's got documentation as part of its function definition. I'm not talking about comments here; I'm talking about the fact that you can put a regular "string" in your function definition and have it show up as the description in the LSP UI.
    Can you tell that I use Lisp for practically everything (including building my own website)?

    • @isodoubIet
      @isodoubIet 1 year ago

      Lisp is an unreadable mess that nobody but the hardcore set of true believers ever actually liked, that's what happened to it.

  • @recursion.
    @recursion. 1 year ago

    By the end of this year, there will be new language called bojo

  • @mihalious
    @mihalious 1 year ago

    4:11 I've heard that the fancy new algorithms, despite having better big-O complexity, aren't really cache-friendly, so in practice the standard algorithm won't be slower, at least.
    But I'm not 100% sure that's true.
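A small demonstration of why the "standard" algorithm holds up in practice, assuming NumPy is installed: NumPy's `@` dispatches to a cache-blocked BLAS matmul with the same O(n^3) big-O as the textbook triple loop below, yet it runs orders of magnitude faster on real hardware precisely because of cache tiling and SIMD, not a better asymptotic bound.

```python
import numpy as np

def naive_matmul(a, b):
    """Textbook O(n^3) triple loop: same big-O as BLAS's blocked
    algorithm, but with none of the cache tiling or vectorization."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i, p] * b[p, j]
            out[i, j] = s
    return out

a = np.random.rand(40, 40)
b = np.random.rand(40, 40)
# Same answer, wildly different wall-clock time at larger sizes.
assert np.allclose(naive_matmul(a, b), a @ b)
```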

  • @x1expert1x
    @x1expert1x 1 year ago +1

    Every time Prime streams I just imagine a ranch with a bench and Prime sitting on it, caught in a 2-hour heated debate with himself, trying to desperately convince a field of grazing donkeys what the best software engineering paradigm is. Only kidding, but the "unga bunga" chats made me think of that, love your streams my man!

  • @excelfan85
    @excelfan85 1 year ago +1

    Thanks @ThePrimeagen, I just burned all my Rust books and am eagerly awaiting your Mojo merch and the future where every conversation becomes a Mojo convo. Also, can we agree that we measure dicts with the same measurement we use for horses? Hands. How many hands is your dict?

  • @philfernandez835
    @philfernandez835 1 year ago

    keep it softcore baby. showtime after dark over here

  • @KarimMarbouh
    @KarimMarbouh 1 year ago

    good stuff

  • @JoaoVitorBRgomes
    @JoaoVitorBRgomes 1 year ago

    6:22 machine learning works better at float32 instead of 16 or 64 because it balances precision and memory usage.
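The precision/memory trade-off this comment mentions can be read straight off NumPy's `finfo` (assuming NumPy is installed): each step down in width halves the memory per element but widens the machine epsilon, i.e. the rounding error you accept per operation.

```python
import numpy as np

# Bytes per element vs. machine epsilon for the common ML float widths.
for dt in (np.float16, np.float32, np.float64):
    info = np.finfo(dt)
    print(dt.__name__, info.bits // 8, "bytes, eps =", info.eps)
```

float32 sits in the middle: roughly 7 decimal digits of precision at half the memory (and often double the SIMD throughput) of float64, which is why it is the traditional default for training.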

  • @MungeParty
    @MungeParty 3 months ago

    Notice how much code you got done without types in that JavaScript code base.

  • @kebman
    @kebman 1 year ago

    This got my mojo on. I've got 10 years experience with this language, and I'm looking for a worthy project.

  • @morgan0
    @morgan0 11 months ago

    there’s a nim dsl which lets you compile python mostly as is, but i haven’t used it, and nim is a lot like python. also nim has existed for a while and has a bunch of libraries, whereas mojo is starting fresh

    • @morgan0
      @morgan0 11 months ago

      also there’s a nim library to import the c backend lib for python code and interface with it in nim, and a library that does that with all the python standard library. afaik it doesn’t require using a pyobject type, that gets done for you, but i haven’t used it yet

  • @atomgutan8064
    @atomgutan8064 9 months ago

    Do they know about the frickin numpy library?

  • @Chalisque
    @Chalisque 1 year ago +4

    The other thing to consider is that by restricting this stuff to a subset of Python (in terms of what can be optimised), and not allowing precise low-level stuff like e.g. C, there are potentially more optimisations available since Mojo doesn't need to worry about pointers and other stuff. (Roughly: The more your language can do, the less the compiler can assume about a program's behaviour.) The average data scientist using this could quite likely end up with something faster than what your average C programmer could do in C (as said average C programmer likely knows less about optimising numerical code than the authors of Mojo). I look forward to seeing where this ends up.

  • @adambickford8720
    @adambickford8720 1 year ago +1

    Autotune feels like it's essentially JIT?
    Also, once you understand the superset, is it really Python? Type systems, structs, etc. aren't just syntactic sugar and take a bit of learning to truly understand.

    • @johnwu5908
      @johnwu5908 1 year ago

      If it provides an 8x speed boost without modifying the original code, as they said, then there's not much barrier to transitioning to Mojo imo

    • @RickGladwin
      @RickGladwin 1 year ago

      It seems like the main difference between JIT and autotune, as I understand them, is that JIT will do extra compiling work and cache the results at runtime, based on what parts of the code are being run in the interpreter most often and thereby using up redundant processing by being interpreted over and over, whereas autotune is actually compiling a given section of code a few different ways at compile time, measuring what the performance is like on that particular system, and including that in the rest of the compiled code.
      I’m not an expert in either feature though, and the Just In Time compiler implementation probably varies across languages.
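As a toy illustration of the autotune idea discussed above (plain Python; the function names are invented for the example): benchmark each candidate implementation on the current machine once, then commit to the fastest, rather than deciding at runtime based on hot paths the way a JIT does.

```python
import timeit

def sum_builtin(xs):
    # Candidate 1: C-implemented builtin.
    return sum(xs)

def sum_loop(xs):
    # Candidate 2: interpreted loop.
    total = 0
    for x in xs:
        total += x
    return total

def autotune(candidates, arg):
    """Time each candidate on this machine and keep the fastest:
    a toy stand-in for compile-time autotuning."""
    timed = [(timeit.timeit(lambda f=f: f(arg), number=50), f)
             for f in candidates]
    return min(timed, key=lambda t: t[0])[1]

data = list(range(10_000))
best = autotune([sum_builtin, sum_loop], data)
print(best.__name__)
```

The winner depends on the machine, which is exactly the point: the selection is measured, not assumed.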

  • @draakisback
    @draakisback 11 months ago +1

    This is so funny. There are languages like Rust and Julia which have bindings to all sorts of neural network frameworks that are really fast. I don't know why these people think it's a great idea to reinvent the wheel without trying these other languages first. Julia is almost as fast as C in some cases and it's got all sorts of really cool symbolic math features

  • @daysofgrace2934
    @daysofgrace2934 1 year ago

    Python is versatile. If you need speed you have Cython; if you need a full-stack web-app platform you have Anvil to run Python in the browser and server-side; and you have NumPy, TF, PyTorch, Pandas, Plotly. I hardly use C/C++/VB. SQL stood the test of time; still using that since the 90s...

  • @sankalpmukim1052
    @sankalpmukim1052 1 year ago

    Does Mojo come with an HTTP server implementation???

  • @JSiuDev
    @JSiuDev 1 year ago

    21:00 I watched all the way here and was about to install it ... WT ... LOL🤦‍♂

  • @puppergump4117
    @puppergump4117 1 year ago

    4:30 This is what GPUs are good at; just use a shader to multiply it.

  • @christianpaulus5311
    @christianpaulus5311 11 months ago

    15:33 "The servo mechanisms in my neck are designed to approximate Human movements. I did not realize the effect was so distracting." Data

  • @nadiaezzarhouni300
    @nadiaezzarhouni300 9 months ago +1

    Imagine if you did that level of optimization in assembly 😂 the processor will be chilling at cold temperatures in the corner and your brain and fingers will catch fire 😂 imagine doing it in native binary instead 💀 but it will be rewarding in terms of performance

  • @ashartariq5122
    @ashartariq5122 10 months ago

    Am I just stupid or does Fireship talk at 1.5x speed?