Rust Artificial Intelligence (The Simple Way)

  • Published Nov 28, 2024

COMMENTS • 114

  • @MrKeebs
    @MrKeebs 2 years ago +53

    “This is probably the most undersubscribed Rust channel I’ve seen in a while. Please all of you go out there and spread the word so this guy gets millions of subscribers” is my version of the opening dialogue 😊
    Thanks again for more amazing content

    • @codetothemoon
      @codetothemoon  2 years ago +3

      LoL thanks so much for the kind words MrKeebs!

    • @Moof__
      @Moof__ 2 years ago

      what are some other rust channels you would recommend?

    • @chris-pee
      @chris-pee 2 years ago +1

      @@Moof__ "No Boilerplate" is good, even if a bit cult-like.

  • @spaceyfounder5040
    @spaceyfounder5040 2 years ago +34

    I'd love to try it out 🙌. I hope Rust will get more popularity in the AI domain soon.

    • @codetothemoon
      @codetothemoon  2 years ago +4

      It's worth a try. and me too!

    • @ollydix
      @ollydix 1 year ago

      AI researchers are too dumb to write rust 🤓

  • @CedevismoLiberale
    @CedevismoLiberale 2 years ago +1

    This comes at the right time with the right project for me. I just finished a Telegram bot that used OpenAI. I'll totally go with this solution to cut on costs. I cannot thank you enough for sharing this with the world.

  • @abstractqqq
    @abstractqqq 2 years ago +4

    Overall great video. I can think of a few business reasons people are not deploying bots like this: 1. The bot may not be able to answer business-specific questions well, because those questions may not be in the training set. 2. To solve point 1, one has to somehow retrain the bot (for every business the bot is deployed to!), but businesses may not be willing to do so because it takes time and money and they have to hand over their call transcripts. 3. This change will impact head count in a big company. We all know what that means. So it's only applicable to smaller companies, who don't necessarily have that many calls... Either way, it is cool technology, but I don't see the business case. I believe in market efficiency to a certain degree: if it were really that great, someone would have done it or would be doing it, given that the technology is not new and everything is open source as you mentioned.

  • @isheanesunigelmisi8400
    @isheanesunigelmisi8400 2 years ago +12

    I've been building a customer support tool with GPT Neo, this stuff is very powerful

    • @codetothemoon
      @codetothemoon  2 years ago +1

      Nice! What are the biggest challenges you've encountered so far?

    • @isheanesunigelmisi8400
      @isheanesunigelmisi8400 2 years ago +1

      @@codetothemoon So I'd like to be able to answer any customer question based on any amount of documents the business has input, without needing to rely on keywords to narrow down the documents - basically a good semantic search to narrow results down, with GPT phrasing it all properly

    • @luv2stack
      @luv2stack 2 years ago +2

      @@isheanesunigelmisi8400 Good luck with that..

  • @glennmiller394
    @glennmiller394 2 years ago +10

    Thanks!

    • @codetothemoon
      @codetothemoon  2 years ago

      Wow thank you so much Glenn!!! I really appreciate your support of the channel!

  • @viktorklyestov2108
    @viktorklyestov2108 10 months ago

    This video deserves more views

  • @kwako21
    @kwako21 2 years ago +1

    The space background! I like it!

    • @codetothemoon
      @codetothemoon  2 years ago

      Thanks! Was worried it might be a little distracting, glad to hear it didn't miss the mark 🙃

  • @CodeWithCypert
    @CodeWithCypert 2 years ago +5

    This was a really great video. Thanks for putting it together!

  • @dgurnick
    @dgurnick 2 years ago +2

    Exceptional. Thanks for this.

    • @codetothemoon
      @codetothemoon  2 years ago

      Wow thank you so much for your support Dennis!! Really happy you liked the video

  • @mikkel3135
    @mikkel3135 2 years ago +5

    Would be interesting to see if this could use the newer Bloom model. Any reason it wasn't? Hardware?

    • @codetothemoon
      @codetothemoon  2 years ago +6

      Great question, I'm currently looking into this. What I do know is that rust-bert seems to accept model files of a different format than the Transformer library in the Python world, and Hugging Face only offers models in ".ot" format for a small subset of all the models they offer. They don't offer the .ot format for Bloom - I'm not sure if that is because nobody has gotten around to doing the conversion or if there is some technical limitation within rust-bert that precludes such a thing.
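For readers who want to try a model that isn't offered in `.ot` form: the rust-bert repository documents a conversion script for turning standard PyTorch weights into the `.ot` format it reads. A rough sketch of that workflow (paths are placeholders; check the rust-bert README for the exact Python dependencies):

```shell
# The conversion utility ships with the rust-bert source tree
git clone https://github.com/guillaume-be/rust-bert
cd rust-bert

# Convert a Hugging Face PyTorch checkpoint into rust-bert's .ot format
# (pytorch_model.bin is the weights file downloaded from the model hub)
python ./utils/convert_model.py path/to/pytorch_model.bin
```

Whether a given architecture then works end-to-end still depends on rust-bert having an implementation for it, so this alone may not be enough for Bloom.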

  • @joshespinoza3349
    @joshespinoza3349 1 year ago

    What is generating your "quick fixes" options?
    Mine only ever shows "no quickfixes found"

  • @keno9757
    @keno9757 2 years ago +1

    good work king, love you

  • @kaan608
    @kaan608 2 years ago +2

    Great content as usual. Tinkering with AI and Solana/Rust for the last two years, I agree the possibilities are limitless; the problem is making viable apps for real-life use cases with limited resources

  • @netify6582
    @netify6582 2 years ago +3

    Very interesting. Is there any way to speed up the text generation? Considering the amount of processing power and time required, this is not really practical as it is.

    • @codetothemoon
      @codetothemoon  2 years ago +4

      I think having a CUDA capable GPU (I think that means Nvidia only, somebody keep me honest here) is the best way to speed things up. To your point - I think that would be the only way to realistically use this in production. I haven't tried it locally, but if you see how quickly Hugging Face and OpenAI respond in their web interfaces, it seems like it's near instantaneous.

    • @oxey_
      @oxey_ 2 years ago +5

      compiling with `--release` for starters will help a lot :p
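For context on that tip: cargo builds without optimizations by default, and the optimized build is opt-in:

```shell
# Debug build (the default): fast to compile, slow to run
cargo run

# Release build: optimizations on - often a large speedup for
# compute-heavy crates like rust-bert, though GPU inference is
# still the bigger lever
cargo run --release
```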

  • @DanForbesAgain
    @DanForbesAgain 1 year ago +1

    This is such a cool and exciting video, but I cannot run this demo on my machine, which has 16G RAM. I ran it from the console and had a system monitor open - it consumed all 14.2G of RAM that my system had to offer. Could you maybe do an updated version with Stanford Alpaca or something, and also maybe talk a little bit about how a developer could go about modifying this to create something new?

    • @codetothemoon
      @codetothemoon  1 year ago +2

      thanks! Yeah this model is resource hungry! There seem to have been a ton of developments in the space since this video was made, and I definitely plan on doing more on the topic.

    • @DanForbesAgain
      @DanForbesAgain 1 year ago

      Amazing! Looking forward to it 👀

  • @pabloqp7929
    @pabloqp7929 2 years ago +2

    great content mate, keep it up!

  • @martmcmahon
    @martmcmahon 2 years ago +2

    As of Oct 30, 2022, brew gives a warning that libtorch is deprecated and recommends using the pytorch package instead.

    • @joaojunqueira4445
      @joaojunqueira4445 2 years ago +2

      I can't execute cargo run - did you manage to solve it?
      I'm getting some libtorch error
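If anyone else is stuck here: rust-bert builds on the `tch` crate, which locates libtorch via the `LIBTORCH` environment variable at build time. A sketch of a manual setup (the libtorch path is a placeholder for wherever you unpacked or brew-installed it):

```shell
# Tell the tch-rs build script where libtorch lives (placeholder path)
export LIBTORCH=/path/to/libtorch
# Make the shared libraries visible at run time
export LD_LIBRARY_PATH="$LIBTORCH/lib:$LD_LIBRARY_PATH"       # Linux
# export DYLD_LIBRARY_PATH="$LIBTORCH/lib:$DYLD_LIBRARY_PATH" # macOS

# Rebuild from scratch so the build script picks up the new paths
cargo clean && cargo run --release
```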

  • @TheDuD52
    @TheDuD52 1 year ago +1

    I'm planning to make a personal AI assistant that has tie ins to obsidian for note taking and various IoT/WoT devices around my home so that I can hopefully have an audio interface to take notes, control my home, etc.

    • @codetothemoon
      @codetothemoon  1 year ago +1

      nice, that sounds like a fun project! definitely report back on how it goes if you can!

    • @TheDuD52
      @TheDuD52 1 year ago

      @@codetothemoon of course! Progress is relatively slow as I work on other projects and slowly fumble my way through developing it. As of now I just have the vosk model set up in Rust to do voice-to-text conversion, and I'm working on using the cpal crate to get audio directly from a mic. Once I can get input from the mic and translate it to text, I can start looking into an LLM to feed the text into as input.

  • @avgvstvs96
    @avgvstvs96 1 year ago +1

    links missing from description. pls dont hurt my feelings like that again

    • @codetothemoon
      @codetothemoon  1 year ago

      💔 oh no! I forgot - what link did I promise? lmk and I'll add it :)

    • @avgvstvs96
      @avgvstvs96 1 year ago

      @@codetothemoon haha looks like I was hoping for the link to gpt neo on huggingface. no worries though, i found it 😁

  • @BluffAlice
    @BluffAlice 2 years ago +2

    Fantastic video. Is there any advantage of using this rust method compared to a direct python module?

    • @TheVonWeasel
      @TheVonWeasel 2 years ago +6

      you get to use rust instead of python. I can think of no greater benefit you could possibly want

    • @codetothemoon
      @codetothemoon  2 years ago +3

      Thanks Mian - yeah I don't think there is much performance benefit as the inference is being done by the same low level code whether you're invoking it via Python or Rust. But yeah, some might see the Rust language itself as a huge advantage 😎

  • @TrustifierTubes
    @TrustifierTubes 2 years ago +2

    I am going to see if this can be used to build domain-specific Q/A system that can answer about rules that apply for a situation. :-) I'll let you know how it goes

    • @codetothemoon
      @codetothemoon  2 years ago +1

      That sounds like a really interesting use case, can't wait to hear about the results!

  • @principleshipcoleoid8095
    @principleshipcoleoid8095 2 years ago +2

    Huh, I wonder if Rust can work with Stable Diffusion, considering Rust has a transformers port

    • @codetothemoon
      @codetothemoon  2 years ago +1

      Good question, I haven't played with stable diffusion yet - it sure looks incredible. Would be cool to be able to use it in a Rust stack

  • @fish1r1
    @fish1r1 2 years ago +1

    can anyone tell me what the .. operator is doing at 6:14?
    edit: found it, it's "struct update syntax"
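For anyone else who paused at that: a minimal, self-contained illustration of struct update syntax (the struct and fields here are invented for the example, not the ones from the video):

```rust
#[derive(Debug)]
struct GenerateConfig {
    max_length: i64,
    do_sample: bool,
    temperature: f64,
}

// A hand-written Default so there's something for `..` to copy from
impl Default for GenerateConfig {
    fn default() -> Self {
        Self { max_length: 20, do_sample: true, temperature: 1.0 }
    }
}

fn overridden() -> GenerateConfig {
    // `..Default::default()` fills in every field not listed explicitly
    GenerateConfig { max_length: 100, ..Default::default() }
}

fn main() {
    println!("{:?}", overridden());
}
```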

    • @codetothemoon
      @codetothemoon  2 years ago

      correct! It definitely comes in handy sometimes!

  • @Jure1234567
    @Jure1234567 2 years ago +1

    I wonder if this thing can generate code. Can you publish this to your github repo?

    • @codetothemoon
      @codetothemoon  2 years ago +4

      good question, I haven't tried that! For some reason I neglected to commit the code for this one, here it is github.com/Me163/youtube/tree/main/bert_test

  • @AbuAl7sn1
    @AbuAl7sn1 2 years ago +1

    I'm curious about the way you're editing your videos

    • @codetothemoon
      @codetothemoon  2 years ago

      I used to edit them myself, but I've used a few fantastic editors for the last 6 or 7, so I probably won't be much help here :(

  • @Jianju69
    @Jianju69 2 years ago

    Can this model be exported to ONNX?

  • @fenixdota1116
    @fenixdota1116 1 year ago +1

    is there any github code of this sample? please

    • @codetothemoon
      @codetothemoon  1 year ago

      github.com/Me163/youtube/tree/main/bert_test

  • @irlshrek
    @irlshrek 2 years ago +1

    dude...so sick

  • @peterferguson6996
    @peterferguson6996 1 year ago

    This is a really interesting technology. I am definitely going to mess with it a little.

  • @lupinthird
    @lupinthird 2 years ago +1

    I went to set this up on Ubuntu 22.04, and everything was fine until I went to execute "cargo run". Seems like there's a libcurl related problem. I have libcurl installed, but it seems to be complaining about inflate/deflate issues, so I checked to make sure I had zlib installed too, and I did. Anybody else run into this issue?

    • @CarlosWong54
      @CarlosWong54 2 years ago

      I didn't have a problem using wget

    • @lincolnwallace17
      @lincolnwallace17 1 year ago +1

      I had some problems on Ubuntu 22.04, but it was because I tried to specify the libtorch path manually (like he showed in the beginning).
      Instead I installed libtorch using:
      sudo apt install -y libtorch-dev libtorch-test libtorch1.8
      So, theoretically, the installation location is already on the system search path, and the OS knows where to look.
      When I just don't specify the libtorch installation location manually, it works just fine.

  • @principleshipcoleoid8095
    @principleshipcoleoid8095 2 years ago +1

    How much RAM does it usually need?

    • @codetothemoon
      @codetothemoon  2 years ago

      Good question, I wasn't actually watching my resource usage when I was trying it out. My guess is that it would use at least as much memory as the model occupies on disk, so at least 10GB.

  • @fullmaster9333
    @fullmaster9333 2 years ago +1

    What's the font in your terminal?
    I like it 👍

    • @codetothemoon
      @codetothemoon  2 years ago +1

      Thanks, I actually wasn't sure what I was using. I just looked it up and it's Monaco!

    • @fullmaster9333
      @fullmaster9333 2 years ago

      I'm glad you answered, cause I wasn't able to find it by myself. The best match was Osaka Mono, but it's not the same.

  • @1879heikkisorsa
    @1879heikkisorsa 2 years ago +1

    What about hard rules? Let's say you have a data contract for cell phones and you offer 5 GB of included data every month and unused capacity will be lost. How can we enforce the model to obey this hard constraint?

    • @codetothemoon
      @codetothemoon  2 years ago

      I think the more advanced models should have no problem adhering to these hard rules, but I'm not sure about GPT-Neo 2.7B. This is the sort of rule I would explain in the text generation prefix. I would expect it to adhere to it the vast majority of the time, but I'm not sure about 100% of the time. Maybe with the right tuning parameters!
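One way to attempt that is to spell the constraint out in the prefix prepended to every generation request, and validate anything contractual outside the model as a backstop. A hypothetical prefix builder (all wording invented for illustration, not from the video):

```rust
// Builds a hypothetical generation prefix encoding hard business rules.
// Even with rules in the prompt, a model may occasionally violate them,
// so contractual constraints should also be enforced outside the model.
fn support_prefix(rules: &[&str]) -> String {
    let bullet_list = rules
        .iter()
        .map(|r| format!("- {r}"))
        .collect::<Vec<_>>()
        .join("\n");
    format!(
        "You are a mobile-plan support agent. Follow these rules exactly:\n{bullet_list}\n\nCustomer: "
    )
}

fn main() {
    let prefix = support_prefix(&[
        "Every plan includes 5 GB of data per month.",
        "Unused data does NOT roll over to the next month.",
    ]);
    print!("{prefix}");
}
```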

  • @andydataguy
    @andydataguy 1 year ago +3

    I'm working on an OpenAI content generator. The process you described could be great for building synthetic training data on the cheap. Could then use Fluvio for Rust-based data pipelines. The day of Rust AI adoption is quickly approaching 🤗

  • @lamachinga
    @lamachinga 2 years ago +1

    Worked, thx

    • @codetothemoon
      @codetothemoon  2 years ago

      Nice Mar!

    • @principleshipcoleoid8095
      @principleshipcoleoid8095 2 years ago

      How much RAM did it use? Mine used almost all of it - 7 of my 8G of RAM - then grew to 15G of virtual memory and crashed :(
      Is it me doing something wrong with the local resources, or does it really need 23G+ of RAM?

  • @camotsuchuoi
    @camotsuchuoi 1 year ago

    thank you :)

  • @livingworld3462
    @livingworld3462 1 year ago +1

    you're very good

  • @alphabetapapa
    @alphabetapapa 1 year ago

    It will be

  • @realsemig
    @realsemig 1 year ago +1

    Damn you type fast

  • @jcww
    @jcww 2 years ago +1

    i think i'll win next cc, thank you

  • @dragonmax2000
    @dragonmax2000 2 years ago +1

    That is it. I'm done thinking in this world. ;) Onto another universe. Oh, wait, no AI there yet. Hm, maybe I'll stick around for a bit... ;) here.

    • @codetothemoon
      @codetothemoon  2 years ago +2

      I agree, dimensions that have AI are vastly preferable to those that don't

  • @AtRiskMedia
    @AtRiskMedia 2 years ago +1

    correction: i'm making a multi-billion $ start-up with this =P (appreciating the encouragement) 😀

  • @Space8K
    @Space8K 2 years ago +1

    This gentleman sounds exactly like zuk 🤣

    • @codetothemoon
      @codetothemoon  2 years ago +2

      Hah! Thankfully I don't see running a $300B company anywhere in my immediate future...

  • @kamalkamals
    @kamalkamals 2 years ago +1

    switch from emacs to vscode hh :)

    • @codetothemoon
      @codetothemoon  2 years ago

      So far I use vscode for all my videos, primarily because I get the sense that it's what most people use. When prototyping I usually use Helix or neovim

    • @kamalkamals
      @kamalkamals 2 years ago

      @@codetothemoon sure thing and i like ur content, keep up

  • @bradstudio
    @bradstudio 1 year ago

    150GB+ RAM!??

  • @qm3ster
    @qm3ster 1 year ago +2

    Muh dude, please no more
        loop {
            let mut line = String::new();
            std::io::stdin().read_line(&mut line).unwrap();
    either do (ergonomic; `lines()` needs `use std::io::BufRead;`)
        for line in std::io::stdin().lock().lines().map(Result::unwrap) {
    or (efficient; `read_line` here also comes from the `BufRead` trait)
        let mut line = String::new();
        let mut stdin = std::io::stdin().lock();
        loop {
            line.clear();
            stdin.read_line(&mut line).unwrap();
    Creating the String inside the loop means it will allocate every time (once it begins being written to, and potentially more than once per line for long lines).
    Calling `std::io::stdin()` always checks lazy initialization.
    Reading from a `Stdin` instead of a `StdinLock` locks a mutex, even if the next line is already in the BufReader's buffer!
    In this video the performance impact is absolutely dwarfed by running the model, but this kind of REPL loop is something you do in a lot of your videos, so switching to either of the other approaches would make sense.
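To make the "efficient" variant concrete, here is a complete sketch; making the loop generic over `BufRead` is an addition of mine (not from the comment) so the same code can be exercised without a terminal:

```rust
use std::io::BufRead;

// Reads lines from any buffered reader, reusing one String allocation.
// Returns the number of non-empty lines handled.
fn repl<R: BufRead>(mut input: R, mut handle: impl FnMut(&str)) -> usize {
    let mut line = String::new();
    let mut count = 0;
    loop {
        line.clear(); // reuse the buffer instead of allocating per line
        if input.read_line(&mut line).unwrap() == 0 {
            return count; // 0 bytes read means EOF
        }
        let trimmed = line.trim_end();
        if !trimmed.is_empty() {
            handle(trimmed);
            count += 1;
        }
    }
}

fn main() {
    // Lock stdin once, outside the loop, so each read skips the mutex
    let stdin = std::io::stdin().lock();
    repl(stdin, |l| println!("you said: {l}"));
}
```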

  • @jenreiss3107
    @jenreiss3107 1 year ago

    nix-shell -p libtorch-bin ;)

  • @duckner
    @duckner 2 years ago +1

    "SaaS service"

    • @codetothemoon
      @codetothemoon  2 years ago +1

      In retrospect, "SaaS product" probably would have been less redundant

    • @duckner
      @duckner 2 years ago

      @@codetothemoon yeah lol, nice tutorial either way

  • @swoopertr
    @swoopertr 2 years ago +1

    keep doing good work.

  • @Geeraf999
    @Geeraf999 2 years ago +10

    Suggestion: You could edit the sound of your keyboard when sped up into something pleasant.

    • @codetothemoon
      @codetothemoon  2 years ago +2

      Hah, yeah that makes sense I'll figure something out!

    • @ozkavoshdjalla
      @ozkavoshdjalla 2 years ago +18

      @@codetothemoon Pls no! It's very pleasant! I love it

    • @andydataguy
      @andydataguy 1 year ago +5

      I hope it doesn't change. I love the sped up keyboard sounds!! It's so satisfying

    • @marcomarek7734
      @marcomarek7734 1 year ago

      I never understood why some people like keyboard sounds. For me personally, the quieter the better. Let alone loud and sped up lol...

    • @johnyepthomi892
      @johnyepthomi892 1 year ago +1

      Park it, Shinde

  • @GAGONMYCOREY
    @GAGONMYCOREY 2 years ago +1

    I'm going to use it to scam grandmas out of their hard earned bitcoins.

    • @codetothemoon
      @codetothemoon  2 years ago +1

      (1) I'm not sure how easy it'll be to find grandmas with bitcoins (2) I'm sure you can think of something better

  • @NerveClasp
    @NerveClasp 2 years ago +1

    next video: praise to .eth

    • @codetothemoon
      @codetothemoon  2 years ago

      Are we talking about the top level domain .eth?

  • @c0zn1c
    @c0zn1c 1 year ago

    Could this be used to type out quick and dirty Rust code? Like a cross platform mobile app.