Use AutoGen with ANY Open-Source Model! (RunPod + TextGen WebUI)

  • Published 16 Sep 2024
  • I might be obsessed with AutoGen...
    In this video, I show you how to use AutoGen powered by TextGen WebUI and RunPod, which means you can use literally any open-source large language model with it, even Falcon 180B or Code Llama.
    Enjoy :)
    Join My Newsletter for Regular AI Updates 👇🏼
    www.matthewber...
    Need AI Consulting? ✅
    forwardfuture.ai/
    Rent a GPU (MassedCompute) 🚀
    bit.ly/matthew...
    USE CODE "MatthewBerman" for 50% discount
    My Links 🔗
    👉🏻 Subscribe: / @matthew_berman
    👉🏻 Twitter: / matthewberman
    👉🏻 Discord: / discord
    👉🏻 Patreon: / matthewberman
    Media/Sponsorship Inquiries 📈
    bit.ly/44TC45V
    Links:
    Use RunPod - bit.ly/3OtbnQx
    AutoGen Beginner Tutorial - • AutoGen Tutorial 🚀 Cre...
    AutoGen Intermediate Tutorial - • AutoGen FULL Tutorial ...
    AutoGen Fully Local - • How To Use AutoGen Wit...
    AutoGen - microsoft.gith...
    LMStudio - lmstudio.ai/
    RunPod TextGen WebUI Template - bit.ly/3EqiQdl
    Install TextGen Locally - • How To Install TextGen...
    RunPod Full Tutorial - • Run ANY LLM Using Clou...

COMMENTS • 194

  • @matthew_berman
    @matthew_berman  11 місяців тому +205

    Should I make a video testing different open-source models to see which one powers AutoGen best?

    • @scitechtalktv9742
      @scitechtalktv9742 11 місяців тому +9

      I think that would be of great utility!

    • @1-chaz-1
      @1-chaz-1 11 місяців тому +2

      Yes!

    • @Jimmy_Sandwiches
      @Jimmy_Sandwiches 11 місяців тому +4

      That would be a tremendous help, Matt. I was actually thinking about having specific agents use different models depending on the type and complexity of their roles (one role could just have access to a business data model to keep things tight in certain areas). That would really be a massive bespoke powerhouse.

    • @stickmanland
      @stickmanland 11 місяців тому

      Definitely! 😉

    • @Techonsapevole
      @Techonsapevole 11 місяців тому +2

      I'd like to see a really useful use case for AutoGen.

  • @simkjels
    @simkjels 11 місяців тому +55

    I work at a law firm, and I have set up an AutoGen group chat to simulate a legal team solving tasks. The team gathers legal information and argues legal matters between agents to come up with multiple scenarios, and a virtual judge finally rates each of the suggested solutions. I tried it on previous exams from law school and compared AutoGen's output to the exam evaluation, and it is staggering how well it performs.

    • @jesserigon
      @jesserigon 11 місяців тому +8

      is this a public repo? have you posted it to the examples chat in autogen discord? would be awesome to see.

    • @JustMaier
      @JustMaier 11 місяців тому +5

      I’d also like to see this. I want more example of people using autogen

    • @howlingdakota
      @howlingdakota 11 місяців тому +4

      This is a great idea, would love to see it!

    • @neoblackcyptron
      @neoblackcyptron 10 місяців тому +1

      you should productize this and sell it to other law firms. Don't give out the code for free to freeloaders.

    • @jesserigon
      @jesserigon 10 місяців тому +2

      @@neoblackcyptron I'm sure someone will regardless of what this guy does, but regarding the open-source community as freeloaders is weird. It's the backbone of the entire Internet age, IMO. He would need to use something like 1500 different FOSS projects (if you've ever seen a dependency tree) to productize his work.
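For readers curious what a setup like @simkjels describes might look like, here is a minimal sketch using AutoGen's group chat, assuming a local OpenAI-compatible endpoint such as the TextGen WebUI API from the video. The agent names, prompts, endpoint URL, and model name are illustrative placeholders, not the commenter's actual code, and the config key (api_base vs. base_url) depends on your pyautogen/openai versions.

```python
import autogen

# Hypothetical local endpoint (TextGen WebUI's OpenAI-compatible API); on newer
# pyautogen/openai versions the key is "base_url" instead of "api_base".
config_list = [{
    "model": "mistral-7b-instruct",
    "api_base": "http://localhost:5001/v1",
    "api_key": "sk-not-needed",  # placeholder; a local endpoint usually ignores it
}]
llm_config = {"config_list": config_list}

researcher = autogen.AssistantAgent(
    name="legal_researcher", llm_config=llm_config,
    system_message="Gather the legal information relevant to the task.")
advocate = autogen.AssistantAgent(
    name="advocate", llm_config=llm_config,
    system_message="Argue the legal matter and propose multiple solution scenarios.")
judge = autogen.AssistantAgent(
    name="virtual_judge", llm_config=llm_config,
    system_message="Rate each proposed scenario and justify the rating.")
user_proxy = autogen.UserProxyAgent(
    name="user_proxy", human_input_mode="NEVER", code_execution_config=False)

groupchat = autogen.GroupChat(
    agents=[user_proxy, researcher, advocate, judge], messages=[], max_round=12)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Analyse the following law-school exam question: ...")
```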

  • @marcfruchtman9473
    @marcfruchtman9473 11 місяців тому +10

    Your ability to parse these install instructions and organize them into a video that we can actually follow is amazing.
    Thank you for making these videos!

  • @IvanGabriele
    @IvanGabriele 11 місяців тому +8

    Thank you so much for the shout-out Matthew 😊! Amazing video and well-explained tutorial as usual! As I told you in private, even as a software engineer, you were the first one I watched, and you helped me learn so much during my first steps into the AI & LLM world. Hopefully we'll have more amazing discoveries to share 😉.

  • @koliux1
    @koliux1 11 місяців тому +14

    Things that I think are a must for AutoGen to take off:
    1) How well (if at all) it can push to GitHub
    2) Iterating on the GitHub repo
    3) Embeddings and a vector DB like Supabase to store all prompts, so it does not deviate too much from the development of the coding project :/ (but maybe I missed that part)

  • @Q9i
    @Q9i 11 місяців тому +15

    This will be INSANE! Can't wait to see what all people make from this.

  • @luisortega3090
    @luisortega3090 10 місяців тому +4

    Is anyone else continuously getting 502 gateway errors when they finish configuring the pod in the web UI? I've tried it on two different machines while using both Mistral 7b and Dolphin Mistral 7b

  • @OriginalRaveParty
    @OriginalRaveParty 11 місяців тому +8

    I never quite figured out how to get multiple agents set up in VS Code, running Mistral 7B locally with AutoGen. I configured assistant with name "Coder", and then assistant2 with name "Checker", and tried to get Coder to pass all his work to Checker to verify it, but instead it all came back to me as User Proxy. Would be great to see a 5-agent example, like a little dev team with a CEO, concept designer, user interface guy, coder and code checker or something similar 👍

    • @luisortega3090
      @luisortega3090 11 місяців тому +1

      I believe the name of the assistant object and the assigned name have to be the same
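A minimal sketch of the Coder/Checker setup described in this thread, assuming the same kind of local-endpoint llm_config as above. Note that an agent's name is whatever string you pass to the constructor; it does not have to match the Python variable name. Routing turns through a group chat manager (rather than chatting with each assistant separately) is one way to keep the conversation from bouncing straight back to the user proxy.

```python
import autogen

# Placeholder config for a local OpenAI-compatible endpoint ("base_url" on newer versions).
llm_config = {"config_list": [{"model": "mistral-7b-instruct",
                               "api_base": "http://localhost:5001/v1",
                               "api_key": "sk-not-needed"}]}

coder = autogen.AssistantAgent(
    name="Coder", llm_config=llm_config,
    system_message="Write the code for the task, then ask Checker to review it.")
checker = autogen.AssistantAgent(
    name="Checker", llm_config=llm_config,
    system_message="Review Coder's work, point out bugs, and reply TERMINATE when it is correct.")
user_proxy = autogen.UserProxyAgent(
    name="user_proxy", human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding"})  # executes the code the agents produce

groupchat = autogen.GroupChat(agents=[user_proxy, coder, checker], messages=[], max_round=10)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Write and verify a function that reverses a string.")
```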

  • @zyxwvutsrqponmlkh
    @zyxwvutsrqponmlkh 11 місяців тому +2

    2:23 It is not completely uncensored; however, an effort was made, and with proper instructions you can mostly avoid the censorship it still tries to apply to its output. This was a censored model on which fine-tuning efforts were made to reverse the censorship. It was not 100% successful, but it was a good effort and the model is substantially more useful.

  • @xb3sox_
    @xb3sox_ 11 місяців тому +3

    Thx Matthew for this incredible work. I tried it many times with many models, and Mistral was the best and lightest option. I faced one issue with the context length limit, and I hope they have a good technique to solve it.

  • @frankismartinez
    @frankismartinez 11 місяців тому +1

    Worked sweet on my older Mac M1; was able to create a POC for a healthcare project… immediate industry value

  • @JonathanPohlner
    @JonathanPohlner 11 місяців тому +3

    Keep up the good work! Loving the AutoGen series! A wizard, an assistant, and a completer walk into a bar...

  • @JonathanStory
    @JonathanStory 11 місяців тому +4

    Really confused about actual pricing for running on RunPod. The posted prices ($/hr) don't mean anything to me because I'm clueless about how much cpu time would be used in the real world. Is it likely to be multiples of Chat-GPT4's $20/mo? If you spend a day coding with mistral, what does that set you back?

    • @DanielSCowser
      @DanielSCowser 11 місяців тому

      Following

    • @stereotyp9991
      @stereotyp9991 11 місяців тому +2

      Hi! The $20 per month for ChatGPT and the OpenAI API are two different things. If you want to use the OpenAI API for your AutoGen setup, you have to pay for every token, regardless of whether you are paying for ChatGPT or not. Peace

  • @GoldenDragonFromHills
    @GoldenDragonFromHills 11 місяців тому +6

    Waiting for the advanced code generation tutorial by autogen

    • @weeeBloom
      @weeeBloom 11 місяців тому

      yeah, me too!!

  • @kigas24
    @kigas24 11 місяців тому +4

    I've discovered autogen + langchain can work with Excel sheets. Autogen can read the columns and calculate financial ratios (I use it for finance). Really looking forward to the advanced autogen video.

    • @candogruyol
      @candogruyol 11 місяців тому +3

      Hi there! Is there any chance you can share this? I'm trying to do the same thing!

    • @ludoviclebleu
      @ludoviclebleu 11 місяців тому +1

      Please share that (bis).
      Thx in advance!

    • @kigas24
      @kigas24 11 місяців тому

      @@ludoviclebleu search "Using Langchain with Autogen", video by DLExplorers is the video I followed. Change the Excel file + autogen prompts for your use case.

    • @kigas24
      @kigas24 11 місяців тому

      @@candogruyol search "Using Langchain with Autogen", video by DLExplorers is the video I followed. Change the Excel file + autogen prompts for your use case.

    • @DanielSCowser
      @DanielSCowser 11 місяців тому

      Yes please!
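The thread above doesn't include code, but one hedged way to wire a spreadsheet into an AutoGen task is to summarise it with pandas and put the numbers in the task message. The file name, column names, and ratios below are hypothetical, and this is not necessarily how @kigas24 or the referenced video does it.

```python
import autogen
import pandas as pd

llm_config = {"config_list": [{"model": "mistral-7b-instruct",
                               "api_base": "http://localhost:5001/v1",  # or "base_url" on newer versions
                               "api_key": "sk-not-needed"}]}

# Hypothetical workbook and column names.
df = pd.read_excel("financials.xlsx")
summary = df[["revenue", "net_income", "total_assets"]].describe().to_string()

analyst = autogen.AssistantAgent(
    name="financial_analyst", llm_config=llm_config,
    system_message="Compute and explain financial ratios from the data you are given.")
user_proxy = autogen.UserProxyAgent(
    name="user_proxy", human_input_mode="NEVER", code_execution_config=False)

user_proxy.initiate_chat(
    analyst,
    message=f"Here is a summary of the spreadsheet:\n{summary}\n"
            "Calculate return on assets and net profit margin, and explain the results.")
```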

  • @VidarBrekke
    @VidarBrekke 11 місяців тому +4

    A few questions:
    1 - My scripts always fail because they generate more than the 8K token limit. Is there a way to prevent this from happening? Could ctags or another method be implemented?
    2 - Will AutoGen work with existing (large) codebases (I have a Django project I'm working on), and if so, how?

  • @OpenAITutor
    @OpenAITutor 11 місяців тому +2

    Just got it to work locally on my Windows box. Thank you for the video. A suggestion for folks: make sure you tell the bot in your system message which OS you are using; it likes to default to Linux. :) TextGen WebUI is a beast. LM Studio is too new.
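A short sketch of the tip above: specifying the OS in the system message so generated commands don't default to Linux. The model name and endpoint are placeholders.

```python
import autogen

llm_config = {"config_list": [{"model": "mistral-7b-instruct",
                               "api_base": "http://localhost:5001/v1",  # or "base_url" on newer versions
                               "api_key": "sk-not-needed"}]}

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
    # Telling the agent up front which OS it is targeting avoids Linux-only shell commands.
    system_message=("You are a helpful coding assistant. The user is on Windows, "
                    "so use PowerShell or cmd commands and Windows-style paths."))
```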

  • @jwia007
    @jwia007 11 місяців тому +3

    Your videos are amazing!

  • @fuba44
    @fuba44 11 місяців тому +2

    RunPod also offers an "LLM as a service" option where you pay as you go. Do you think you could cover that in a video sometime?

  • @mikewhite6561
    @mikewhite6561 11 місяців тому +1

    Can't wait for the Autogen advanced tutorial!

  • @sfco1299
    @sfco1299 11 місяців тому +2

    Is there a reason to use text gen webui instead of LM Studio for a local execution scenario?

  • @dudedkdk
    @dudedkdk 11 місяців тому +4

    Hi Matthew, great stuff. Could you maybe make a video on GDPR and data governance when using AutoGen? Is it safe to use if you have sensitive data?

  • @samlak7102
    @samlak7102 11 місяців тому +1

    We would love to see your projects, they must be interesting!

  • @CognitiveComputations
    @CognitiveComputations 11 місяців тому +2

    maybe i should train dolphin on falcon-180b too

  • @ReLogic888
    @ReLogic888 11 місяців тому +1

    Damn, why did I only find your channel now?
    Anyway, you do a great job with this channel. Very detailed, practical, and easy-to-follow explanations. 👍👍

  • @austinpatrick1871
    @austinpatrick1871 10 місяців тому

    A helpful use case I've found was finding ongoing clinical trials that a particular patient could be a good candidate for.
    This implementation was technically with AutoGPT (I haven't done it with AutoGen yet).

  • @mitchross2852
    @mitchross2852 11 місяців тому +1

    Do a real-world example of deploying LLMs on Kubernetes in a simulated production enterprise, where developers can connect to the LLM in the cluster.

  • @consig1iere294
    @consig1iere294 11 місяців тому +4

    In your previous video you mentioned LM Studio, where the GPU could be used for GGUFs. How can one use GPUs for GGUFs? Thanks!

    • @easolutionsllc
      @easolutionsllc 11 місяців тому

      I'm reading now that you should be using GPTQ models for running in VRAM (on the GPU).

  • @bitcode_
    @bitcode_ 11 місяців тому +1

    I used AutoGen with OpenAI's API key and it ran my usage up to $8 in less than 10 minutes.

  • @Martin-kr5nx
    @Martin-kr5nx 11 місяців тому +1

    Can you show us how to run agents that are a mix of OpenAI as well as open source?
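AutoGen lets each agent carry its own llm_config, so mixing providers is mostly a configuration question. Below is a hedged sketch with one agent on the OpenAI API and one on a local open-source model served through an OpenAI-compatible endpoint; model names, the endpoint URL, and the config key (api_base vs. base_url, depending on your pyautogen/openai versions) are placeholders.

```python
import autogen

openai_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_OPENAI_KEY"}]}
local_config = {"config_list": [{"model": "mistral-7b-instruct",
                                 "api_base": "http://localhost:5001/v1",
                                 "api_key": "sk-not-needed"}]}

planner = autogen.AssistantAgent(
    name="planner", llm_config=openai_config,
    system_message="Break the task into small, concrete steps.")
coder = autogen.AssistantAgent(
    name="coder", llm_config=local_config,
    system_message="Implement each step in Python.")
user_proxy = autogen.UserProxyAgent(
    name="user_proxy", human_input_mode="NEVER",
    code_execution_config={"work_dir": "mixed_agents"})

groupchat = autogen.GroupChat(agents=[user_proxy, planner, coder], messages=[], max_round=8)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=openai_config)

user_proxy.initiate_chat(manager, message="Build a small command-line todo app.")
```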

  • @heliosobsidian
    @heliosobsidian 11 місяців тому +1

    Hi Matthew! Thanks for sharing! May I know where I can check the AutoGen advanced tutorial? Is it in the Substack? Please let me know :) Have a nice day!!! 🤗

  • @joeyda3rd
    @joeyda3rd 11 місяців тому +1

    Build an app. Your personal project would be great. Need to see how to configure the agents. Code Llama please!

    • @PigOnPCIn4K
      @PigOnPCIn4K 10 місяців тому

      I just installed Code Llama on TextGen WebUI, but whenever I try running it with the Transformers model loader I get lots of traceback errors. Show stopped for me and Code Llama :(

  • @MrBowmanMakes
    @MrBowmanMakes 10 місяців тому

    you're on fire! I've learned so much about autogen from you and really appreciate your clear and focussed tutorials. Thanks Matthew!

  • @achille_king
    @achille_king 11 місяців тому +2

    Thank you for the great work and video again! I was wondering about the possibility of combining Aider with AutoGen. For example, could the developer agent use Aider when the prompt is given by the proxy agent?

  • @cristian15154
    @cristian15154 11 місяців тому +1

    Looks great, thanks.
    But wow, if I have to do it from scratch for local use, it's kind of complicated, because you will bump into many issues...

  • @workchannel6518
    @workchannel6518 10 місяців тому +4

    I followed the tutorial (several times, actually) and still cannot get port 5001 ready. This is needed to emulate OpenAI API. I added 5001 to the "Expose HTTP Ports (Max 10)" field in the RunPod configuration (also tried editing the pod later too), followed the instructions in the video carefully and always get "HTTP Service [Port 5001] Not Ready" in the Connection Options tab of the Connect dialog. HELP!

    • @CookWithShar
      @CookWithShar 10 місяців тому

      same here! Was seeing if anybody had this issue recently

    • @davidyoung5074
      @davidyoung5074 9 місяців тому

      It's no longer port 5001; 5000 is the new API port.

  • @leosoulas5897
    @leosoulas5897 11 місяців тому +1

    Is it possible to have agents from different LLMs talking to each other through AutoGen? For example, Mistral with OpenAI?

  • @TheAIAndy
    @TheAIAndy 11 місяців тому

    looking forward to the real usecase video thanks again Matthew!

  • @AncientSlugThrower
    @AncientSlugThrower 11 місяців тому +1

    TextGen WebUI is janky as hell in its presentation, but it is still my favorite interface for trying new models because it is so bleeding edge. Great video.

  • @isaiahthompkins6523
    @isaiahthompkins6523 11 місяців тому +1

    Anyone know the price differential on requests of this approach vs OpenAi?
    I can only imagine it’s much cheaper.

  • @OpenAITutor
    @OpenAITutor 11 місяців тому +1

    Kind of sad that local LLMs with AutoGen are not really ready for primetime. I hope they get better. At the moment, we can barely even create the toy projects demonstrated with GPT-4.

  • @weeeBloom
    @weeeBloom 11 місяців тому +2

    Thanks for your tutorials, they are amazing. Please, I would really like to see more AutoGen tutorials covering the best use cases!!!

  • @phillip_jacobs
    @phillip_jacobs 10 місяців тому

    Super keen for that Advanced AG Tutorial!

  • @marc1190
    @marc1190 11 місяців тому +1

    Great content, concise and gets to the point

  • @J3R3MI6
    @J3R3MI6 11 місяців тому +1

    Dude yes.. thanks Matt 🙏🏽💎

  • @BlayneOliver
    @BlayneOliver 11 місяців тому +1

    Would you consider doing a video about dark-web-trained LLMs? I know of DarkBERT so far, but no others... I'd love to see that level of uncensored model available.

  • @LAVolAndy
    @LAVolAndy 11 місяців тому +1

    What are the differences between RunPod and LMStudio? Why did you go away from LMStudio?

  • @peterm9893
    @peterm9893 11 місяців тому +2

    It has missing functions though... FastChat does better in terms of the API, but has issues with CPU offloading.

  • @omountassir
    @omountassir 10 місяців тому

    We would love to see your personal project on AutoGen! Mine is about leveraging AutoGen to craft an AI-driven intelligent solution for optimized pharmaceutical inventory management. I think it's certainly too ambitious; I would love your advice.

  • @Almsoo7
    @Almsoo7 11 місяців тому +1

    How do I get my GPU to work instead of the CPU? It is taking very long to run my code, and I can see from Task Manager that only my CPU is working while the GPU is idle.

  • @snuwan
    @snuwan 10 місяців тому +3

    I do not know if there was a change in the template or something, but I followed this video and another video exactly and could not get port 5001 working. Then I asked on the RunPod Discord, and they told me to add an environment variable called UI_ARGS to the pod with a value of --extensions openai --api-port 5001.
    After that it worked. Hopefully this will help those who run into the same issue.

    • @jakeburton8374
      @jakeburton8374 9 місяців тому

      where do you add that environment variable?

    • @napszemuvegesteknos
      @napszemuvegesteknos 9 місяців тому

      not all heroes wear capes, thank you!

    • @xugefu
      @xugefu 7 місяців тому

      Under "Expose HTTP Ports" there is an environment variable setting.
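For reference, once the pod exposes the OpenAI-compatible API (port 5001 in the video, 5000 on newer templates, per the replies above), pointing AutoGen at it is just a config change. The pod ID below is a placeholder, and whether the key is api_base or base_url depends on your pyautogen/openai versions.

```python
import autogen

config_list = [{
    "model": "mistral-7b-instruct",                               # whatever model TextGen WebUI has loaded
    "api_base": "https://YOUR-POD-ID-5001.proxy.runpod.net/v1",   # RunPod's HTTP proxy for the exposed port
    "api_key": "sk-not-needed",                                   # placeholder; the endpoint has no auth
}]

assistant = autogen.AssistantAgent(name="assistant", llm_config={"config_list": config_list})
user_proxy = autogen.UserProxyAgent(name="user_proxy", human_input_mode="NEVER",
                                    code_execution_config={"work_dir": "coding"})

user_proxy.initiate_chat(assistant, message="Print the numbers 1 to 10 in Python.")
```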

  • @MatichekYoutube
    @MatichekYoutube 10 місяців тому

    The only thing missing is adding LangChain or LlamaIndex to talk to a database (CSVs, PDFs, etc.).

  • @Mr.Laffin
    @Mr.Laffin 11 місяців тому +1

    You should make a video about visual Copilot

  • @stevenbaert1974
    @stevenbaert1974 11 місяців тому +3

    Why use RunPod when you demonstrated that LM Studio can do this locally free of charge? Also, Mistral 7B can easily run locally, which means you can run AutoGen endlessly and for free. Am I missing something here? What's different from your previous video besides RunPod? I was hoping to see LM Studio with Mistral 7B and AutoGen working locally on code, but I miss the point of this video, where LM Studio is replaced by RunPod (which is paid by the hour). What's the message, and why should we pay for RunPod?

    • @ludoviclebleu
      @ludoviclebleu 11 місяців тому +2

      LM Studio has hardware requirements that not all of us can meet locally :)

    • @ddwinhzy
      @ddwinhzy 11 місяців тому +1

      There are still a lot of local computers that are not well equipped

  • @saintsscholars8231
    @saintsscholars8231 11 місяців тому +1

    I picked up a second-hand MBP, mid-2010, Core i7.
    Will this run AutoGen locally?

  • @ozsibe
    @ozsibe 10 місяців тому

    For a real-world use case: I personally need to map my product categories to the Google product category structure. Would love to be able to set that up on my PC.

  • @samsontan1141
    @samsontan1141 9 місяців тому +1

    Been following your instruction for so long. Do we know what is wrong if the port 5001 is never ready?

  • @RadeckWolanin
    @RadeckWolanin 11 місяців тому +3

    Hey Matt, unfortunately the Dolphin model quickly hits the max context window of 2048 tokens. I've tried a few different ones (Mistral-7B) with varying success. Let us know which model works best, and thanks for the great content!

    • @matthew_berman
      @matthew_berman  11 місяців тому

      MemGPT might be a good solution to the context window issues. I'm making a video about it today.

    • @isitanos
      @isitanos 10 місяців тому

      Looks like Autogen is already adding support for memgpt-enabled agents. I guess the limitation now is finding open source models that support function calling correctly. There are a couple up on Huggingface that claim to do so.

  • @berkesimus
    @berkesimus 11 місяців тому

    These are great! Thanks for showing us the way mate

  • @itlackey1920
    @itlackey1920 11 місяців тому

    Very cool, thanks for sharing! I am going to be trying this asap 🎉

  • @neokortexproductions3311
    @neokortexproductions3311 11 місяців тому

    Thanks for the info, subscribed too.
    That opening shot transition was sick, where is that from?

  • @forcanadaru
    @forcanadaru 11 місяців тому

    You are absolutely outstanding, Mat!

  • @attilavass6935
    @attilavass6935 11 місяців тому +1

    Can Autogen be used with LLMs from Huggingface? Like in Langchain...

  • @wwpin
    @wwpin 11 місяців тому

    Man, you seem younger every day that passes. Keep up the great work. LOVE

  • @stickmanland
    @stickmanland 11 місяців тому +1

    Three cheers for AutoGen!!

  • @RemessOfficial
    @RemessOfficial 11 місяців тому

    this series is great keep it up! thank you!

  • @kobyshoshan
    @kobyshoshan 10 місяців тому

    I also see the same problem of port 5001 not being exposed on RunPod pods. Maybe they block it. I tried the instructions many times and always get "port not ready".

  • @skunkpaste2
    @skunkpaste2 11 місяців тому +1

    Can you start mentioning the associated costs in the videos you do, please? I.e., RunPod cost me $xx.xx to run through this demo.

  • @takasurazeem
    @takasurazeem 7 місяців тому

    Getting this error while testing the Mistral model:
    Error occurred while processing message: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
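That error usually means the OpenAI client was constructed without any key, even though a local model doesn't need one. A common workaround (an assumption about this setup, not a guaranteed fix) is to put a dummy key in the AutoGen config or export OPENAI_API_KEY before running. The endpoint URL and model name below are placeholders.

```python
# Dummy key for a local TextGen WebUI endpoint.
config_list = [{
    "model": "mistral-7b-instruct",
    "base_url": "http://localhost:5001/v1",   # "api_base" on older pyautogen versions
    "api_key": "not-needed",                  # the local server ignores it, but the client insists on a value
}]
# Alternatively, set it in the shell before launching: export OPENAI_API_KEY=not-needed
```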

  • @ademrahal3389
    @ademrahal3389 11 місяців тому +1

    Hey, great video. I'm actually starting to work with AutoGen and local LLMs, but there is a big issue with using tools while doing so. Do you have any solution for this? My first thought right now is to pass the tools as arguments and, instead of running a simple generate on the models, run an agent chain with LangChain, but it's not a clean solution.

  • @henrychien9177
    @henrychien9177 11 місяців тому +1

    So we don't need an API anymore? I mean, can we run without an API, or do we just use the API generated by this platform?

    • @matthew_berman
      @matthew_berman  11 місяців тому

      You don’t need to pay for ChatGPT anymore, either way you need an API.

  • @dflaggvt
    @dflaggvt 10 місяців тому +1

    It seems like the template is failing to run the OpenAI-compatible API on port 5001.

  • @kel78v2
    @kel78v2 5 місяців тому

    Could you do more videos on autogen studio?

  • @KorathWright
    @KorathWright 10 місяців тому

    Autogen advanced video please!

  • @nufh
    @nufh 11 місяців тому +1

    Hi, I started venturing into the AI world almost 2 weeks ago, and I am very fascinated by LLMs, ChatGPT, etc. I still struggle to learn Python and to understand what Docker is. Do you have any suggestions on where I should start?

    • @raed3620
      @raed3620 10 місяців тому

      Ask chatgpt.

  • @ddwinhzy
    @ddwinhzy 10 місяців тому

    I got to know RunPod because of AutoGen and have been using it.

  • @huanvo6799
    @huanvo6799 10 місяців тому

    How come you did not have to put the `llm_config` in the `UserProxyAgent`? Which llm is it using by default?
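A hedged note on this question: in the typical AutoGen examples, the UserProxyAgent only relays messages and executes code, so it never calls an LLM itself; you can omit llm_config or pass llm_config=False to make that explicit.

```python
import autogen

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    llm_config=False,                              # no model: this agent never generates LLM replies itself
    code_execution_config={"work_dir": "coding"},  # it just runs the code the assistant writes
)
```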

  • @geofftsjy
    @geofftsjy 10 місяців тому

    Can you show how to do this with RunPod Serverless endpoints?
    Also, is there a way to secure the endpoint and set your own API token?

  • @gauravtewari233
    @gauravtewari233 8 місяців тому

    Great video again. I am having a small issue with the current setup. I tried everything mentioned in the video and TextGen WebUI is working fine, but whenever I try to connect to it via the API it gives no response; AutoGen runs fine between agents, but the response is 'None'.

  • @anthanh1921
    @anthanh1921 10 місяців тому

    I got a context window size limit error (~2K). Is there any setting in the TextGen WebUI that can overcome this?

  • @paolojoya3847
    @paolojoya3847 7 місяців тому

    Hello, would anyone understand why I can't start the service? I am new to this and I can't find a way to solve it. I need to use port 5001.
    "Is your service running? Check your logs or read the README"

  • @dhrumil5977
    @dhrumil5977 11 місяців тому +3

    Yehahaha you heard me lol but using runpod isn't free lol 😔

    • @ryzikx
      @ryzikx 11 місяців тому

      rip

    • @ryzikx
      @ryzikx 11 місяців тому

      but it does cost some money to use the biggest gpus

    • @dhrumil5977
      @dhrumil5977 11 місяців тому

      @@ryzikx how about using petals with autogen

  • @moneyclip8772
    @moneyclip8772 10 місяців тому

    So, could we do this with petals too?

  • @RequiemAcapella
    @RequiemAcapella 11 місяців тому

    Ayy, just in time! AutoGen Hyype!

  • @mengli7441
    @mengli7441 9 місяців тому

    Is it possible to use Autogen with open source models that are hosted on AWS EC2 instances?

  • @rastinder
    @rastinder 11 місяців тому +1

    Do we need cuda drivers?

  • @truliapro7112
    @truliapro7112 10 місяців тому

    @matthew_berman - Can we use AWS SageMaker foundation models with AutoGen?

  • @michaelberg7201
    @michaelberg7201 10 місяців тому

    AutoGen is a really interesting project, especially when you don't have to pay the OpenAI fees. But the relatively small context windows for LLMs (all of them, really) are frankly a showstopper for using AutoGen. I work on a project that consists of more than 2000 Java source files, and I don't see any way to use AutoGen to develop or iterate on projects of this size.

    • @threepe0
      @threepe0 10 місяців тому

      MemGPT solves that issue.

  • @mohamedbadawy7473
    @mohamedbadawy7473 10 місяців тому

    Can TextGen WebUI run with multiple AutoGen agents?

  • @BunniesAI
    @BunniesAI 11 місяців тому

    This is awesome 😍

  • @mdfarhananis8950
    @mdfarhananis8950 11 місяців тому

    This is really good

  • @spinettp
    @spinettp 11 місяців тому

    Thank you so much.

  • @justindressler5992
    @justindressler5992 11 місяців тому

    The A100 and 4090 are twice as fast as the A6000 for inference.

  • @w.balazs6424
    @w.balazs6424 10 місяців тому

    So do I understand correctly that with this method I don't have to pay for the ChatGPT API at all?

  • @dewijones92
    @dewijones92 11 місяців тому

    more please

  • @antigravityinc
    @antigravityinc 8 місяців тому

    Where’s the “Advanced Tutorial” for autogen? You’ve mentioned that it’s coming for a while now, but I’m not sure it exists? One of the main reasons i follow your channel. Thanks!

  • @peterm9893
    @peterm9893 11 місяців тому

    that's what I been doing for almost 3 weeks now :D

  • @maxeduai
    @maxeduai 11 місяців тому

    I can't install AutoGen or import it on Linux or Windows; I'm so sad...

  • @EaglEyesAI
    @EaglEyesAI 11 місяців тому +1

    Didn't you just expose the RunPod endpoint publicly with no API key protection?

  • @rollingmaster7708
    @rollingmaster7708 10 місяців тому

    How do you get internet access?