Polymir
Using Local LLMs for Object Detection!
Use your local LLM for object detection in images!
Join the Discord: discord.com/invite/4DyMcRbK4G
Library Used:
github.com/emirsahin1/llm-axe
Example Code:
github.com/emirsahin1/llm-axe/blob/main/examples/ex_object_detector.py
How to set up your own LLM with llm-axe:
github.com/emirsahin1/llm-axe/blob/main/examples/ex_llm_setup.py
Ollama:
ollama.com/library/llava
Views: 515
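At a high level, the approach in this video is to send the image to a vision model and ask it to reply with structured detections. A minimal sketch of that idea using the Ollama /api/chat payload shape (the prompt wording and the JSON schema here are illustrative assumptions, not llm-axe's actual internals; see the linked example for the real code):

```python
import base64
import json

def build_detection_request(image_bytes, labels):
    """Build an Ollama /api/chat payload asking a vision model (llava)
    to locate the given labels and reply with JSON only."""
    prompt = (
        "Detect the following objects in the image: "
        + ", ".join(labels)
        + ". Respond with JSON only, e.g. "
        + '{"objects": [{"label": "...", "box": [x1, y1, x2, y2]}]}'
    )
    return {
        "model": "llava",
        "stream": False,
        "messages": [{
            "role": "user",
            "content": prompt,
            # Ollama accepts base64-encoded images alongside the prompt
            "images": [base64.b64encode(image_bytes).decode("ascii")],
        }],
    }

def parse_detections(model_reply):
    """Extract the JSON object from the model's reply; vision models
    sometimes wrap the JSON in prose, so fall back to [] on failure."""
    try:
        start = model_reply.index("{")
        end = model_reply.rindex("}") + 1
        return json.loads(model_reply[start:end]).get("objects", [])
    except ValueError:
        return []
```

The lenient parsing step matters in practice: small local vision models often add chatty text around the JSON, so extracting the outermost braces before `json.loads` avoids spurious failures.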

Videos

Give Internet Access to your Local LLM (Python, Ollama, Local LLM)
4K views · 3 months ago
Give your local LLM model internet access using Python. The LLM will be able to use the internet to find information relevant to the user's questions. Join the Discord: discord.com/invite/4DyMcRbK4G Library Used: github.com/emirsahin1/llm-axe
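The idea behind giving a local LLM "internet access" is retrieval-augmented prompting: fetch text relevant to the question, then prepend it to the prompt so the model answers from fresh information. A minimal sketch of the prompt-assembly step (the search/fetch step is left to whatever backend you use; llm-axe's own online agent handles that part for you):

```python
def build_online_prompt(question, snippets):
    """Assemble a prompt that grounds the LLM's answer in web snippets
    retrieved by an external search/fetch step."""
    context = "\n\n".join(
        f"Source {i + 1}: {s}" for i, s in enumerate(snippets)
    )
    return (
        "Use the following web results to answer the question. "
        "If they are not relevant, say you don't know.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
```

The "say you don't know" instruction is a common guard against the model hallucinating when the retrieved snippets miss the topic.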
How to Make a Local Function Calling LLM (ollama, local llm)
1.9K views · 3 months ago
Make a completely local function calling LLM using a local or external model of your choice. Function callers are LLM agents that pick the best function to call for a given prompt. Join the Discord: discord.com/invite/4DyMcRbK4G Library Used: github.com/emirsahin1/llm-axe Code used in the video: github.com/emirsahin1/llm-axe/blob/main/examples/ex_function_caller.py
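A function caller of the kind described above typically works in two steps: describe the candidate functions to the model in the system prompt, then parse the model's JSON choice and dispatch it. A hedged sketch of that scaffolding (the registry format and reply schema are illustrative; llm-axe's actual function caller is in the linked example):

```python
import json

def describe_functions(functions):
    """List each candidate function and its docstring for the system prompt."""
    lines = [f"- {f.__name__}: {f.__doc__}" for f in functions]
    return (
        "Pick the best function for the user's request and reply with "
        'JSON only: {"function": "<name>", "args": {...}}\n'
        "Available functions:\n" + "\n".join(lines)
    )

def dispatch(model_reply, functions):
    """Parse the model's JSON choice and call the matching function."""
    choice = json.loads(model_reply)
    table = {f.__name__: f for f in functions}
    return table[choice["function"]](**choice.get("args", {}))
```

In a real setup the `model_reply` string comes from the LLM; constraining it to JSON-only output (and validating before dispatch) is what makes small local models workable for this.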
Hue Lynx: Desktop Controller for Lifx Lights | Screen Color Match + Music Match
113 views · 6 months ago
Showcase of Hue Lynx, an unofficial desktop controller app for Lifx Brand Lights. Developed by me. It allows you to match the color of your screen to the lights. It also features music matching which will try to match the lights to the beat of your computer's audio. You can download it at the GitHub page below: github.com/emirsahin1/HueLynx
VR Arch Viz - Dynamic Materials - UE4
120 views · 1 year ago
Dynamic materials for Arch Viz in VR. Created in UE4. Looking to hire a VR Developer? Contact me at: www.lumenviz.com/ Instagram: polymir_
YoloV7 Weapon Detection
439 views · 1 year ago
github.com/emirsahin1/YoloV7-Weapon-Detection-Model
Istanbul Street Night Loop - 3D Animation
132 views · 2 years ago
A nighttime 3D visualization of a generic street in Istanbul. Created in Blender, rendered with Eevee (viewport rendering). Feel free to use it for any non-commercial use cases. Contact me at polymir11@gmail.com for commercial use cases. Attributions - Window model by SusanKing. Re-textured by me. creativecommons.org/licenses/... Fuzebox textures by CrazyScans. creativecommons.org/licenses/... ...
Istanbul Street Night Loop - 3D Animation (With Music)
479 views · 2 years ago
A nighttime 3D visualization of a generic street in Istanbul. A clean looped version without music can be found at ua-cam.com/video/XK4flmeQNzc/v-deo.html Created in Blender, rendered with Eevee (viewport rendering). Feel free to use it for any non-commercial use cases. Contact me at polymir11@gmail.com for commercial use cases. Attributions - Window model by SusanKing. Re-textured by me. creat...

COMMENTS

  • @MrOnePieceRuffy
    @MrOnePieceRuffy 12 days ago

    Hello my friend, thank you for the helpful video! I learned what I was actually looking for. Can you do me a favour: open up OBS, click the three-dot button under your Mic/Aux, and choose Filters. Then press the + (plus) button in the left corner, choose "Noise Suppression", and create one with its default settings. Thank you very much <3

  • @voltexripper8367
    @voltexripper8367 18 days ago

    very good video bro

  • @jumpersfilmedinvr
    @jumpersfilmedinvr 29 days ago

    How much more advanced can we make a local LLM than the censored versions available publicly?

    • @polymir9053
      @polymir9053 27 days ago

      I think a lot of the functionality of platforms like ChatGPT is quite easily replicated with function calling and agents. The hard part is usually being able to locally run a large enough LLM that can reliably follow the system prompts. You can only do so much with wrapper code; eventually it all boils down to how good the LLM you're using is. If you're GPU-poor like me, I'd recommend looking into Groq cloud; they have quite generous amounts of free API access to a lot of different LLM models.

    • @jumpersfilmedinvr
      @jumpersfilmedinvr 27 days ago

      @@polymir9053 Yeah dude, that might work out. There have got to be tons of ways to creatively use cloud space. Imagine how many agents people could link together in unfathomable networks. Open source and jailbroken.

  • @Test18025
    @Test18025 1 month ago

    Have you measured how many FPS? And on what machine are you running llava:7b?

    • @polymir9053
      @polymir9053 1 month ago

      I have not measured it, but I would say this is not usable for real-time purposes. You're not likely to get anything even close to 1 FPS using this unless you have a really good setup for running LLMs. I'm running it on a 1080 Ti.

    • @Test18025
      @Test18025 1 month ago

      @@polymir9053 right, thanks for your insight

  • @themax2go
    @themax2go 1 month ago

    "Join the Discord" => "invalid invite" 😒

    • @polymir9053
      @polymir9053 1 month ago

      Sorry about that, try this: discord.com/invite/4DyMcRbK4G

  • @Arunak13203
    @Arunak13203 1 month ago

    Thank you for sharing valuable content. Keep posting more similar content.

  • @dr.mikeybee
    @dr.mikeybee 2 months ago

    You're coding the same time I'm watching -- after midnight.

  • @_areck_
    @_areck_ 2 months ago

    Amazing video, extremely underrated channel. Good work. I needed this to complete my program for an assistant model using ollama that can create files, run files, edit the contents of files, search the web, and maintain a persistent memory. This was the second-to-last thing I needed; now I just have to finish the run-files part.

  • @mikew2883
    @mikew2883 2 months ago

    Great stuff! 👍

  • @edengate1
    @edengate1 2 months ago

    Is it better to open various tabs of the same LLM if I want to ask about different subjects, like we do in ChatGPT? Or can I use a single chat for everything I want to do?

    • @polymir9053
      @polymir9053 2 months ago

      You can use a single agent for multiple subjects. While agents do keep track of history, chat history is only used if it is passed in along with the question.
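In other words, history is opt-in: you keep your own message list and hand it back with each question. A sketch of that pattern in plain Python (the `ask_fn` callable stands in for whatever agent or model call you use; it is a hypothetical name, not llm-axe's API):

```python
def ask_with_history(ask_fn, history, question):
    """Append the question to the running history, ask the model with
    the full history, and record the reply so later turns can see it."""
    history.append({"role": "user", "content": question})
    reply = ask_fn(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```

Because the caller owns `history`, it can also choose to drop it (fresh context per subject) or keep separate lists per topic, which is what multiple "tabs" effectively do.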

  • @brenden_Li
    @brenden_Li 2 months ago

    What app are you executing the code with?

    • @polymir9053
      @polymir9053 2 months ago

      It's just Python and Ollama.

  • @enderfun2852
    @enderfun2852 2 months ago

    Just a quick question: llm-axe is just a client for accessing all the AIs, right? So if I want to use it, I first need to start a server with the actual model, and only then will this code work? Sorry, I'm really dumb; possibly I misunderstood the point of this API. EDIT: I ran some tests with LM Studio, and it turns out it's that simple. I'm currently using Transformers and would prefer to stick with it. Is there a way to make llm-axe and Transformers play together? Would love to see a video on this.

    • @polymir9053
      @polymir9053 2 months ago

      Basically, llm-axe is a toolkit that can be used with pretty much any llm. It will not host a model itself, so yes, you need to have a model up and running to use llm-axe. If you'd like to use something other than ollama for your llm setup, please see this example on how that can be done: github.com/emirsahin1/llm-axe/blob/main/examples/ex_llm_setup.py
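The linked setup example shows that llm-axe does not care how your model is hosted: you wrap your backend in a small adapter object that the agents can call. A hedged sketch of that shape (the exact method name and signature llm-axe expects are defined in ex_llm_setup.py; treat this adapter as an assumption and check the example before relying on it):

```python
class CustomLLM:
    """Adapter around any locally hosted model (Transformers, LM Studio,
    an HTTP endpoint, ...) so agent code can stay model-agnostic."""

    def __init__(self, generate_fn):
        # generate_fn: your backend's text-generation call, taking a
        # list of {"role": ..., "content": ...} messages
        self._generate = generate_fn

    def ask(self, prompts, **kwargs):
        """Forward the chat messages to the backend and return its text."""
        return self._generate(prompts)
```

With a wrapper like this, switching from Ollama to Transformers is just a different `generate_fn`; the agent code on top does not change.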

  • @fakhrun4038
    @fakhrun4038 2 months ago

    Can you make a video for the web ui?

    • @polymir9053
      @polymir9053 2 months ago

      There is no web UI for this, but with some coding you could easily tie this up to any existing open-source chat UI.

  • @maths_physique_informatiqu2925
    @maths_physique_informatiqu2925 3 months ago

    Is there a solution for prompts that are not related to the functions? Currently it shows an error. I actually need a model that answers the user's prompt whether or not it needs to call a function.

    • @polymir9053
      @polymir9053 3 months ago

      I'm not sure if I understand your question. But you could always default to a normal agent if the function caller fails to answer. Feel free to join the discord in the description if you want to chat about this.

  • @paleostressmanagement
    @paleostressmanagement 3 months ago

    Can this be used with the Ollama API? If so, how?

    • @polymir9053
      @polymir9053 3 months ago

      Yes, I'm using Ollama in the video. It has built-in support for the Ollama API through the OllamaChat class. See this example: github.com/emirsahin1/llm-axe/blob/main/examples/ex_online_agent.py

    • @paleostressmanagement
      @paleostressmanagement 3 months ago

      @@polymir9053 Thanks! But I am still a bit confused as to how to use this with the Ollama API example for a chat completion?
      curl localhost:11434/api/chat -d '{ "model": "llama3", "messages": [ { "role": "user", "content": "why is the sky blue?" } ], "stream": false }'
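For reference, that curl call maps one-to-one onto a request body you can build yourself. A sketch using only the Python standard library (it assumes an Ollama server on localhost:11434, so the send step is shown commented out rather than executed):

```python
import json
import urllib.request

def chat_request(model, content, host="http://localhost:11434"):
    """Build the same /api/chat request the curl command sends."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "stream": False,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{host}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# To actually send it (requires a running Ollama server):
# with urllib.request.urlopen(chat_request("llama3", "why is the sky blue?")) as r:
#     print(json.load(r)["message"]["content"])
```

With llm-axe, the OllamaChat class wraps this same endpoint for you, so you normally would not build the request by hand.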

  • @MartinBlaha
    @MartinBlaha 3 months ago

    Thank you - well explained. I'm trying to detect doors and windows and then estimate their sizes, currently by fine-tuning YOLO for segmentation.

    • @polymir9053
      @polymir9053 3 months ago

      Thanks! Yes, YOLO is definitely the right tool for that. But if you ever need to detect whether or not a window/door is open, for example, I could see LLMs being useful for that.

  • @madhudson1
    @madhudson1 3 months ago

    that's ok for very simple object detection

    • @polymir9053
      @polymir9053 3 months ago

      It's simple for now, but as open-source vision models get better, we can expect more reliable and complete results. ChatGPT vision, for example, already seems more capable than most zero-shot detectors.

  • @ViralComparison
    @ViralComparison 3 months ago

    perfect! need more videos like these

  • @polymir9053
    @polymir9053 3 months ago

    CODE IN EXAMPLES FOLDER: github.com/emirsahin1/llm-axe

  • @irkedoff
    @irkedoff 3 months ago

    💜

  • @Oxxygen_io
    @Oxxygen_io 3 months ago

    This was great, thanks for the introduction. I'd love to see a deep dive that adds this on as an extra, where you get a command prompt with internet search - like running "ollama, but now with internet".

    • @polymir9053
      @polymir9053 3 months ago

      Thanks! I appreciate the suggestion. There is a small demo showing how this can be used to make a command-prompt chat where you can chat with the online agent. Here is the link: github.com/emirsahin1/llm-axe/blob/main/examples/ex_online_chat_demo.py

  • @OffsecNinja
    @OffsecNinja 3 months ago

    Thank you! I've been searching for a solution, and everything online is either irrelevant or consists of 300+ lines of code that are hard to understand. This library makes things much easier. Thanks for sharing! :)

    • @polymir9053
      @polymir9053 3 months ago

      Glad to hear that you found it useful!

  • @vbderrico
    @vbderrico 1 year ago

    Which setup are you using? Great job btw

    • @polymir9053
      @polymir9053 1 year ago

      Thanks! It's a custom component I made. It stores a preset list of materials and assigns the parent mesh the current material when clicked. I can make a tutorial if there is enough interest.

  • @MobBeatzMusicProduction
    @MobBeatzMusicProduction 1 year ago

    Omg 🤯

  • @Geniy_B_Kvadrate_XD_ua
    @Geniy_B_Kvadrate_XD_ua 1 year ago

    It is definitely real😂

  • @velkatodorova2347
    @velkatodorova2347 1 year ago

    It's not real

  • @metegumus9829
    @metegumus9829 1 year ago

    The fakest shish I've ever seen

  • @IsqoBeats
    @IsqoBeats 2 years ago

    Nice work (Turkish: "Güzel çalışma")

  • @sagejpc1175
    @sagejpc1175 2 years ago

    What is the voice saying?

    • @polymir9053
      @polymir9053 2 years ago

      It's not anything worth translating, just a funny voice line from a video :).

  • @pyromaster9095
    @pyromaster9095 2 years ago

    pretty cool^^