Prompt Engineer
India
Joined 15 Apr 2023
Join the AI, AGI, ASI Revolution!
Stay updated with all the latest advancements in the world of Artificial Intelligence. This channel is your ultimate destination for the most recent developments and insights, all at your fingertips.
Join us to keep a vigilant eye on the cutting-edge of technology and be part of the AI revolution!
Topics: Latest Trends on AI, LLMs, OpenAI, Anthropic, Google, Coding, Python, AutoGen, MemGPT, AutoGPT, API Integration, RunPods, Devika, Salad GPU, Microsoft Azure and much more.
Huggingface opens doors for Ollama with this new Integration
In this video we explore the different Hugging Face libraries and models in the GGUF format, and how we can use them inside Ollama. Ollama is a great service for running models on your local system, but there are many LLMs that have not yet appeared on the Ollama model page. With this GGUF-format integration on Hugging Face itself, which supports Ollama, you will be able to explore thousands of different models. It's pretty amazing.
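As a minimal sketch of the integration described above: Ollama accepts model references of the form hf.co/{user}/{repo}[:{quant}] pointing at a GGUF repository on the Hugging Face Hub, and exposes a local REST endpoint at /api/generate. The repo name and quant tag below are illustrative, not an endorsement of a specific model.

```python
import json
import urllib.request

def hf_model_ref(user: str, repo: str, quant: str = "") -> str:
    """Build the model reference Ollama accepts for a GGUF repo on the HF Hub."""
    ref = f"hf.co/{user}/{repo}"
    return f"{ref}:{quant}" if quant else ref

def generate(model: str, prompt: str, host: str = "http://localhost:11434") -> str:
    """Send a one-shot prompt to a locally running Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    # Hypothetical repo/quant; substitute any GGUF repo you like.
    model = hf_model_ref("bartowski", "Llama-3.2-1B-Instruct-GGUF", "Q4_K_M")
    print(model)
    # With an Ollama server running you would then call:
    #   generate(model, "Say hello in one word.")
```

The same reference works on the command line as `ollama run hf.co/user/repo:quant`.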
-------------------------------------------------------------------------------------------------------------
Learn More:
Try Out Cloud GPUs on Novita AI: fas.st/t/EvuzAkeX
-------------------------------------------------------------------------------------------------------------
Links:
Blog Post: huggingface.co/docs/hub/en/ollama
Local App Settings: huggingface.co/settings/local-apps
Ollama Template Information: github.com/ollama/ollama/blob/main/docs/template.md
Ollama Library: ollama.com/library?sort=newest
#AI #Ollama #GGUF #huggingface #localappsetting #Ollamaandhuggingface #huggingfacewithOllama #GGUFFormat #LLMs
CHANNEL LINKS:
🕵️♀️ Join my Patreon for keeping up with the updates: www.patreon.com/PromptEngineer975
☕ Buy me a coffee: ko-fi.com/promptengineer
📞 Get on a Call with me at $125 Calendly: calendly.com/prompt-engineer48/call
💀 GitHub Profile: github.com/PromptEngineer48
🔖 Twitter Profile: prompt48
Other videos that you would love:
ua-cam.com/video/WNYV8rk6wJw/v-deo.html
ua-cam.com/video/IZfgbOgeXOA/v-deo.html
ua-cam.com/video/88jbPOmBOaU/v-deo.html
ua-cam.com/video/9UrWEUIiZ5c/v-deo.html
ua-cam.com/video/lhQ8ixnYO2Y/v-deo.html
ua-cam.com/video/QTv3DQ1tY6I/v-deo.html
ua-cam.com/video/gcMdzGrDLlw/v-deo.html
ua-cam.com/video/GKr5URJvNDQ/v-deo.html
Views: 1,579
Videos
This AI can Create Music Perfectly Synced to Videos ! #MuVi
Views 596 • 9 hours ago
Create seamless music for your videos. MuVi’s groundbreaking technology is set to redefine audio-visual content creation, enhancing immersion and cohesion between music and visuals. Welcome to the demonstration of MuVi, an innovative framework for generating music that seamlessly aligns with video content. Through a combination of visual feature extraction and advanced music generation, MuVi pr...
The Future of Multimodal AI | Open-Source Mixture-of-Experts Model #aria
Views 399 • 9 hours ago
In this video, we explore ARIA, a revolutionary open-source multimodal AI model by Rhymes AI. ARIA seamlessly integrates text, images, video, and code inputs, outperforming leading AI models like GPT-4 and Pixtral-12B on various benchmarks. Learn how this cutting-edge Mixture-of-Experts model works, its four-stage training process, and its outstanding performance in tasks like long video unders...
New Mistral Models are too Good: Ministral 3B and 8B | Quality Testing on Virtual GPUs
Views 433 • 14 hours ago
In this video, we are going to test out the Ministral 3B and Ministral 8B models: the best edge models from Mistral AI. Learn More: Try out Ministral on Cloud GPUs on Novita AI: fas.st/t/EvuzAkeX On the first anniversary of Mistral 7B, Mistral AI proudly launches Ministral 3B and Ministral 8B, two groundbreaking models designed for on-device computing and edge AI use cases. With 128k context length...
All in One LLM Hosting ⚡Solution free up your Time | Deploy your Apps easily
Views 300 • 19 hours ago
In this video, we’ll cover everything you need to know about hosting your LLMs. First, we'll explore how to use Novita's APIs to access their hosted models. Next, we'll dive into deploying GPU instances, showing you how to get started quickly using ready-made templates. Finally, we’ll look at the serverless option, where you pay as you go, allowing you to scale efficiently and reduce costs. Lea...
How to Get your LLMs to OBEY | Easiest Fine-tuning Interface for Total Control over your LLMs
Views 633 • 1 day ago
In this video, we will explore how to easily fine-tune LLaMA-3.2-1B-Instruct using a simple dataset to make it respond according to our preferences. We will utilize LLaMA Factory, which simplifies the fine-tuning process, all within a Gradio GUI Interface-it's truly amazing! Compared to ChatGLM's P-Tuning, LLaMA Factory's LoRA tuning offers up to 3.7 times faster training speeds with improved R...
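To make the fine-tuning setup above concrete: LLaMA Factory can consume a custom dataset as a JSON file of alpaca-style records (instruction / input / output per row). The records and filename below are hypothetical, a sketch of the preference-shaping data the video describes rather than its exact dataset.

```python
import json

# A tiny, hypothetical alpaca-style dataset: each record pairs an instruction
# (plus optional input context) with the response we want the model to learn.
records = [
    {"instruction": "Who created you?",
     "input": "",
     "output": "I was fine-tuned with LLaMA Factory on a custom dataset."},
    {"instruction": "Summarize in one word.",
     "input": "The quick brown fox jumps over the lazy dog.",
     "output": "Pangram."},
]

def save_dataset(path: str, rows: list) -> None:
    """Write the records as a JSON array, the layout LLaMA Factory expects."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(rows, f, ensure_ascii=False, indent=2)

if __name__ == "__main__":
    save_dataset("my_dataset.json", records)
```

You would then register the file in LLaMA Factory's dataset configuration and select it from the Gradio UI.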
OpenAI's SWARM is the Ultimate Multi-agent Framework | Run using Local LLMs or OpenAI API Keys
Views 2.3K • 1 day ago
Dive into the exciting world of multi-agent AI systems with OpenAI's experimental Swarm framework! 🤖🚀 In this video, we'll walk you through: ✅ What Swarm is and why it matters ✅ Key features and concepts (Agents, handoffs, functions) ✅ How to set up and use Swarm ✅ Real-world examples and use cases ✅ Tips for building your own multi-agent systems The OpenAI Cookbook presents a comprehensive gui...
Smart AI Flight Recommendation Systems | Full Stack Code
Views 366 • 1 day ago
This video explains how to create a flight recommendation system using Firefunction-v2, SerpApi, FastAPI, and Next.js. This system is designed to take a user's travel preferences and provide tailored flight suggestions Building the System This tutorial uses the following technologies: ● FastAPI - A high-performance framework for building APIs in Python. ● Next.js - A React Framework. ● Tailwind...
The AI Framework That Thinks and Acts Like a Human | Agent S
Views 2K • 1 day ago
In this video, we explore Agent S, an innovative framework designed to make computers as intuitive to use as a human. Agent S employs cutting-edge Experience-Augmented Hierarchical Planning, utilizing real-time web knowledge and narrative memory to break down complex tasks into manageable subtasks. With capabilities like Retrieval-Augmented Generation (RAG), memory planning, and multi-step task...
Palmyra Tool Calling Ability EXPOSED! Better than OpenAI
Views 411 • 1 day ago
In this video, we dive deep into the powerful new capabilities of Palmyra X 004, Writer's latest generative AI model. Learn how this cutting-edge LLM can reshape enterprise workflows by automating complex tasks, interacting with external tools, and simplifying decision-making across departments. From product development to financial analysis, discover how Palmyra X 004 brings efficiency, scalab...
🚀Revolutionary NotebookLM : Found an Open source Alternative 💓
Views 1.3K • 14 days ago
Discover NotebookLM, Google's cutting-edge AI research assistant, powered by the Gemini 1.5 Pro model! Whether you're a student, researcher, or content creator, NotebookLM is designed to help you manage and analyze your projects with ease. Simply upload your documents, and it instantly becomes an expert, providing personalized insights, grounded with in-line citations. Your data stays private a...
AI wins the Nobel Prize in Physics 2024
Views 192 • 14 days ago
AI wins the Nobel Prize in Physics 2024 #ai #airocks #aieverywhere #nobel #alfrednobel #davidshapiro
Stop Paying for Web Crawlers (Use this Instead)
Views 2.5K • 14 days ago
🚀 Crawl4AI: Open-Source Asynchronous Web Crawler for LLMs 🕸️ | Demo & Features In this video, we introduce Crawl4AI, a powerful open-source web crawler built for asynchronous web scraping and AI applications. Whether you're a data scientist, developer, or AI enthusiast, Crawl4AI simplifies large-scale web scraping and structured data extraction using advanced techniques like CSS selectors, Java...
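The core idea behind an asynchronous crawler like the one described above is issuing many fetches concurrently instead of one after another. This stdlib-only sketch illustrates that pattern with a stand-in fetch function; it is not Crawl4AI's actual API, whose entry point is an async crawler class you await per URL.

```python
import asyncio

async def fetch(url: str) -> str:
    # Stand-in for a real HTTP request; a real crawler would fetch and
    # parse the page here (CSS selectors, structured extraction, etc.).
    await asyncio.sleep(0.01)
    return f"<html>content of {url}</html>"

async def crawl(urls: list) -> dict:
    # Launch every fetch concurrently; total time is roughly one fetch,
    # not the sum of all of them.
    pages = await asyncio.gather(*(fetch(u) for u in urls))
    return dict(zip(urls, pages))

if __name__ == "__main__":
    results = asyncio.run(crawl(["https://example.com/a", "https://example.com/b"]))
    for url, html in results.items():
        print(url, len(html))
```

With hundreds of URLs the concurrency win over a sequential loop becomes dramatic, which is what makes this style attractive for LLM data collection.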
The Weird Connection Between Reward Models and Better Decision Making
Views 280 • 14 days ago
🚀 AI's Quantum Leap: Reward Models Revolutionize Machine Intelligence! 🧠 Dive into the cutting-edge world of AI with us as we explore the game-changing breakthrough in reward models. Discover how Llama 3.1 Nemotron 70B Reward is reshaping the landscape of artificial intelligence: - 94.1% overall accuracy on RewardBench - 98.1% in reasoning tasks - Real-world applications in finance, education...
Blazingly FAST Image Generation using FLUX 1.1 (Pro)
Views 447 • 14 days ago
🔥 FLUX1.1 [pro] is here! Black Forest Labs has unveiled FLUX1.1 [pro], their latest image generation model that offers 6x faster speeds and improved image quality. The model surpasses its predecessor, FLUX.1 [pro], and has topped the charts on Artificial Analysis with the highest Elo score. Alongside this, the company is releasing the beta version of the BFL API, enabling developers to harness t...
Liquid Foundation Models better Than LLMs | Breakthrough AI Foundation Models
Views 1.9K • 14 days ago
Liquid Foundation Models better Than LLMs | Breakthrough AI Foundation Models
CREATE Your Own Dataset Like a Pro in 30 mins
Views 686 • 21 days ago
CREATE Your Own Dataset Like a Pro in 30 mins
Is UNSLOTH the Secret to Making Llama 3.2 the Best AI Model? (Video 3 of 4) Fine-Tuning your LLMs
Views 2.3K • 21 days ago
Is UNSLOTH the Secret to Making Llama 3.2 the Best AI Model? (Video 3 of 4) Fine-Tuning your LLMs
Fine-Tuning and Deploying for Your Use Case: Ollama and Hugging Face (Video 2 of 4)
Views 731 • 21 days ago
Fine-Tuning and Deploying for Your Use Case: Ollama and Hugging Face (Video 2 of 4)
Fine-Tuning and Deploying for Your Use Case: Meta's Llama 3.2 Explained (Video 1 of 4)
Views 1.3K • 21 days ago
Fine-Tuning and Deploying for Your Use Case: Meta's Llama 3.2 Explained (Video 1 of 4)
FREE Fine Tune AI Models with Unsloth + Ollama in 5 Steps!
Views 4.8K • 1 month ago
FREE Fine Tune AI Models with Unsloth + Ollama in 5 Steps!
SOTA LLM for Measuring Hallucinations in LLMs| #bespoke-minicheck
Views 267 • 1 month ago
SOTA LLM for Measuring Hallucinations in LLMs| #bespoke-minicheck
Pixtral 12B - the first-ever multimodal Mistral model FINALLY | Wow Mistral AI
Views 885 • 1 month ago
Pixtral 12B - the first-ever multimodal Mistral model FINALLY | Wow Mistral AI
Talk to the In-Game NPCs in Natural Language | Mind-Blowing
Views 379 • 1 month ago
Talk to the In-Game NPCs in Natural Language | Mind-Blowing
Spatial Intelligence is one Step Ahead of LLMs | World Labs
Views 516 • 1 month ago
Spatial Intelligence is one Step Ahead of LLMs | World Labs
AI creates open-world Video Games from Texts
Views 710 • 1 month ago
AI creates open-world Video Games from Texts
Search Through Your Local Images for Free using NVIDIA ChatRTX
Views 357 • 1 month ago
Search Through Your Local Images for Free using NVIDIA ChatRTX
Is it the end of Software Engineering? | OpenAI o1 preview
Views 675 • 1 month ago
Is it the end of Software Engineering? | OpenAI o1 preview
FREE Local RAG System with NVIDIA ChatRTX
Views 769 • 1 month ago
FREE Local RAG System with NVIDIA ChatRTX
Hi how can we add our own dataset?
ua-cam.com/video/MQis5kQ99mw/v-deo.html Here u go
@@PromptEngineer48 I have created my own dataset and uploaded it to Hugging Face, but I want to use it on "gradio live" and I couldn't find it. How can I add my dataset there?
When I tried to push the model to the namespace I got this error: "you are not authorised to push to this namespace, create the model under a namespace you own". But the namespace I am using is mine. What do I do?
But will the speed be reduced drastically? E.g., if I have four instances, will the response time for each instance increase 4x compared to one instance running the same model?
If all act at the same time, then yes, speed is reduced; otherwise they run one at a time. We are just removing the model-loading time.
Pretty cartoonish music for the video games IMO.
Error code: 404 - {'error': {'message': 'The model `gpt-4o-2024-08-06` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}} where do I set the model name ?
wow, awful demo, it sounds terrible, if you think this sounds good or compelling you need your ears checked or you are hopelessly delusional about the current state of the technology or your invested in it trying to hype up a company with a shit product...
Does it support a multi-part or split model?
This tech will devastate the silent film industry 😭
Yes.
Hi, great video. I have a question: can we use a different format like this one for fine-tuning, or just the one you mentioned in the video? {"role": "user", "content": "I love playing Minecraft!", "emotion": "Excited", "topic": "General Chat", "tone": "Enthusiastic"}, {"role": "assistant", "content": "Me too! It's the best game ever!", "emotion": "Excited", "topic": "General Chat", "tone": "Enthusiastic"},
Anything is possible. This will work as well.
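If the extra keys in the record above (emotion, topic, tone) are not consumed by your chat template, one option is to strip each record down to the role/content pair most fine-tuning pipelines expect. A minimal sketch, assuming the extra fields can simply be dropped rather than folded into the prompt:

```python
def to_messages(rows: list) -> list:
    """Keep only the keys most chat fine-tuning templates expect."""
    return [{"role": r["role"], "content": r["content"]} for r in rows]

# The commenter's record format, with extra annotation fields.
raw = [
    {"role": "user", "content": "I love playing Minecraft!",
     "emotion": "Excited", "topic": "General Chat", "tone": "Enthusiastic"},
    {"role": "assistant", "content": "Me too! It's the best game ever!",
     "emotion": "Excited", "topic": "General Chat", "tone": "Enthusiastic"},
]

print(to_messages(raw))
```

Alternatively, the annotations could be concatenated into the content string if you want the model to condition on them.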
thanks
You're welcome!
What about GUI?
So can I use this to make an agent agent that can create and delete agents as needed? I want to make it find, create, optimize, or even buy upgrades to itself. Mad science means never stopping to ask what could possibly go wrong. 🤷♂️
whhooaaaa
Has anybody experienced that function calling with llama is not that accurate as compared to open AI?
big bro, I configured core.py like this:
class Swarm:
    def __init__(self, client=None):
        if not client:
            client = OpenAI(base_url="127.0.0.1:11434/", api_key="random")
        self.client = client
but every time run.py says:
PS C:\swarm-main> & C://AppData/Local/Programs/Python/Python310/python.exe c:/swarm-main/examples/triage_agent/run.py
Traceback (most recent call last):
  File "c:\swarm-main\examples\triage_agent\run.py", line 5, in <module>
    run_demo_loop(triage_agent)
  File "AppData\Local\Programs\Python\Python310\lib\site-packages\swarm\repl\repl.py", line 63, in run_demo_loop
    client = Swarm()
  File "AppData\Local\Programs\Python\Python310\lib\site-packages\swarm\core.py", line 29, in __init__
    client = OpenAI()
  File "\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\_client.py", line 105, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
what can I do?
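The traceback in the comment above suggests two problems: the base_url is missing its scheme and the OpenAI-compatible /v1 path that Ollama serves, and the edited constructor is not being reached (Swarm() still builds a bare OpenAI()). A minimal sketch of a working configuration, passing a pre-built client in instead of patching core.py; the host/port are Ollama's defaults:

```python
def local_client_kwargs(host: str = "localhost", port: int = 11434) -> dict:
    """Arguments for OpenAI(...) so Swarm talks to a local Ollama server.

    Ollama exposes an OpenAI-compatible endpoint under /v1; the api_key only
    needs to be a non-empty placeholder since Ollama ignores it.
    """
    return {"base_url": f"http://{host}:{port}/v1", "api_key": "ollama"}

# With the openai and swarm packages installed you would then do
# (not executed here, since it needs a running Ollama server):
#   from openai import OpenAI
#   from swarm import Swarm
#   client = Swarm(client=OpenAI(**local_client_kwargs()))

print(local_client_kwargs())
```

Passing the client into Swarm(client=...) avoids editing the installed package, so upgrades won't silently undo the fix.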
Thank you!!! 🎉can’t wait for some more
Mistral AI ... slowly, stealthily, but surely
Thanks for the lecture. Suppose I want to see tokenization and detokenization in my Llama model. How do I go into the internal details of the model using Ollama? Can you suggest?
Hey man, great video. This seems very promising. Let's keep an eye on this 👍
yes.
This is great and I followed all your instructions, but I can't export the fine-tuned LLM to Hugging Face, I tried several different tokens, I even created a token that had full read/write to everything on my Hugging Face account and it still errors on export. Are you able to export a model to your Hugging Face account or do you also receive an error? If I can't retrieve the fine-tuned LLM then this is only good for academic purposes. Look forward to your reply, I really enjoy your content and am considering joining your Patreon.
Thanks for trying it out. First you need to go to the model page on Hugging Face and get the rights. For example, for Llama 3.2 on Hugging Face, when you go for the first time, you will see options to register for Llama access.
@@PromptEngineer48 thank you for the quick reply, I have done that! I actually fully went through your video and fine tuned the Llama 3.2 1B Instruct model retrieved from my Hugging Face account! It took over an hour to do the training but it worked! I got the same prompt result after training you did where it identified itself as a "Llama factory" made model! But before I go and use my own datasets I wanted to test export but it errors out after a few seconds after starting the export!
@@PromptEngineer48 I now have it working! What I had to do was stop the Llama Board process on the Colab page and then update my Hugging Face key by re-running the command I had put into the Colab page manually, as you alluded to in your video: from huggingface_hub import login # Replace 'your token' with your actual Hugging Face access token login("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") And then, after re-running the above command in the Colab page (running on the "Python 3 Google Compute Engine backend"), I restarted the Llama Board process and was able to export successfully!! Awesome!! We trained an LLM using LoRA!!! Thank you!
Thank you for putting up the feedback and success story. We are really inspired. I will bring more such content. Now back to the techno hunt.
You didn't try it yet?
This marks the beginning of the end of white collar work.
🥺
And the end of an old way; new beginnings and pathways shall emerge from this! New possibilities. New opportunities. There are always two sides to a coin. 👍
@@Corteum OK, great that you're so optimistic. However, what exactly do you propose human civilization do once the machines are self-controlled, can do most tasks better than humans, and are vastly more intelligent than most humans at most tasks?
No code .. it didn't happen 😂
It didn't happen, or you didn't research? 😂
Too bad I couldn't reproduce your system, I must have missed a step. I also couldn't find your page.tsx in the description. I'll try again because it's a big and beautiful work that you've provided. Thanks.
The GitHub link is broken. Here is the uploaded page.tsx file, I hope this helps; the rest of the backend seems okay: github.com/PromptEngineer48/Flight_Recommendation/blob/main/page.tsx Please tell me about any other issues. You can come on GitHub as well.
Thank you but where is your page.tsx relating to your flight recommendation system ? The one I created personally is causing me problems.
github.com/PromptEngineer48/Flight_Recommendation/blob/main/page.tsx this is the page.tsx
@@PromptEngineer48 THANKS.
why do you call it the ultimate multi-agent framework? Nothing in the video demonstrated it was the ultimate. Currently it looks less developed than crewai.
I mean, just look at the potential. An agent calling another agent to continue the conversation.
Crewai is the worst
LM Studio is the easiest way. Unlike Ollama, all their default models are censored.
Awesome.Thanks.
You are welcome.
how is this approach different from RAG, can you elaborate please?
It is RAG
@@PromptEngineer48 but I'm confused: you replied to the message just below for his question, which assumed it is not RAG?
I am sorry for any confusion. It is a RAG system. Any user input of PDFs etc. will be used to reply to the user's questions.
But why though? As an agent that supposedly has everything it needs through APIs, it's not needed in my opinion.
Hmm.
Because there are some things you might want to do with software running on the desktop, or local files that you want help with.
This is so cool right?
Right.
Hey, you are right on point regarding AI and its innovations, I appreciate it! And thanks for the content.
My pleasure!
Sometimes, you find free alternatives. Other times, free alternatives find YOU... 😎 ua-cam.com/video/FtYNTP23ysg/v-deo.html
It is great news, isn't it? But most media outlets here just keep asking him about AI doom etc. Really annoying.
Hi! Thank you for the video. So once we get this working how do we create an api and use it for external applications?
Fastapi
Could you please share the podcast?
github.com/PromptEngineer48/NotebookLMTest Here you go. Please download the rar file and unzip it; you will find the mp3 file.
Is it possible to pass html/markdown directly to Crawl4AI instead of using the built-in crawl feature? How?
Can you make a video to crawl the entire crunchbase? Like, literally, it seems the best way to do that. Thanks.
Crunchbase is a database. Learn to pay for things
@@dinoscheidt who the fuck are you telling people what to do? If you're that rich you're already pay people to do what you want, so shut the fuck up, you're watching "Stop Paying for Web Crawlers" video. Idiots.
crawlbase.com/blog/how-to-scrape-crunchbase/ this looks close
Nice content Dude !!!!!!!!!!!
Thanks so much.
I just want to say thank you so very much for taking the time to give us this information in English. Your outstanding ability to communicate the very cutting edge of this new emergent field is going to have an impact not only on AI development but our own as well. Using this reward model will become the baseline for many applications.
Thank You for the amazing words.. Keep in touch
No weights? It carries no weight!
That's insane, for real. This is the space to watch for AI growth, IMHO.
Can it make various hairstyles? From my experience it doesn't understand various hair or beard styles, while my competitors all create amazing hairstyles for their sites. I don't get how the best image generator can't do it, and which one they use... and which prompts, maybe.
Indeed
LFM trained from chat GPT outputs... ZzzZzzzZzzzz
Well, if you would not just parrot their marketing materials and test it, you'll see it is not there yet.
true
tested and it's all garbage.
Yes. For now. But they are able to fit in smaller devices. The model is not yet instruct-tuned. Let's wait and see.
The heckkkkkkkkk 😂
Just no... Stop wasting bandwidth
Why put a heart on this comment. I block bad channels like this.
@@procactus9109 shut up you moron go watch some soy content
It's probably automatic with an extension
Great direction 😊
Ask what model it is. It seems it is a model badly trained from gpt 3.5 generated synthetic data. Nothing to see here.
hey do share if u have any finetuning script for this model
Thanks for the video. It gave me some more insights. But you still refer to tools that run online. Can you also explain how to use and run these tools locally? And with locally I mean 'download and install everything you need and then disconnect Ethernet'-locally. Thanks in advance