Kamalraj M M
India
Joined 20 Feb 2023
Machines with processors that can crunch 250+ GFLOPS are accessible to anyone with a mail account and a browser. I believe that making these machines work for you is the best thing to work on right now. I enable you to do that.
From the Other Side...
Insight Builder
Debugging Calculator App with Windsurf
Description.
Part 5
Shows the debugging and code-editing process in Windsurf.
PS: Got a question or have feedback on my content? Get in touch by leaving a comment on the video.
Want to Consult? Feel free to book a short call with me.
cal.com/insight-kamalraj
Buy me a Coffee
ko-fi.com/insightbuilder
@mail insighthacker21@gmail.com
@github github.com/Kamalabot
Views: 10
Videos
Enhancing the Calculator Features with Windsurf
Views: 6 · 2 hours ago
Description. Part 4: The geometric features are added to the calculator, and the features are tested. PS: Got a question or have feedback on my content? Get in touch by leaving a comment on the video. Want to Consult? Feel free to book a short call with me. cal.com/insight-kamalraj Buy me a Coffee ko-fi.com/insightbuilder @mail insighthacker21@gmail.com @github github.com/Kamalabot
Building the Calculator app
Views: 11 · 2 hours ago
Description. Part 3: Building the calculator app with the Windsurf AI. PS: Got a question or have feedback on my content? Get in touch by leaving a comment on the video. Want to Consult? Feel free to book a short call with me. cal.com/insight-kamalraj Buy me a Coffee ko-fi.com/insightbuilder @mail insighthacker21@gmail.com @github github.com/Kamalabot
Updating the Features File with Additional functions
Views: 7 · 2 hours ago
Description. Part 2: Updating the features_calculator.md file to add additional functions. Want to Consult? Feel free to book a short call with me. cal.com/insight-kamalraj Buy me a Coffee ko-fi.com/insightbuilder @mail insighthacker21@gmail.com @github github.com/Kamalabot
Creating Features Document for Calculator App with Windsurf
Views: 19 · 2 hours ago
Description. Short video showcasing the documentation creation with Windsurf. Want to Consult? Feel free to book a short call with me. cal.com/insight-kamalraj Buy me a Coffee ko-fi.com/insightbuilder @mail insighthacker21@gmail.com @github github.com/Kamalabot
Implementing LLM Agents To Do CRUD Ops on SQL DB|How to use Function Calling to Query SQL DB
Views: 65 · 1 month ago
Take a look at the below Medium articles: medium.com/@kamaljp/agents-routines-hand-offs-how-to-build-them-intuitively-a6f27d32bb64 medium.com/@kamaljp/perform-function-calling-with-ollama-on-local-machine-a38922cf5525?source=your_stories_page What you will be able to achieve after this post: If you had dreamt of machines that can talk, do all your work, and keep you informed about your next opportuni...
Agents Routines & Hand offs, How to build them Intuitively | Its All About Function Calling
Views: 51 · 1 month ago
Description. OpenAI open-sourced Swarm, a Python library for building agents using GPT models. Along with that, they were kind enough to share the cookbook on Orchestrating Agents, Routines & Handoffs. Underlying all of this is "Function Calling" and the ability of the models to structure the output into JSON. Read in more detail about function calling with OpenAI and Ollama models: medium.com/...
Perform Function Calling with Ollama on Local Machine | How Llama 3.2 Model is Loaded in Ollama
Views: 142 · 1 month ago
Description. 1) Get comfortable with the Ollama installation, model serving & CLI options 2) Dive into the background of the Ollama model-serving process 3) Understand how the Ollama client works & ideas for making Python apps 4) Visualise how the model extracts the function name & arguments. This post is an extension of the earlier post on function calling with OpenAI's 4o-mini model. We will be using the sam...
Perform Function Calling with OpenAI 4O-Mini with Python: Do CRUD On File System In English
Views: 57 · 1 month ago
Description: Function calling is the bridge that makes it possible for functions written in your code to be called using natural language like English. What challenges you can solve: Visualise how function calling can be used for automation. You can leverage natural language / voice for manipulating the digital data on your laptop. Dive into the process of translating natural language to funct...
DEMO of Performing CRUD Operation On Your Local Machine With LLM
Views: 63 · 1 month ago
Description. Showing how the LLM manipulates stuff on your computer. There will be an explorer on the right and an LLM terminal on the left. 1) convo.txt will be created first 2) the contents of the file will be shown in Notepad 3) convo.txt will be updated 4) the contents of the file with updated data will be shown 5) convo.txt will be deleted. Let's begin. Code at github.com/insightbuilder/codeai_fusion/tre...
Loading & Embedding Documents In SurrealDB Using Kalosm Crate in Rust
Views: 92 · 1 month ago
Description. The code for the supporting notebook is present in the below git repos: - github.com/Kamalabot/cratesploring/tree/main/floneum_explorer/doc-table-divein - github.com/Kamalabot/cratesploring/tree/main/floneum_explorer Medium link: medium.com/@kamaljp/loading-embedding-documents-in-surrealdb-using-kalosm-crate-in-rust-248c88794b43 What challenges you can solve after this: Review & Understan...
How to Store Vector Embeddings in Surreal Db using Rust & Candle with Code Walkthrough
Views: 86 · 1 month ago
After generating the embeddings of the text from documents, pictures, or audio, they need to be stored in a database. We will dive into that in this post. Generative AI has changed the nature of how we interact with machines. SurrealDB is one example of such a change. SurrealDB is a multi-model DB that can handle structured, unstructured, and semi-structured data by design. On ...
How to Load Embedding Models like BERT using Candle Crate in Rust
Views: 71 · 1 month ago
Description. Embedding human-readable sentences is one of the key steps in a RAG application. Let's see how to do that in Rust. Rust has made long strides into the neural-network arena already. Understanding the Candle crate and the Rust-native candle-transformers models will become the need of the hour very soon. The advantage of a Rust-native Transformer model is that the compiled binary takes mere seconds t...
How To Load Phi:3 Model in Rust Using Candle Crate On 16GB Xeon Processor in AWS Ec2
Views: 44 · 1 month ago
Description. Loading safetensors model data into Rust-native large language models on CPU & RAM. The Candle crate in the Rust ecosystem has grown as many of the open-source models have been ported to candle-transformers. In the earlier post we discussed how to load the Gemma 2B model onto a T4 GPU. In this post we will see how to load the model on the CPU, and use Phi-3 for the same. The code for the supporting not...
How To Load Gemma 2 Model in Rust Using Candle Crate in Ubuntu with Cuda
Views: 48 · 1 month ago
Description. Dive into the detailed steps of loading and converting safetensors model data into Rust-native large language models. The Candle crate in Rust has made it simple to download large language models and use them for inferencing with Rust. The Kalosm crate uses Candle extensively to provide a unified interface to the LLMs, and it abstracts the complexities of how the raw tensors are loaded. Basic...
Running Llama2 Locally with Rust + Cuda & Kalosm Demo & Code Walkthrough
Views: 140 · 1 month ago
Developing DSPy Metrics & Debugging DSPy Programs | Debugging with Phoenix Server
Views: 130 · 3 months ago
Tracing LLM Inference With Phoenix For DSPy | An Intro To OpenTelemetry & Observability
Views: 184 · 3 months ago
Integrating Websockets & Django AI Shop Implementing Websocket Client Server Inside Django Server
Views: 143 · 7 months ago
Basics of Web socket Server & How it works / Connecting Websockets To AI Shop
Views: 74 · 7 months ago
Connecting Websockets to AI Shop Introducing Websockets & Incorporating Django
Views: 121 · 7 months ago
Improving AI Shop Integrate Argilla Feedback Dataset & Add Live Records Using Python & Django Views
Views: 79 · 7 months ago
The AI Shop: Build Django & Bootstrap App With LLM Integrated Endpoints: Streaming Response Covered
Views: 347 · 7 months ago
Configuring FB Dataset with Metadata & Vector Setting with Real World Data Using Text2SQL Dataset
Views: 69 · 7 months ago
How To Build Your Confidence: Building Warmup Projects of Django API & HTML Templates in 35 Mins
Views: 28 · 7 months ago
Can You Tell Me About Argilla Deleting Gen1 & Gen2 Datasets From Argilla Server (Full Script Shared)
Views: 26 · 7 months ago
Deep Dive Into FB Datasets Templates: Introducing SFT /PPO/ DPO/ Preference Modeling Templates
Views: 55 · 7 months ago
Deep Dive Into FeedBack Dataset Templates: NLI /SS /RAG Templates Datasets Handson
Views: 46 · 7 months ago
Deep Dive Into Text2Text Feedback Datasets with QA Translation and Summarisation Templates
Views: 53 · 7 months ago
Deep Dive Into Datasets For Token Classification in Argilla : Start Annotating For NER NLP Tasks
Views: 142 · 7 months ago
Is there any technique with which we can train DSPy with our own custom question-answer examples? In the video, only the evaluation part is covered.
Hey, can you explain how to create vector embeddings of a video that contains visuals and voice, with Towhee AI or anything else?
Thank you bro for this video.
Sir, I have a first-year college capstone project of creating a book recommendation Telegram bot with a very small LLM integrated as the knowledge base of the bot, given the user's inputs and predefined queries. Can you please guide me on how this can be done?
@@SahilZen42 I can provide feedback on the code that you have written if you share the repo, or provide some direction if you face some errors... End2End guidance will be difficult.
Great!
FSDP and DDP with pytorch for distributed parallel execution
I was able to load the quantized model through an `ollama create` Modelfile. If I want to use quantized.bin in code, are there any steps?
Does this framework work for other LLMs by changing its model path?
Can you please tell me where we can get this framework code, like any documentation?
@@omshankarray400 This code is written with the Django library; it's a well-known backend development library... It uses the transformers package to load the model. I have a complete playlist on Transformers discussion. Take a look at my playlists.
It's good.
How is this better than using f-strings and then managing the logic in functions?
That's the whole point. Jinja templates live as separate files and "they are free from logic". You can alter them, copy them, and duplicate them.
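A minimal sketch of that contrast, assuming Jinja2 is installed and a hypothetical templates/greeting.txt file exists (neither is from the video):

```python
# f-string vs. Jinja sketch; "templates/greeting.txt" and the variables
# below are hypothetical examples.
from jinja2 import Environment, FileSystemLoader

name, items = "Kamal", ["pen", "book"]

# f-string: the text is locked inside the code, next to the logic.
print(f"Hello {name}, you have {len(items)} items.")

# Jinja: the same text lives in templates/greeting.txt as
#   Hello {{ name }}, you have {{ items | length }} items.
# and can be altered, copied, or duplicated without touching this code.
env = Environment(loader=FileSystemLoader("templates"))
template = env.get_template("greeting.txt")
print(template.render(name=name, items=items))
```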
Where do I find the api_key?
You need to have the Argilla instance running first... Take a look at the getting-started vids.
Need to try this carefully. If I get it to run, I will update here again. But more space is needed for the Nvidia drivers. At present I'm doing it on the Windows CPU version.
Hello sir, it's really great. You have explained it from scratch, I believe. This is with the Mistral 7B text generator. I have been trying similar steps to those shown in the Jupyter notebook for a 22B code-generation model, with transformers using AutoTokenizer and AutoModelForCausalLM. It's really big: 44 GB, 9 shards. Cache, snapshot, symlinks are all the same. Later I created a pipeline, which was throwing an error. I was just stuck there. Not sure why the pipeline was throwing an error.
If the bigger model is loading into memory, then you should be able to see that in your task manager / nvidia-smi. Check that first. After that, try to directly run inference on the AutoModelForCausalLM object and see if it throws an error. Then share the error stack trace here for better debugging.
@@insightbuilder After some fine-tuning, I was able to download it initially to the cache as you showed, and later load it from the local cache folder. But it's taking a lot of time to generate the output.
@@sivajyothichandra Then the model is in RAM, not on the GPU.
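A minimal sketch of how to verify where the weights actually landed, assuming the transformers + accelerate stack (the model id below is a hypothetical placeholder):

```python
# Check whether the model shards sit on the GPU or spilled into CPU RAM.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # hypothetical placeholder model id
    device_map="auto",            # fill the GPU first, overflow to CPU RAM
    torch_dtype=torch.float16,
)
# Any 'cpu' entries here mean those layers run from RAM and will be slow.
print(model.hf_device_map)
```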
Since you are only using LLMs with CUDA, you can just enable the "cuda" and "language" features for faster compile times. It should be closer to 500 dependencies with just those two features.
What is happening inside Predict? How is it predicting? I can't understand.
@@sridharreddy5714 Inside Predict, the LLM's API is called and completions are returned...
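A minimal sketch of that flow, assuming a recent DSPy release (the model name and question are hypothetical; older versions configure the LM via dspy.settings.configure instead):

```python
import dspy

# Point DSPy at an LLM API; Predict renders its prompt against this LM.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # hypothetical model choice

# The signature "question -> answer" tells Predict which fields to put
# into the prompt and which field to parse back out of the completion.
qa = dspy.Predict("question -> answer")
result = qa(question="What is the capital of France?")
print(result.answer)  # the parsed completion field
```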
Thank you. Are you able to update the video or resources given the new developments?
Good content. Thank You!
Exceptional content & explanation, thank you. Honestly, the DSPy documentation is extremely poor, and this tutorial is surely one of the best DSPy lectures.
Is there any specific reason for using REBEL?
Thank you, amazing tutorials!
Hi! Firstly, thanks for sharing this video with the world! I am trying to use UnstructuredPDFLoader, which is taking about 30 minutes just to load one single file of 31 MB. I am doing so using Google Colab, and when tracking CPU usage, it occasionally peaks at 90% or 100%, although most of the time it is in the 50% region. Do you know of any way to load these PDFs at scale?
Depending on the level of pictures and the OCR requirements, the Detectron model (which works underneath unstructured) will take considerable time. You might have to think of parsing the PDF page by page and seeing how it handles that.
That's what I thought. Thanks for the reply 🙏
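A minimal sketch of the page-by-page idea suggested above, assuming the pypdf and unstructured packages are installed ("big.pdf" and the per-page file names are hypothetical placeholders):

```python
# Split the large PDF into single-page files, then partition each one
# separately so one OCR-heavy page cannot stall the whole document.
from pypdf import PdfReader, PdfWriter
from unstructured.partition.pdf import partition_pdf

reader = PdfReader("big.pdf")
elements = []
for i, page in enumerate(reader.pages):
    writer = PdfWriter()
    writer.add_page(page)
    part = f"page_{i}.pdf"
    with open(part, "wb") as f:
        writer.write(f)
    # Timing this call per page reveals which pages trigger the slow
    # layout-detection / OCR path.
    elements.extend(partition_pdf(filename=part))

print(len(elements), "elements extracted")
```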
Is this free?
It's an open-source server that you can use on your own machine, so yep, it's free in that sense. The video has the full information...
Hi Kamalraj, when using the unstructured library for extracting images, text, and tables from scanned PDF files on Windows, I got the error "poppler is not found and in PATH". I installed poppler in Program Files and added the path to the system variables as well, but I'm still getting the same error. Can I install poppler locally or in VS Code? Please help me with this issue, sir...
@@lakshmipanguluri6939 I reviewed the challenge you are facing, and it's most likely because you are using Windows. stackoverflow.com/questions/53481088/poppler-in-path-for-pdf2image The above SO post provides different solutions for your issue. Try them out and update. The easiest solution is to move to Ubuntu OS... WSL creates some weird challenges...
@@insightbuilder Thanks for your response. Will try and update you… still if it is not working then I’ll move to ubuntu os… thank you so much for your valuable advice…🙂
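A minimal sketch of one fix from the SO post above, assuming pdf2image is the layer that needs poppler (the install directory and file name are hypothetical Windows placeholders):

```python
# Pass the poppler binaries directly instead of relying on PATH.
from pdf2image import convert_from_path

pages = convert_from_path(
    "scanned.pdf",  # hypothetical input file
    poppler_path=r"C:\Program Files\poppler\Library\bin",  # adjust to your install
)
pages[0].save("page_1.png")  # if this writes an image, poppler was found
```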
Thank you, the explanation was very clear and simple. I will finish the entire series.
You cannot fine tune the models using DSPy.
If I wanted to use Groq for a better LLM and to improve the embeddings, what would the change be? And wouldn't the vector database have to be local to be used for RAG?
@@ruidinis75 The endpoints will change. The location of the vector DB can be local or remote.
Very good
What are you running to "start the LangChain server"?
Can I integrate DSPy with Qwen 72B?
@@prashanthkolaneru3178 You can do that using Hugging Face endpoints. I discuss how to use them in the vids.
@@insightbuilder thank you
Could you please tell me how we can get the source documents in the vector store router toolkit?
Hi sir, the OpenAI API key has been shown; won't it be a problem for you if someone uses it?
@@mohammedsuhail1500 I have changed the key, so it's not an issue. Good observation
Thank you sir, it is an excellent playlist.
Very detailed explanation. I have gone through almost 15 videos before this, but none of them were so detailed and to the point. Thank you.
Great 👌
Can you help me understand how the examples are chosen out of the trainset provided to BootstrapFewShot? Also, I understand that it adds a rationale to the questions, but how does BootstrapFewShot work? Can you go into the code and try to gather the information? The teacher and student model part was not clear.
@@AmanIndia-m5l github.com/stanfordnlp/dspy/blob/main/dspy%2Fteleprompt%2Fbootstrap.py The above script implements BootstrapFewShot, and there is random_search in the same path which implements the BootstrapFewShotWithRandomSearch algo. To understand how it works, you have to get comfortable with how metrics work and how bootstrapping with few shots works. Try that first.
Can you show the input that goes to the LLMs? Is it the same as the one shown in the last videos? If so, then it is just rewriting the signature in a certain way, not optimizing it or generating a new one!
I have been working with DSPy for a month, and truthfully, I have found that it just writes the stuff in a certain format that the LLM understands better, nothing much. Can you please comment with your views?
@@AmanIndia-m5l The DSPy framework's primary objective is to automatically tune the prompts for better replies from the LLM. LLMs give better replies when good example outputs are provided in the prompts. So DSPy adds these examples to the signature when you compile the Modules. I hope the above makes sense. The compilation and its optimization are the keys that you have to dive deep into and understand.
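A minimal sketch of that compile step, assuming an LM has already been configured with dspy.configure (the module, metric, and trainset are hypothetical stand-ins):

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

def exact_match(example, pred, trace=None):
    # A bootstrapped trace is kept as a demo only if the metric passes.
    return example.answer == pred.answer

qa = dspy.Predict("question -> answer")
trainset = [
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
    dspy.Example(question="Capital of France?", answer="Paris").with_inputs("question"),
]

# The teacher runs the module over the trainset; traces that pass the
# metric become the few-shot examples attached to the compiled signature.
teleprompter = BootstrapFewShot(metric=exact_match, max_bootstrapped_demos=2)
compiled_qa = teleprompter.compile(qa, trainset=trainset)
```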
Hey man, can't we just do all this in a Docker container rather than getting an EC2 instance?
Docker is possible, and there are tutorials provided by the Matrix team themselves. Much of the configuration is automated there. My intention is to provide an in-depth understanding for anyone who might need an alternate route.
Could you teach me how the "trace" argument can be used to improve training? All it says on the official document is that trace is turned off during evaluation and optimization, but turned on during compiling and bootstrapping. Can trace be used to monitor the evaluation metric somehow?
@@sesburg During metric calculation, you can add the traces that you want when you make your own metrics. There are some community examples that touch on this idea.
@@insightbuilder Thank you. What keyword should I use to find such community examples? Or could you give me some links? Much appreciated!
@@sesburg DSPy is an AI framework, so the ecosystem surrounding it will have such examples. Weaviate has some here in their cookbook: github.com/weaviate/recipes/blob/main/integrations/llm-frameworks/dspy/ Also search for "DSPy recipe"; that might provide some links.
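A minimal sketch of a trace-aware metric, following DSPy's documented metric signature (the scoring rule itself is a hypothetical example):

```python
# DSPy passes trace=None during evaluation/optimization and a populated
# trace during compiling/bootstrapping, so one metric can serve both phases.
def answer_quality(example, pred, trace=None):
    correct = example.answer.lower() in pred.answer.lower()
    if trace is None:
        # Evaluation phase: return a graded score.
        return float(correct)
    # Bootstrapping phase: return a strict bool to accept or reject the demo.
    return correct and len(pred.answer) < 100
```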
Thank you for this video
Thanks! It would be really helpful if the colours of the print-output cells were more visible.
Can I add a project using pandas and FastAPI to my resume? Is it a good project for a resume?
Hello, I can't find the source code for this in the GitHub link. Can you please help me out?
It's a great explanation of Ollama, thank you sir. But I am getting ```{"error":"model 'llama2' not found, try pulling it first"}``` even though I have already pulled and run the model on my server.
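A minimal sketch for debugging that error, assuming Ollama's REST API on its default port (the host and model tag are assumptions; adjust to your setup):

```python
# Ask the same Ollama server your code talks to which models it actually has.
import requests

BASE = "http://localhost:11434"  # assumed default Ollama endpoint

tags = requests.get(f"{BASE}/api/tags").json()
print([m["name"] for m in tags.get("models", [])])  # e.g. ['llama2:latest']

# If 'llama2' is missing here, the client is pointed at a different Ollama
# instance than the one you pulled into; pulling via the API fixes that.
requests.post(f"{BASE}/api/pull", json={"name": "llama2"}, timeout=None)
```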
Hello. Can you tell me how I can persistently store the embedded data for further processing like searching?
Comprehensive, the best tutorial on DSPy!
Advice for future videos, don't use all caps, it's harder to read.
Can you actually put multiple questions in a single Tavily search? So perhaps: a) I want to find X, b) Where is X located, c) What does it work on, d) What is the USP of X?
Py is for pytorch. Great series
Hi Kamalraj, I have a question. What's the point of using the Evaluate function? It's not going to optimize the prompt, right? So is its purpose just to see how the model responds to a dataset? Or is it meant to be used again once the prompts are optimized, so that we can verify whether the optimization is good or not?