That was the first video that actually gave me an understanding of how vector DBs kind of work. Thank you for sharing.
Key word being, "kind of" 😂😂
My ADHD normally overrides my concentration. Your tutorial pace, live coding, and narrative helped me complete my first OpenAI-coded app. Thank you!
Same
Don't use ADHD as an excuse, it ain't no sickness, just personality. Take it and make it your best quality.
@nicholastroyandersen9505 It's… not a personality, lmfao. It's a very clear set of learning disabilities centered around working memory, executive function, and tuning out.
Same, and without knowing English.
@nicholastroyandersen9505 It's literally a neurological condition that can be seen on scans and measured… ignorant comment.
Super high quality video right here. Good job Adrian
Hey I've seen your stuff too, it's great, thanks for the nice words!
I have seen multiple tutorials; this is by far the best and most concise. Great work, man.
I rarely comment, but damn, you did a perfect job. I'm at 8:01 and haven't finished the video yet, but I had to pause and comment: up to 8:01, everything has been perfect. The way you explain concepts and use the tools makes sure we understand them in practice with ease! Great job, keep making videos; you should do consulting if you don't already. With your skills and knowledge, it's easy money for a few hours of work!
Incredible teaching skills. For the first time ever, I've found someone who can teach ME the way I always wanted. A thousand thumbs up, Adrian!!
I just bought two Udemy courses, and after 5 hours, neither of them explains this as well. I appreciate it, and I will buy your book. Thanks for your content.
Adrian, your channel is a gem! I love the way you explain complex topics and the pace of your videos! Greetings from Poland!
Let me see if I understand what’s going on here:
1) you have data you want to search semantically
2) you create a vector database capable of storing data & handling semantic search queries
3) you use OpenAI to process your data & convert it to vectors, which can be stored in your database
4) you store the data along with the OpenAI generated vectors
5) now you can search the data
Is that all it is? I thought you were then going to leverage this database to give ChatGPT "long term memory" (0:20). What you've shown seems nice, but I don't really see the point, since most people/companies with enough data to need this kind of querying wouldn't be able to hand it over to OpenAI to process.
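For reference, here is a minimal sketch of those five steps end to end. It is not the video's exact code: it assumes the openai Python package (v1+), a SingleStore-style table called myvectortable with text and vector (BLOB) columns, and pymysql for the connection; all of those names are illustrative.

```python
# Minimal sketch of the embed -> store -> search flow described above.
# Table, column, and connection details are illustrative, not the video's exact code.
import json

import pymysql
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def embed(text: str) -> list[float]:
    """Ask OpenAI for the embedding vector of one piece of text."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return resp.data[0].embedding


conn = pymysql.connect(host="localhost", user="admin", password="secret", database="vectors")

with conn.cursor() as cur:
    # Steps 3-4: convert each text to a vector and store both together.
    # JSON_ARRAY_PACK is SingleStore's way of packing a JSON array into a BLOB.
    for text in ["Hello World", "OpenAI vectors and embeddings are easy!"]:
        cur.execute(
            "INSERT INTO myvectortable (text, vector) VALUES (%s, JSON_ARRAY_PACK(%s))",
            (text, json.dumps(embed(text))),
        )
    conn.commit()

    # Step 5: embed the query too, then rank rows by dot-product similarity.
    query_vec = json.dumps(embed("hello earth"))
    cur.execute(
        "SELECT text, DOT_PRODUCT(vector, JSON_ARRAY_PACK(%s)) AS score "
        "FROM myvectortable ORDER BY score DESC LIMIT 3",
        (query_vec,),
    )
    for row_text, score in cur.fetchall():
        print(f"{score:.2f}  {row_text}")
```

Note that the texts are sent to OpenAI only to be converted into vectors; the stored data and the search itself stay in your own database.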
What do you mean by "give it away to OpenAI"? Is everything shared with OpenAI accessible by their internal team or something? I'm pretty sure you can opt out of having your data used to train their AI... at least that's the case with their chatbots.
You just need to code in chat logging that chunks the logs after they exceed the AI's short-term memory.
You can also dynamically compress the logs to achieve higher efficiency.
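A rough sketch of that chunk-and-compress idea (the 3,000-token budget, the 4-characters-per-token estimate, and the summarize helper are all placeholders, not anything from the video):

```python
# Sketch: keep recent chat turns verbatim and fold the oldest ones into summaries
# once the log exceeds a rough token budget. Budget, token estimate, and the
# summarize() callback are placeholders.
MAX_TOKENS = 3000


def rough_tokens(text: str) -> int:
    # Crude estimate (~4 characters per token); use a real tokenizer for accuracy.
    return len(text) // 4


def compact_log(turns: list[str], summarize) -> list[str]:
    """Compress the chat log until it fits the budget."""
    total = sum(rough_tokens(t) for t in turns)
    while total > MAX_TOKENS and len(turns) > 2:
        oldest_pair = turns[:2]
        summary = summarize("\n".join(oldest_pair))   # e.g. a cheap LLM call
        turns = ["[summary] " + summary] + turns[2:]
        total = sum(rough_tokens(t) for t in turns)
    return turns
```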
Nice, high-quality video with a clear explanation of the concepts. This video is engaging for learners. I would say it's one of the best videos out there on vector embeddings. Good job, Adrian.
This is by far the easiest and most concise explanation. Thanks for creating this video.
Love your thumbnails. Keeps getting better with each video 👍
Thanks, I try to make them reflect what the video covers as clearly as possible!
Bare metal: removing all the higher-level abstractions and going right down to the core. I love it. This is the best explanation of what embeddings are that I have seen. Great job.
That isn't a vector database. It's a relational database with vectors stored in a text column. In practice, you will have thousands of embeddings, and performance will tank with this setup.
What's a more ideal solution for storing vectors?
From my investigation, Redis is an excellent vector store for both development and production, especially when it's a local Dockerized instance.
MongoDB Atlas is awesome for vectors. They have a new vector search feature called knnBeta.
Pinecone works too!
You are correct, but you know that! Its indexing is not fast enough for many serious AI projects, and its single-threaded architecture does not scale. Under the hood there are many other non-vector legacy issues.
This was a great overview Adrian!
Excellent overview! Very concise, clear and relevant! Great job! Thank you Adrian! 😊
This is the best video on openai embeddings I have ever seen, I am also a bit biased!
Very good session, Adrian... your way of teaching keeps people glued. Keep it up.
This is awesome, perfect video for non-beginner developers to quickly grasp.
Learning vector embeddings from first principles: always engaging, and very rewarding for the learner. Thank you!
Awesome, thanks.
I've been studying calculus and linear algebra before diving deep into AI. I will definitely be dealing with vector databases very soon and am looking forward to it.
The voice recording and explanation are really clear. It's surprising how much tone and voice play a role in understanding. I was watching another video that was equally good, but somehow the slang and the recording made it a bit difficult to understand. Thanks.
Thank you, @AdrianTwarog. I wanted to learn how to store and retrieve embeddings in a vector database, and this video helped me with that. The missing bit is how to use the retrieved embeddings for retrieval-augmented generation.
I wish everyone presented like you. Simply super. Looking forward to more in a similar style.
Also, in Postman you can use the "Tests" tab, which lets you write a script that builds a string from the request input and the response data. Automate it (if you need to)!
Best AI video ever. Made it easy to understand with 2 simple concepts. Thanks, man!
DaRabase! I cannot watch any more, even if this will elevate my skills and work!!!
This course is gold! Thanks! I have done similar steps on Astra DB and it was smooth.
OMG, thanks for this video, very straightforward and easy to understand. Thanks!
Nice video. It would have been nice to have a demonstration at the end or in the intro. Keep up the good work.
Oh good suggestion, I’ll do that next time!!
Amazing tutorial! The way you explain is so easy and understandable!
Would be great to see a follow up video of practical applications using this.
The practical applications are varied:
sentiment analysis
term search
classification
Adrian, this is beautifully explained. Absolutely loved it :)
Thanks for sharing. This was a great video that clearly illustrates vector DBs, embeddings, and searching.
This tutorial was incredible - completely glued to it
Absolutely LOVE this. you're so clear and concise.
Good video on the basics of creating embeddings & vector DB
Damn, great crash course! Thanks a lot Adrian
Simple, concise, and has everything in it. Thank You
Finally, found a video with the appropriate detail. For me! 😊 Thank you!
I like this video and I don't mind all the upselling. My only complaint is that if I pause the video for too long, it automatically sends me to another video in the series, which makes it hard to get back to where I was. You might assume it's user error, but it isn't. The automatic transfer and loss of context happen constantly with this YouTube video, and I've never had the problem with any other YouTube tutorial. I'm fine with the monetizing and upselling since it helps reward the content creator; I just wish it wouldn't keep making me lose my place in the tutorial.
For those who already had an OpenAI account and are facing an error while posting the HTTP request: it's because your free credit has expired. You will have to add a payment method or create a new account to get free credits again, and then everything will work as shown in this tutorial.
Great. So basically, if I have to create an LLM for my company, which has multiple documents and other content, I need to:
1. Pass all the documents to OpenAI and get embeddings
2. Store all the embeddings in a DB
3. Create an app to search the vector DB
But my question is how it can think and reason. The above approach is great for search capability, but how does it handle things like summarization, comprehension, etc.?
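The usual answer to the thinking/reasoning part is retrieval-augmented generation: the vector search only finds the relevant chunks, and a chat model then summarizes or reasons over them. A minimal sketch, assuming a search_chunks() helper like the app from step 3 would provide (the helper name and the model choice are illustrative):

```python
# Retrieval-augmented generation sketch: search the vector DB first, then let a
# chat model summarize / reason over whatever was retrieved. search_chunks() is
# a placeholder for the search app from step 3.
from openai import OpenAI

client = OpenAI()


def answer(question: str, search_chunks) -> str:
    chunks = search_chunks(question, top_k=5)      # top-matching text chunks
    context = "\n\n".join(chunks)
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": "Answer using only the provided context. "
                           "Summarize or reason over it as needed.",
            },
            {
                "role": "user",
                "content": f"Context:\n{context}\n\nQuestion: {question}",
            },
        ],
    )
    return resp.choices[0].message.content
```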
Wow, thanks I'm finally starting to get embeddings!
Wow! Easy, clear and to the point.
Well done, succinct, and excellent explanations of complex topics.
Will you create a second part of this video where PDFs are uploaded and then analyzed?
This is great. I had to learn this in a crunch and I grok it now.
Great content 👍👍👍, waiting for more OpenAI, AI related content
Loved this tutorial, Adrian; very straightforward, and it worked the first time, unlike some others I've tried. Now for my question. I'm seeing this in February 2024. I did not know about ChatGPT, Bard, and those other AI apps until they hit the common pool that I must swim in. I take it that vectorizing documents has been going on for a while outside of the math world; I knew of vectors back in college from linear algebra. If that's the case, what I'm trying to do will not be new. I'm trying to vectorize my documents in order to practice doing this kind of work. So, are there IT companies out there already doing this type of work, and can you name a few? How far have they gotten? Has someone already done the Library of Congress, for instance?
Nice work! Thanks so much for this awesome demo.
Use a dark theme, because I watch this video at night. It hurts my eyes 👀
This video is very well explained
How efficient is the vector search if you need to go through all of the records every time you search? Shouldn't there be a dedicated field type for embeddings, rather than a BLOB?
Nice tutorial. I have a question: for code completion, which extension do you use?
Love it.. it was far simpler than I thought..
Brilliant, super simple, and very easy to understand.
[Question] When the input was "hello earth", "Hello World" scored 0.89, while "OpenAI Vectors and Embeddings are Easy!" scored 0.74, which is quite close to the top-ranked text. But the first and second returned texts are very different. Somehow I expected the second text to score 0.5 or below.
Could you please share your thoughts on this, Adrian?
Thank you!
You would need to ask someone who built the transformers at OpenAI.
Great tutorial man! thank you!
Perfect learning ❤🎉 master of learning ❤❤❤❤
How in the world did it get a 0.74 score (which is pretty high on a scale from 0 to 1!) for the similarity of "Hello Earth" and "OpenAI vectors and embeddings are easy"? Is there anything in common between the two?
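Both of the score questions above come down to how these embeddings behave: vectors from the same model tend to sit in a fairly narrow region of the space, so even unrelated texts often score around 0.7 to 0.8, and it's the ranking (0.89 vs 0.74) that carries the signal rather than the absolute number. If you want to check scores yourself, cosine similarity is just this (a small sketch; it assumes you already have the two embedding lists):

```python
# Cosine similarity between two embedding vectors (plain Python, no dependencies).
import math


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Example with toy vectors; real embeddings have on the order of 1,500 dimensions.
print(cosine([0.1, 0.9, 0.2], [0.2, 0.8, 0.1]))
```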
Great tutorial!!! I will be buying your book.
Very interesting video, thank you.
Absolutely amazing! Thank you so much for your work!
This tutorial is well explained. Thanks for that. But could you explain how to do this at scale? Is there a no-code tool that companies can use to store their data in a vector database, and to retrieve the info later?
It seems there must be easier solutions for this, right? (While also keeping it safe to use.)
Fantastic tutorial and explanation!!
Great work presenting this!
Do you happen to know how similar or different this is from what Elasticsearch does when performing full-text search?
Great work! How do you make these nice presentations with the fancy arrows?
Thank you! Great walk through
thank you very much! super useful!
Very interesting video, but what are the prerequisites to understand and actually implement this?
I have followed along to the 15-minute mark so far. How come the scores are all fairly high, even when the search terms are not present in the database? My database included a quote from HHGG about the importance of towels to hitchhikers. There were also two other rows of data containing no mention of towels. When I searched for the word "towel", the top match was 85% because it was the quote that contained the towel reference. Great! But I don't understand why the other scores were 75% and 73% when there was no mention of towels. If this were a traditional text search, those rows would not have been returned at all.
Excellent. Thank you. Helped a lot.
Isn't calculating the modulus of the difference of the vectors a more accurate way to find similarities?
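For OpenAI's embeddings the two measures agree: the vectors come back normalized to length 1, and for unit vectors the squared modulus of the difference is just a function of the cosine, so both give the same ranking (shown below under that unit-length assumption).

```latex
\|a - b\|^{2} = \|a\|^{2} + \|b\|^{2} - 2\,a \cdot b = 2\,(1 - \cos\theta)
\quad \text{when } \|a\| = \|b\| = 1
```

So sorting by Euclidean distance ascending produces exactly the same order as sorting by cosine similarity descending; cosine (or the dot product) is simply the cheaper and more conventional choice.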
I get this message when I run the API. Do you need to pay OpenAI for it to work? Thanks! "error": {"message": "You exceeded your current quota, please check your plan and billing details.",
Me too. Did you find a solution?
Thanks for the tutorial. Can we use our own LLM, like PrivateGPT or Text Generation Web UI, instead of OpenAI?
Does the chunk size have an effect on the quality or accuracy of the search results? Let's say I split a document into single words AND into 200-word chunks, and the vector results are stored in a vector DB.
Great explanation! Thanks!!
Cool course. How does one connect it to a basic website?
Perfect explanation!
Bought the book. It ended on page 54; is there anything after page 54, up to page 58?
The last example was OpenAI fine-tuning.
It leaves the fine-tune up on the OpenAI site.
How long will it be available there?
Can it be brought down locally and used in the future, locally in combination with a cloud model?
I’ll double check, and any updates will automatically be enabled on Gumroad!
@AdrianTwarog How do you automate text importation with SQL? Must one enter each text blob manually?
Very well explained
Brilliant stuff!
Excellent content, what changes for audio search?
Very well explained, thanks Adrian!! I have a staffing firm and a database of more than a million resumes. I'm planning to create a resume search application for my recruiters. Do you think I should be using a combination of embeddings and a vector database for the above use case?
Nice video. I love your work.
In the past, I learned Support Vector Machines for classification. At the time, I struggled with the concept, although I was eventually able to implement it in a program using code written by another party. The introduction of this video suddenly revived that memory and helped me better understand the concept of SVMs that I learned years ago.
Is Postman completely free and usable without any restrictions or limitations? Is SingleStore also completely free without any restrictions or limitations?
What are the prerequisites to understand the content in this video? And where can I learn them?
This is great stuff, thanks.
I'm a little confused. If I create embeddings, which I'm assuming is essentially training the OpenAI model on a specific topic for my company, would it be able to answer questions only on the specific topic it was trained for?
Interesting that the expected vector array was placed in the SELECT section rather than the WHERE section.
How would I go about weighting the results by other metadata? Say I have a bunch of videos, and I'm searching the title/description, but I want to give some amount of preference to newer videos too.
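One common pattern is to compute the similarity as a score in the query (which is also why the query vector lives in the SELECT list rather than WHERE) and blend it with a metadata term, such as a recency boost, then rank by the combined value. A sketch assuming a SingleStore-style table; the table, columns, and the 0.9/0.1 weights are all illustrative:

```python
# Blend semantic similarity with a recency boost and rank by the combination.
# DOT_PRODUCT / JSON_ARRAY_PACK are SingleStore built-ins; everything else here
# (table, columns, weights) is an illustrative placeholder.
import json

WEIGHTED_SEARCH_SQL = """
    SELECT title,
           DOT_PRODUCT(vector, JSON_ARRAY_PACK(%s)) AS similarity,
           1 / (1 + DATEDIFF(NOW(), published_at) / 365) AS recency
    FROM videos
    ORDER BY 0.9 * DOT_PRODUCT(vector, JSON_ARRAY_PACK(%s))
           + 0.1 * (1 / (1 + DATEDIFF(NOW(), published_at) / 365)) DESC
    LIMIT 10
"""


def search_weighted(cursor, query_embedding: list[float]):
    packed = json.dumps(query_embedding)
    cursor.execute(WEIGHTED_SEARCH_SQL, (packed, packed))
    return cursor.fetchall()
```

Tuning the weights (or normalizing the recency term differently) is where most of the experimentation goes.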
excellent. thx!
Crisp and to the point, thank you. Can I ask how you made the slides like the one at 0:52?
Yes, but how do you save a vector store? I.e., export it to JSON for upload or fine-tuning into the main LM?
fantastic job!
Wow, great video, sir. Helped a lot. May I know what extension is being used at 16:40?
GitHub Copilot
OK, it is a good video on using OpenAI to create embeddings via an API. But let's say that next week OpenAI's buildings are destroyed by a meteor! I still want to create embeddings on my dev server. Is there software I can download and run locally that I can use until a meteor crashes on my house?
You can run an embeddings model locally, with Ollama or one of Meta's embedding models, but those require fairly high-spec machines with GPUs.
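For a fully local option, here is a hedged sketch using the sentence-transformers library; the model named below is a small, CPU-friendly default, and note that its vectors live in a different space than OpenAI's, so they can't be mixed in the same table.

```python
# Local embeddings with sentence-transformers: no API calls, runs on CPU for
# small models. These vectors are not interchangeable with OpenAI's.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly default

texts = ["Hello World", "Hello Earth"]
vectors = model.encode(texts, normalize_embeddings=True)

print(vectors.shape)                    # (2, 384): 384-dimensional embeddings
print(float(vectors[0] @ vectors[1]))   # cosine similarity (already normalized)
```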
This is very useful. Could you also do embeddings of CSV files? I have files with up to 5 million rows.
this was awesome thx