That was the first video that actually gave me an understanding of how vector DBs kind of work. Thank you for sharing.
Key word being, "kind of" 😂😂
My ADHD normally overrides my concentration. Your tutorial pace, live coding, and narrative made me complete my 1st OpenAI-coded app - thank you!
Same
Don't use ADHD as an excuse, it ain't no sickness, just personality. Take it and make it your best quality.
@nicholastroyandersen9505 It's… not a personality, lmfao. It's a very clear set of learning disabilities centered around working memory, executive function, and tuning out.
Same, and without knowing English.
@nicholastroyandersen9505 It's literally a neurological condition that can be seen on scans and measured… ignorant comment.
Super high quality video right here. Good job Adrian
Hey I've seen your stuff too, it's great, thanks for the nice words!
I have seen multiple tutorials, this is by far the best and most concise, great work man
Incredible teaching skills. First time ever, I loved someone who can teach "ME" the way I always wanted. Thousand thumbs up Adrian!!
That isn't a vector database. It's a relational database with vectors stored in a text column. In practice, you will have thousands of embeddings, and performance will tank with this setup.
What's a more ideal solution for storing vectors?
From my investigation, Redis is an excellent vector store for both development and production, especially when it's a local Dockerized instance.
MongoDB Atlas is awesome for vectors. They have a new vector search feature called knnBeta.
Pinecone works too!
You are correct, but you know that! Its indexing is not fast enough for many serious AI projects, and its single-threaded architecture does not scale. Under the hood there are many other non-vector legacy issues.
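For anyone wondering what a dedicated vector store looks like in practice, here is a rough sketch against Redis Stack via redis-py's search module. The index name, key prefix, and HNSW parameters are my own assumptions, and the exact API can vary by version:

```python
# Rough sketch: a dedicated vector field + KNN query in Redis Stack (redis-py >= 4.4).
# Index name, key prefix, and dimensions here are illustrative assumptions.
import numpy as np
import redis
from redis.commands.search.field import TextField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import Query

r = redis.Redis(host="localhost", port=6379)

# Create an index with an HNSW vector field (1536 dims matches text-embedding-ada-002)
r.ft("docs").create_index(
    fields=[
        TextField("content"),
        VectorField("embedding", "HNSW", {
            "TYPE": "FLOAT32",
            "DIM": 1536,
            "DISTANCE_METRIC": "COSINE",
        }),
    ],
    definition=IndexDefinition(prefix=["doc:"], index_type=IndexType.HASH),
)

# Store one document; the embedding is packed as raw float32 bytes
embedding = np.random.rand(1536).astype(np.float32)  # stand-in for a real OpenAI embedding
r.hset("doc:1", mapping={"content": "Hello World", "embedding": embedding.tobytes()})

# Approximate nearest-neighbour query (no full table scan)
query_vec = np.random.rand(1536).astype(np.float32)
q = (
    Query("*=>[KNN 3 @embedding $vec AS score]")
    .sort_by("score")
    .return_fields("content", "score")
    .dialect(2)
)
results = r.ft("docs").search(q, query_params={"vec": query_vec.tobytes()})
print([(doc.content, doc.score) for doc in results.docs])
```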
Let me see if I understand what’s going on here:
1) you have data you want to search semantically
2) you create a vector database capable of storing the data & answering semantic search queries
3) you use OpenAI to process your data & convert it to vectors which can be stored in your database
4) you store the data along with the OpenAI generated vectors
5) now you can search the data
Is that all it is? I thought you were then going to leverage this database to give ChatGPT "long term memory" ( 0:20 ). What you've shown seems nice, but I don't really see the point, since most people/companies who have enough data that would need to be queried in this way would not be able to give it away to OpenAI to process.
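Pretty much, yes - that is the storage/search half. A minimal sketch of those five steps in Python, assuming the standard OpenAI embeddings endpoint and an in-memory list standing in for the vector database (the model name and structure are illustrative, not necessarily what the video uses):

```python
# Minimal sketch of the 5 steps above; the in-memory list stands in for a real vector DB.
import os
import requests
import numpy as np

OPENAI_KEY = os.environ["OPENAI_API_KEY"]

def embed(text: str) -> np.ndarray:
    # Step 3: convert text to a vector with OpenAI's embeddings endpoint
    resp = requests.post(
        "https://api.openai.com/v1/embeddings",
        headers={"Authorization": f"Bearer {OPENAI_KEY}"},
        json={"model": "text-embedding-ada-002", "input": text},
    )
    resp.raise_for_status()
    return np.array(resp.json()["data"][0]["embedding"])

# Steps 1, 2, 4: store the original text alongside its vector
database = []
for text in ["Hello World", "OpenAI Vectors and Embeddings are Easy!"]:
    database.append({"text": text, "vector": embed(text)})

# Step 5: semantic search by cosine similarity
def search(query: str, top_k: int = 3):
    q = embed(query)
    scored = [
        (float(np.dot(q, row["vector"]) / (np.linalg.norm(q) * np.linalg.norm(row["vector"]))), row["text"])
        for row in database
    ]
    return sorted(scored, reverse=True)[:top_k]

print(search("hello earth"))
```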
What do you mean, "give it away to OpenAI"? Is everything shared with OpenAI accessible by their internal team or something? I'm pretty sure you can opt out of having your data used to train their AI... at least that's the case with the chatbots.
You just need to code in chat logging that chunks the logs after they exceed the AI's short-term memory.
You can also dynamically compress the logs to achieve higher efficiency.
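A rough sketch of that chunking idea, assuming a crude characters-per-token estimate (a real implementation would count tokens with a proper tokenizer such as tiktoken):

```python
# Sketch: keep only the most recent chat turns that fit a rough "short-term memory" budget.
# The 4-chars-per-token estimate is an assumption; use a real tokenizer in practice.
MAX_CONTEXT_TOKENS = 4000

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def trim_chat_log(messages: list[str]) -> tuple[list[str], list[str]]:
    """Return (messages that still fit, overflow to be archived or summarised)."""
    kept, used = [], 0
    for msg in reversed(messages):          # newest first
        cost = estimate_tokens(msg)
        if used + cost > MAX_CONTEXT_TOKENS:
            break
        kept.append(msg)
        used += cost
    kept.reverse()
    overflow = messages[: len(messages) - len(kept)]
    return kept, overflow

# Overflow chunks could then be embedded and stored in the vector DB,
# or "compressed" by asking the model to summarise them.
```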
This is by far the easiest & most concise explanation. Thanks for creating this video.
Adrian, your channel is a gem! I love the way you explain complex topics and the pace of your videos! Greetings from Poland!
This was a great overview Adrian!
Rarely comment, but damn, you did a perfect job - I am at 8:01, haven't finished the video but had to pause and comment - up to 8:01, everything was perfect; how you explain concepts and utilize tools ensures that we understand the concept in practice with ease! Great job, continue making videos; you should do consulting if you don't already do so. It's easy money with few hours given your skills and knowledge!
Bare metal, removing all higher-level abstractions and going right down to the core. I love it - the best explanation of what embeddings are that I have seen. Great job.
This is awesome, perfect video for non-beginner developers to quickly grasp.
I just bought 2 Udemy courses, and after 5 hours, neither of them explains this as well. I appreciate it, and I will buy your book. Thanks for your content.
Love your thumbnails. Keeps getting better with each video 👍
Thanks, I try to make them as true to what the video represents as possible!
Nice, high-quality video with a clear explanation of the concepts. This video is engaging for learners. I would say it's one of the best videos out there on vector embeddings. Good job, Adrian.
Excellent overview! Very concise, clear and relevant! Great job! Thank you Adrian! 😊
Thank you, @AdrianTwarog. I wanted to learn how to store and retrieve embeddings in a vector database. This video helped me with that. The missing bit is how to use the retrieved embeddings for retrieval-augmented generation.
This is the best video on openai embeddings I have ever seen, I am also a bit biased!
Very good session, Adrian... your way of teaching keeps people glued... Keep it up.
Omg, thanks for this video, very straight forward and easy to understand. Thanks!
I wish everyone presented like you, simply super. Looking forward to more in a similar style.
Also, you can use Postman's "Tests" tab, which can help you write a script that builds a string from the request input and response data. Automate it (if you need to)!
Good video on the basics of creating embeddings & vector DB
Learn vector embeddings using first principles. Always engaging, and very rewarding for the learner. Thank you!
How efficient is the vector search if you need to go through all of the records every time you search? Shouldn't there be some dedicated field type for embeddings other than blob?
Thanks for sharing. This was a great video that clearly illustrates vector DBs, embeddings, and searching.
Would be great to see a follow up video of practical applications using this.
The practical applications are varied:
sentiment analysis
term search
classification
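As a small illustration of the classification case, here is a hedged sketch of nearest-centroid classification on top of embeddings, reusing an embed() helper like the one in the workflow sketch further up (the labels and example texts are made up):

```python
# Sketch: classify a new text by comparing its embedding to per-label centroids.
# Assumes an embed(text) -> np.ndarray helper that calls the OpenAI embeddings endpoint.
import numpy as np

labeled_examples = {
    "positive": ["I love this product", "Great service, very happy"],
    "negative": ["Terrible experience", "I want a refund"],
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Build one centroid vector per label from a handful of labelled examples
centroids = {
    label: np.mean([embed(t) for t in texts], axis=0)
    for label, texts in labeled_examples.items()
}

def classify(text: str) -> str:
    v = embed(text)
    return max(centroids, key=lambda label: cosine(v, centroids[label]))

print(classify("Absolutely fantastic, would buy again"))  # expected: "positive"
```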
[Question] When the input is "hello earth", "Hello World" scored 0.89, while "OpenAI Vectors and Embeddings are Easy!" scored 0.74, which is quite close to the top-ranked text. But syntactically the first and second returned texts are very different. I expected the second text to score 0.5 or below.
Could you please share your thoughts on this Adrian?
Thank you!
You would need to ask someone who built the transformers at OpenAI.
Adrian, this is beautifully explained. Absolutely loved it :)
This tutorial was incredible - completely glued to it
The voice recording and explanation are really clear - surprising how tone and voice play a major role in understanding. I was watching another video which was equally good, but somehow the slang and recording made it a bit difficult to understand. Thanks.
How in the world did it get a 0.74 score (which is pretty high on a scale of 0 to 1!) for the similarity of "Hello Earth" and "OpenAI vectors and embeddings are easy"? Is there anything in common between the two?
Nice video; it would have been nice to see a demonstration at the end or in the intro. Keep up the good work.
Oh good suggestion, I’ll do that next time!!
Simple, concise, and has everything in it. Thank You
Best AI video ever . Made it easy to understand with 2 simple concepts . Thanks man!
Absolutely LOVE this. you're so clear and concise.
Amazing tutorial! The way you explain is so easy and understandable!
This course is gold! Thanks! I have done similar steps on Astra db and it was smooth
Awesome thanks.
Been studying calculus and linear algebra before I dive deep into AI. I will definitely be dealing with vector databases very soon and looking forward to it.
For those who already had an OpenAI account and are facing an error while posting the HTTP request, it's because your free credit has expired. You will have to add a payment method or create a new account to get free credits again, and then everything will work fine according to this tutorial.
I get this message when I run the API. Do you need to pay OpenAI for it to work? Thanks! "error": {"message": "You exceeded your current quota, please check your plan and billing details.",
Me too, you found a solution?
Nice tutorial. I have a question: which extension do you use for code completion?
I like this video and I don't mind all the upselling. My only complaint is that if I pause the video for too long, it automatically sends me to another video in the series, which makes it hard to get back to where I was. You might assume it is user error, but it isn't. The automatic transfer and loss of context happens constantly with this YouTube video, and I've never had the problem with any other YouTube tutorial. I'm fine with the monetizing and upselling since it helps reward the content creator; I just wish it wouldn't keep making me lose my place in the tutorial.
Isn't calculating the modulus of the difference of the vectors (the Euclidean distance) a more accurate way to find similarities?
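For unit-length vectors the two give the same ranking, since ||a - b||^2 = 2 - 2*cos(a, b), and OpenAI's embeddings are (per their docs) normalised to length 1, so sorting by smallest Euclidean distance and by largest cosine similarity produces the same order. A quick numeric check:

```python
# Quick check that Euclidean distance and cosine similarity agree for unit vectors.
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.random(1536), rng.random(1536)
a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)   # normalise to length 1

cos = np.dot(a, b)
dist_sq = np.sum((a - b) ** 2)
print(dist_sq, 2 - 2 * cos)   # the two values match: ||a-b||^2 == 2 - 2*cos
```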
Wow, thanks I'm finally starting to get embeddings!
Wow! Easy, clear and to the point.
Great
So basically, if I have to create an LLM-based app for my company, which has multiple documents and content, I need to:
1. Pass all the documents and get embeddings from OpenAI
2. Store all the embeddings in a DB
3. Create an app to search the vector DB
But my question is how it can think and reason. The above approach gives great search capability, but how does it do things like summarization, comprehension, etc.?
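The thinking part is usually a second step: retrieve the most relevant chunks from the vector DB, then hand them to a chat model as context (retrieval-augmented generation), and the chat model does the summarisation/comprehension. A hedged sketch, assuming the standard chat completions endpoint and a search() helper like the one sketched earlier in the comments:

```python
# Sketch of retrieval-augmented generation: search the vector DB, then let a chat model
# reason over the retrieved chunks. Assumes the search() helper sketched earlier.
import os
import requests

OPENAI_KEY = os.environ["OPENAI_API_KEY"]

def answer(question: str) -> str:
    # 1) retrieval: pull the top matching chunks out of the vector DB
    context = "\n\n".join(text for _score, text in search(question, top_k=5))

    # 2) generation: the chat model summarises/reasons over that context
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_KEY}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system", "content": "Answer using only the provided context."},
                {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
            ],
        },
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```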
Will you create a second part of this video where PDFs are uploaded and then analyzed?
Does the chunk size have an effect on the quality or accuracy of the search results? Let's say I split a document into individual words AND into 200-word chunks. The vector results are stored in a vector DB.
Finally, found a video with the appropriate detail. For me! 😊 Thank you!
Brilliant super simple and very easy to understand.
Bought the book. It ended on page 54; is there anything on pages 54 to 58?
The last example was OpenAI fine-tuning.
It leaves the fine-tuned model up on the OpenAI site.
How long will it be available there?
Can it be brought down locally and used in the future in combination with a cloud model?
I’ll double check, and any updates will automatically be enabled on Gumroad!
@AdrianTwarog How do you automate text import with SQL? Must one enter each text blob manually?
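You can script the whole import. A rough sketch, assuming SingleStore's MySQL-compatible protocol (so pymysql works), a table like embeddings(text TEXT, vector BLOB), and an embed() helper as sketched earlier - the table and column names are mine, not from the video:

```python
# Sketch: loop over texts, embed each one, and insert them in bulk instead of by hand.
# Assumes a MySQL-compatible connection (SingleStore speaks the MySQL protocol) and
# a table like: CREATE TABLE embeddings (text TEXT, vector BLOB);
import json
import pymysql

conn = pymysql.connect(host="localhost", user="admin", password="...", database="vectors")

texts = ["Hello World", "OpenAI Vectors and Embeddings are Easy!", "Pizza is a food"]

with conn.cursor() as cur:
    for text in texts:
        vector = embed(text)  # embed() helper from the earlier sketch
        cur.execute(
            "INSERT INTO embeddings (text, vector) VALUES (%s, JSON_ARRAY_PACK(%s))",
            (text, json.dumps(vector.tolist())),
        )
conn.commit()
```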
what are the prerequisites to understand the content in this video? And where can I learn them?
Nice work! Thanks so much for this awesome demo.
This tutorial is well explained. Thanks for that. But could you explain how to do this at scale? Is it possible to have a no-code tool that companies can use to store their data in a vector database? And to retrieve this info later?
It seems that there must be easier solutions for this, right? (While also keeping it safe to use.)
This is great. I had to learn this in a crunch and I grok it now.
Perfect learning ❤🎉 master of learning ❤❤❤❤
Thanks for the tutorial. Can we use our own LLM, like PrivateGPT or Text Generation Web UI, instead of OpenAI?
Great work! How do you make these nice presentations with the fancy arrows?
Loved this tutorial Adrian, very straightforward, and it worked the first time, not like some others I've tried. Now for my question. I'm seeing this in February 2024. I did not know about ChatGPT, Bard and those other AI apps until they hit the common pool that I must swim in. I take it that vectorizing documents has been going on for a while, outside of the math world. I knew of vectors back from college linear algebra. If this is the case, what I'm trying to do will not be new. I'm trying to vectorize my documents in order to practice doing this kind of work. So, are there IT companies out there doing this type of work already, and can you name a few? How far have they gotten? Has someone already done the Library of Congress, for instance?
Great work presenting this!
Do you happen to know how similar or different this is from what Elasticsearch does when performing full-text search?
Great tutorial!!! I will be buying your book.
Yes, but how do you save a vector store? I.e., export it to JSON for upload or fine-tuning into the main LM?
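If you keep the raw text next to each embedding, exporting is just serialising the pairs; a tiny sketch is below (note that an exported store gets reloaded and searched at query time - embeddings are not fine-tuned into the base model):

```python
# Sketch: dump an in-memory store (text + embedding pairs) to JSON and load it back.
import json
import numpy as np

def export_store(database, path="vector_store.json"):
    rows = [
        {"text": row["text"], "embedding": np.asarray(row["vector"]).tolist()}
        for row in database
    ]
    with open(path, "w") as f:
        json.dump(rows, f)

def load_store(path="vector_store.json"):
    with open(path) as f:
        return [{"text": r["text"], "vector": np.array(r["embedding"])} for r in json.load(f)]
```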
Well done, succinct, and excellent explanations of complex topics.
How do you interact with non-text content, like images in a document?
I'm a little confused. If I create embeddings, which I'm assuming is essentially training the OpenAI model on a specific topic for my company, would it be able to answer questions only on the specific topic it was trained for?
Great tutorial man! thank you!
How can I train my own AI using TensorFlow to generate images and text?
hmmm, maybe I need to make a video on this?!
@AdrianTwarog Please, sir, and sorry for the late reply.
Very interesting video, but what are the prerequisites to understand & actually implement this ?
Perfect explanation!
Is dot_product a function offered by this database for vector searching, ranking, etc.?
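Yes - in SingleStore, DOT_PRODUCT (paired with JSON_ARRAY_PACK) does the similarity scoring, and ORDER BY does the ranking. A rough sketch of such a query from Python; the table and column names are assumptions carried over from the import sketch above:

```python
# Sketch: rank stored rows by dot product against the query embedding in SingleStore.
# Assumes the same embeddings(text, vector BLOB) table as in the import sketch,
# and the embed() helper from the earlier workflow sketch.
import json
import pymysql

conn = pymysql.connect(host="localhost", user="admin", password="...", database="vectors")
query_vector = embed("hello earth")

with conn.cursor() as cur:
    cur.execute(
        """
        SELECT text, DOT_PRODUCT(vector, JSON_ARRAY_PACK(%s)) AS score
        FROM embeddings
        ORDER BY score DESC
        LIMIT 5
        """,
        (json.dumps(query_vector.tolist()),),
    )
    for text, score in cur.fetchall():
        print(score, text)
```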
Love it.. it was far simpler than I thought..
How would I go about weighting the results by other metadata? Say I have a bunch of videos, and I'm searching the title/description, but want to give some amount of preference to newer videos too.
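One common approach is to blend the similarity score with a recency score and sort on the weighted sum; the 0.8/0.2 weights and the 30-day half-life below are arbitrary assumptions just to illustrate the shape of it:

```python
# Sketch: combine semantic similarity with a recency boost when ranking videos.
import time

def recency_score(published_ts: float, half_life_days: float = 30.0) -> float:
    """1.0 for brand-new items, decaying towards 0 as they age."""
    age_days = (time.time() - published_ts) / 86_400
    return 0.5 ** (age_days / half_life_days)

def rank(results, sim_weight=0.8, recency_weight=0.2):
    # results: list of dicts with "similarity" (0..1) and "published_ts" (unix time)
    return sorted(
        results,
        key=lambda r: sim_weight * r["similarity"] + recency_weight * recency_score(r["published_ts"]),
        reverse=True,
    )
```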
Brilliant stuff!
2:30 So he's saying they interviewed the other 2 criminals but haven't decided to charge them... So only the homeowner is being charged. Am I hearing him right?
Very interesting video , thank you
How does SingleStore understand the embeddings returned from OpenAI and search them correctly in its vector DB?
Thank you! Great walk through
Fantastic tutorial and explanation!!
Hey man, what are those fonts you've used in this video?
Mine seemed to only come up with a paid Postman version. Maybe it's based on location?
Under Downloads you can do so for free!
Great content 👍👍👍, waiting for more OpenAI, AI related content
Hi bro,
What is the extension you used in VS Code for the code suggestions?
Absolutely amazing! Thank you so much for your work!
Cool course. How does one connect it to a basic website?
Did this become obsolete with GPT builder?
Is there any way to obtain embeddings of gpts from images?
great explanation ! thanks !!
Excellent content, what changes for audio search?
What is the quickest way to feed recognition or pattern-breaking data into the system? Or just lower the AI's endorphin levels hahaha.
Very well explained
would it be possible to use this for an AI NPC for training purposes in XR space for example?
Wow, great video sir. Helped a lot. May I know what extension is being used at 16:40?
GitHub Copilot
thank you very much! super useful!
Any link to the digital book ?
Good idea, Added!
Excellent. Thank you. Helped a lot.
excellent. thx!
Nice video. I love your work.
I am looking to generate a pretty lengthy JSON of about 25k tokens. None of the LLM models currently support that many output tokens. Do you think it's possible that if I somehow get embeddings in the response, which I can later convert to JSON, my aim of generating 25k tokens could be achieved, because embeddings take up fewer tokens?