RAG Explained
- Published 21 Nov 2024
- Get the interactive demo → ibm.biz/BdmPEb
Learn about the technology → ibm.biz/BdmPEp
Oftentimes, GenAI and RAG discussions are interconnected. Learn more about what RAG is and how it works alongside your databases, LLMs, and vector databases for better results, with Luv Aggarwal and Shawn Brennan.
AI news moves fast. Sign up for a monthly newsletter for AI updates from IBM → ibm.biz/BdmP2c
The most important step here is left to the end. You can only use RAG with a transparent or locally built LLM.
Very simple and clear explanation. Cheers to IBM!
Hi, I have a few questions. Please find time to answer:
0. Are we filtering the data in the vector DB? (If yes, then:)
1. How are we filtering the relevant data from our vector DB to augment our prompt for the LLM?
1.1 Who is doing this filtering: another LLM, our own code, or some different tool?
1.2 Are we feeding the complete data as a whole to the LLM?
1.3 If we are filtering the vector data using a rule-based mechanism, then what is the use case of the LLM? How is the power of the LLM being drawn on if we are the ones deciding what relevant data to feed it?
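For what it's worth, in most RAG stacks that filtering is done by plain similarity math in retriever code, not by another LLM. A minimal sketch of the idea (function names and example data are hypothetical, and a real system would use an embedding model and an indexed vector store rather than a linear scan):

```python
import math

def cosine_similarity(a, b):
    # Similarity of two embedding vectors: 1.0 = same direction, ~0 = unrelated
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, doc_vecs, docs, k=3):
    # Score every stored chunk against the query embedding and keep
    # only the k best matches to augment the prompt with.
    scores = [cosine_similarity(query_vec, v) for v in doc_vecs]
    ranked = sorted(zip(scores, docs), key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in ranked[:k]]
```

So the "decision" is just a nearest-neighbor ranking; the LLM's power is then used to read and synthesize the retrieved text, not to pick it.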
Hi, this is my understanding:
Storing Data as Embeddings:
Correctness: Storing data (documents, images, etc.) as embeddings in a Vector DB is a valid approach. Embeddings represent the data in a high-dimensional vector space, capturing its semantic meaning.
Consideration: Ensure that the embedding model you use is appropriate for your data type. For images, you might use a different model (e.g., CLIP) compared to text embeddings.
Searching with Embeddings:
Correctness: Converting the search query into embeddings and then comparing these embeddings with those stored in your Vector DB is correct. This allows for semantic search, which is more effective than keyword-based search.
Consideration: Ensure that the conversion process and similarity calculations (e.g., cosine similarity) are implemented correctly. The returned plain text should be accurately relevant to the search query.
Summarization by LLM:
Correctness: Sending the retrieved plain text content to an LLM for summarization is appropriate. LLMs are designed to generate summaries and provide concise explanations based on the input text.
Consideration: Ensure that the LLM is correctly configured for summarization tasks. Provide clear instructions or prompts to achieve the desired summarization quality.
Returning Summarized Text to User:
Correctness: Receiving the summarized text from the LLM and returning it to the user is the final step in the process. This is standard practice for providing user-friendly summaries.
Consideration: Validate that the summarized content meets user expectations and provides accurate, meaningful information.
1. We store all our relevant data in the vector DB (documents, images, etc.) as embeddings.
2. When the user searches, the query does not hit the LLM directly; it is converted into embeddings, and the search returns the result as plain text.
3. Then we send that text to the LLM for summarization.
4. Then the LLM returns the summarized text back to the user.
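The four steps above can be sketched as one small pipeline. This is only an illustration: `embed_text`, `vector_db.search`, and `llm_summarize` are hypothetical stand-ins for a real embedding model, vector store client, and LLM API.

```python
def rag_answer(question, vector_db, embed_text, llm_summarize, k=3):
    # 1+2. The query is embedded, not sent to the LLM directly;
    #      the vector DB returns the k nearest chunks as plain text.
    query_vec = embed_text(question)
    chunks = vector_db.search(query_vec, k=k)
    # 3. The retrieved text is packed into a prompt for the LLM.
    context = "\n".join(chunks)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # 4. The LLM's summarized answer goes back to the user.
    return llm_summarize(prompt)
```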
1. The vector DB may or may not store relevant information related to the question. 2. The LLM may already have the information, or more accurate information, for the question. So RAG may not always be helpful for GenAI applications.
Hey! If I ask a RAG-based language model, "Tell me the features of the iPhone 17," what will it tell me? Will it say it doesn't know or will it hallucinate? I understand that once the iPhone 17 is released, the database will be updated to provide the correct information. But what happens if I ask about it before its release?
I can see two scenarios here: if it is indeed RAG-based, then you have provided info about the yet-to-be-released iPhone 17. So the LLM will respond based on that.
If you don't have it in your additional documents/vector DBs, then I'd recommend always adding something along the lines of "only answer with facts you have access to" to your system prompt, and setting the temperature to a low number. (Temperature is a parameter in LLMs that defines how "creative" the model can be.)
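As a concrete illustration of that advice, here is roughly what the request could look like, written as a generic chat-completion-style request body. The field names follow the common messages/temperature convention; the model name is a placeholder, so adapt this to whatever provider SDK you actually use.

```python
# A grounding system prompt plus a low temperature, expressed as a
# generic chat-completion request body (not any specific vendor's SDK).
request = {
    "model": "your-model-name",  # placeholder
    "temperature": 0.1,          # low = less "creative", less likely to hallucinate
    "messages": [
        {"role": "system",
         "content": ("Only answer with facts found in the provided context. "
                     "If the context does not contain the answer, say you don't know.")},
        {"role": "user",
         "content": ("Context:\n<retrieved chunks here>\n\n"
                     "Question: Tell me the features of the iPhone 17.")},
    ],
}
```

With this setup, a question about an unreleased product should get a "the context doesn't say" style answer rather than an invented spec sheet.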
Great question and it highlights the importance of having experts in GenAI guiding enterprises on how to implement this in a way that suits their use cases.
great quick session, thanks !!
Thought Shawn was very flirty until I realised he didn't say "love" but "Luv"
Same Here XD
Very clear explanation, thanks! How do you manage to avoid using "blackbox" models?
I suggest prioritizing a terminal rather than a drawing board.
Clean database, stable generator and clever retriever.
There is a good point about AI hallucinations, and the video unfortunately does not address it. Data governance does not solve this issue; we can still have a scenario where the input is valid but the output generated by the AI is garbage.
Shawn & Luv!!!! Awesome job!!!!
Very Well explained! Thank you so much.
Thank you for your sharing! Very helpful and easy to follow. Just one question, is there anyway we can test or reinforcement train the model to make sure the outputs are appropriate?
Bias within LLMs is a topic that needs more light shed on it.
The unhappiness in their eyes and frowns tell the pain of working for Artificial Intelligence jobs
Is there a pane of glass in front of them, or is this some other technology?
yes it's a glass
Clear and simple, thank you guys and thank you IBM
Nice explanation. Well done boys 😁
Good job. I still need to learn more about data accuracy in an LLM.
Neat and detailed explanation.
IMO, data governance management seems to be the same as a correct database data-input workflow. The prompt is another way to query the database: the LLM avoids the use of a query language to interrogate the DB, so ordinary people can query it. Fine-tuning a large LLM appears to be a good idea, but isn't fine-tuned training more similar to data embedding? In RAG, how does the vector database interact with the LLM? Does the vector database grow the LLM's latent space? Is there a possibility that the LLM's parameters overlap the vector parameters, making a mix of knowledge?
Thanks for the straightforward description of RAG.
good explanation
The guy on the left wanted to laugh out loud at that sketch 😂
Cool explanation
nice work.
Great backwards writing skills!
Awesome video
So with a RAG approach, can I say that we can update the original vector DB with our own processed data?
Great explanation. Thank you
Interesting , thanks both
Excellent video, love it
Nobody's ever been fired for buying RAGs from IBM.
... yet
Love the 1980s meme.
Thanks for sharing!!
Exactly love.
Gotcha.
Did you need to learn to write backwards for these videos? Or is there a product that helps you with this nice board?
You record the video, then mirror the image. They simply write on the clear board.
ua-cam.com/video/Uoz_osFtw68/v-deo.html
Thanks guys, very clear!
Meh, they leave the biggest question unanswered: how are enterprises expected to govern the data that was used to train the LLM?
Do these guys write on the whiteboard backwards or how does that work?
Yes they learned to write backwards for this video because it's cheaper to do that than to run the algorithm to flip a video horizontally
@@JShaker 😂😂😂
Informative
Does not address how you validate that the results returned are accurate. You should have a process, parallel to querying the LLM, of actually verifying the results and training the LLM to address any discrepancies (if that is possible) or correcting them.
This was an excellent explanation! Thank you.
Do LLMs store our sensitive data when using RAG?
No. The vector DB stores all the sensitive and confidential data. Only the data retrieved for a query is sent to the LLM to be summarized, because the vector DB returns just the content matching that search string.
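To make that separation concrete: in a typical RAG setup the full document store never crosses the boundary to the LLM, only the few retrieved snippets do. A small sketch (the function and example strings are hypothetical):

```python
def build_prompt(question, retrieved_snippets):
    # Only the retrieved snippets are placed in the prompt sent to the
    # LLM; the rest of the vector DB's contents stay in your own system.
    context = "\n---\n".join(retrieved_snippets)
    return (
        "Use only the context below to answer.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

Whether the LLM provider retains those prompt snippets afterwards is a separate policy question (API terms, data-retention settings), not something RAG itself controls.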
How was this video made? Is the screen used as the board?
good
ok, gotcha 👌
My left arm started tingling. Quite hard to concentrate now. 😅
And then a wide spread global epidemic crisis is brought to light wherein our gold standard "books" (peer reviewed journals) are rife with bad and corrupt data due to mismatched incentivization and misalignment of directives; and we then realize...how much good data through science do we really have? Shame we polluted the books we are supposed to be able to trust now that we have this magnificent technology here. 😭
Dude, keep on topic. This isn’t the place for your grievances
i know right? it's too bad we don't have unbiased data to make the most of this technology :(
🤔I think Luv saw the connection the entire time
7:10 sounds like support for open source.
exactly love
lol. must be annoying to talk to "Luv". "Hi, Luv", "Exactly, Luv"
👏👏
Kabhi haans bhi liya karo..
(Smile a bit, bros)
How in the world is this dude writing inverted for us to read straight lol
They mirror the video. You can notice that most people writing on a glass board appear to be left-handed (in reality about 90% of the planet's population is right-handed); that's also because the video is mirrored.
It's a skill only left-handed people have
Terrible analogy! When a journalist wants to do research, he goes to a library and asks the librarian??
As opposed to doing a google search ?
This scenario is from last century, before Luv was born?
Single take artists
Boring