Give this man a raise. Good speed and complete. 9.99/10 - nobody gets a 10
so the scale is from 0 - 9.99 🙂
@@HashimWarren absolutely!😉
Excellent tutorial, so well explained. Thanks so much
This is truly an unbelievable tutorial. College professors could learn a lot from you. The fact that this quality is free is mind blowing
This is one of the best videos to come out of Supabase. Please do more such detailed videos. Thanks!
Good to know this style is helpful - thanks for watching!
One of the best tutorials I've seen in my entire life. Everything is clear: no shortcuts, no analogies, no abstractions. We learn a lot of things from different fields, along with production best practices.
This is how tutorials should be.
Thank you so much! Looking forward to learning more from you.
Please do more videos like this, truly amazing work. This helps me prevent a whole lot of headaches. I love Supabase 💚
We love you too 💚
This video is mind-blowing. 10/10
I'm a Tech Lead with more than 15 years in software engineering, and I can already feel that this video is a game changer for me.
So much crucial, game-changing information gathered in an incredible way, with amazing presentation and pace.
Fantastic in-depth walkthrough with code examples and the reasoning behind implementation decisions. Helped me understand Supabase, its services and architecture, and how things fit together much better. Thank you!
I had this on my to-do list; it's mind-blowing. Well detailed, great speed, this is amazing. My only regret is not having watched it sooner. Thank you so much.
Joining the crowd - this is one of the best tutorials I have ever seen (and I have seen many). Great Job!
And the first one I have ever commented on
I'm sold! Diving deeper into Supabase because of this :) Great 2 hours of content!
This is an amazing guide. Like, absolutely amazing, bravo.
Thanks! Don't hesitate to give me a shout if you have any questions/issues
Next time a recommendation system.
Great idea!
Yes please
Stellar presentation! The presenter knows his stuff. Can’t fake this level of experience. Thank you!
This was extremely useful, kudos to you for creating the video! As an AI engineer, it was a huge learning opportunity for me to build end-to-end applications. Thanks, man!
Man you are consistently saving me when I hit a wall on my projects. Thank you!!!!!!
This is the best video tutorial I have ever seen.
Supabase. I freaking love you. Long live the king
Incredible clarity! More like these please.
This is one amazing video. Thanks so much!
One suggestion: it would be super cool to have a version of this video using LangChain as well.
There are a lot of great benefits to using it instead of going directly to OpenAI (like the ability to easily switch between or use multiple model providers).
Thank you for the great video!! Would like to see more videos on implementing Supabase using Python (not sure about the demand actually) if possible. :)
There were so many parts to like in this video; my favourite was how to extract the authorisation headers when making the call to a REST endpoint. Will probably implement the endpoint in Python with FastAPI rather than Deno. 😂
Hey!! that's a good thought. Did you do it?
Thank you so much for this tutorial! You are an amazing teacher
It's brilliant. Just let me catch my breath between important pieces of code... next time!
The dynamics of the video are really good, but in the pieces where I need to learn something new and want to look up references and sources... I can never hit the space bar in time... tracking back 🙂
BTW, thank you for such a great tutorial! 🙂
Thank you very much - you are such a great teacher 🧑🏫
Without this video, how could we possibly learn to do this? 😢
Vault looks cool! More more more!
Comes in very handy in some situations 👍 thanks for watching!
This is exactly what I'm looking for! Thank you! Now if only I could get it to work locally =(
Glad it resonates! What issues are you having locally?
@@gregnr I think it's Deno? I keep getting errors like "Type error: Cannot find module 'common-tags' or its corresponding type declarations." even though I've installed them. =/
@@gregnr nvm, I got it!
@@tamsssss6765 got it - just to confirm, are you getting those errors at runtime, or just in your editor (ie. VS Code)? If it's in VS Code, can you double check you have the Deno extension installed? Without that extension, VS Code doesn't handle Deno dependency management correctly.
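For reference, the usual fix once the extension is installed is a `.vscode/settings.json` that scopes Deno to the functions folder; a minimal sketch, assuming the standard Supabase project layout:

```json
{
  "deno.enablePaths": ["./supabase/functions"]
}
```

With that in place, VS Code uses the Deno language server (and its dependency resolution) only inside `supabase/functions`, while the rest of the Next.js project stays on the regular TypeScript tooling.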
This is great. What changes would need to be made to use this with an open source model like Mistral or Llama 2? Is it just whatever model library is used and the embedding model that goes with it?
Hey, did you get any further with this? I'm building a similar setup using Mistral 7B; I'd really like to hear how you went about using a local LLM.
Great stuff, thanks a lot!
One question: what's the point of deploying Deno edge functions (and calling them with pg_net inside Postgres) instead of simply using Next.js actions for processing files after upload? It adds a lot of complexity, IMO. Any real benefits?
This video is a treasure 🏆👑🥇🌟💛
Brilliant 🥂
Are you planning on updating this to use the new Supabase auth SSR setup, since that seems to be the recommended approach?
This video is PERFECT
What an excellent video! Amazing work - I love all the "rabbit holes", which are all very important. I have two questions, though: instead of using Supabase functions, one could use Next.js Route Handlers, right? Also, are there some open-source alternatives to the OpenAI LLM that could easily be integrated instead? Thanks for this video!
Good tip. I think it could be done; it seems like the edge functions on the free tier time out when doing the calculations.
Would love a video on how to easily migrate this to Supabase SSR! =D
Hi not-Jon, this looks good. Thanks.
Thanks for watching! Let me know if you hit any road blocks.
I agree! Non-Jon is killing it! 💯
This is so good. Is it easy enough to make it cite sources? Kinda like Perplexity or NotebookLM.
How would you handle this if you actually wanted to reference the document/location where the RAG has pulled the info from (ie. like a references list on the front end)?
Yep, this is a great question. We're actually in the process of bringing this type of functionality to the Supabase docs via the Supabase AI assistant. The strategy more or less comes down to:
1. During the RAG prompt injection step, prefix each section with a heading (or id, link, storage path, etc.) that references the document it came from.
2. As part of the initial prompt, ask the LLM to insert references to these respective section headings throughout its response.
3. On the frontend, parse the response coming back to extract these references, replace them with [1], [2], [3], etc., and add them as footnotes (rough sketch below).
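A rough TypeScript sketch of steps 1 and 3, purely illustrative: the `[source:id]` marker format, the `Section` shape, and both function names are assumptions for the example, not what the AI assistant actually uses.

```typescript
interface Section {
  id: number;
  content: string;
}

// Step 1: prefix each matched section with a marker identifying the
// document it came from before injecting it into the prompt.
function buildInjectedContext(sections: Section[]): string {
  return sections
    .map((s) => `[source:${s.id}]\n${s.content}`)
    .join('\n\n');
}

// Step 3: on the frontend, swap each marker the LLM echoed back for a
// numbered footnote ([1], [2], ...) and collect the cited section ids.
function extractReferences(response: string): { text: string; citedIds: number[] } {
  const citedIds: number[] = [];
  const text = response.replace(/\[source:(\d+)\]/g, (_match, id) => {
    const sectionId = Number(id);
    let index = citedIds.indexOf(sectionId);
    if (index === -1) {
      index = citedIds.push(sectionId) - 1;
    }
    return `[${index + 1}]`;
  });
  return { text, citedIds };
}
```

The `citedIds` array then maps footnote numbers back to sections, so the frontend can render a references list.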
@@gregnr Sweet, that makes a lot of sense. I pulled something similar together using Pinecone but found I was double-handling a lot of the prompt injection and then parsing the references. The way you've described it within the Supabase framework makes a lot of sense.
Amazing tutorial! Could you make a similar tutorial on using Supabase with AI agents (+ RAG) that use function calling? For example, how to create a chatbot that can add tasks to our to-do list or complete tasks on it.
Great tutorial! Do you start running into problems with chat conversations as time goes on, given you're including all previous messages and the limited context window that OpenAI provides? How do you handle that? Just truncate it?
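For what it's worth, a minimal sketch of the naive truncation the question suggests, in TypeScript; the 4-characters-per-token heuristic and all names are assumptions for illustration, not from the video:

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Very rough heuristic: ~4 characters per token for English text.
// A real implementation would use a proper tokenizer like tiktoken.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Keep the system prompt plus as many of the most recent messages
// as fit within the remaining token budget, dropping older ones.
function truncateHistory(
  system: Message,
  history: Message[],
  maxTokens: number
): Message[] {
  let budget = maxTokens - estimateTokens(system.content);
  const kept: Message[] = [];
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i].content);
    if (cost > budget) break;
    budget -= cost;
    kept.unshift(history[i]);
  }
  return [system, ...kept];
}
```

More sophisticated options (summarizing older turns, or embedding and retrieving them like any other document) build on the same idea.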
Ctrl+Shift+I just blew my mind
Sir, it's giving an error: service not healthy: [supabase_vector_chatgpt-your-files]
One thing: when resetting the DB because of the todos, there are instructions for how to do it locally but not via the cloud. `pnpx supabase db reset` doesn't work, unfortunately, and I can't find it in the docs.
Getting this as well... did you figure it out?
Edit: actually, here is what I did:
1. `npx supabase db reset --linked`
2. Deleted 'files' from storage in the cloud.
3. `npx supabase db push`
Both of my migrations (the files and documents) were applied.
@@sumodd Sorry, I accidentally replied to the wrong video about another issue 🤦🏻 Actually, the mistake was on my side: since `db reset` is for the local Docker setup, I think you just need to do `db push`.
When are you guys going to add rate limiting? Exposing the anon key to the public is still not production-ready.
Can I check, for the generate-embeddings part, why we need to remove the JavaScript elements from the Markdown? Thanks!
I want to integrate Supabase with my FlutterFlow app, but the problem I'm facing is that it does not allow me to present the user's display name. Any solution?
Interesting video! So the whole reason for using RAG here is to minimize the token input when eventually passing it to GPT? (And also maybe to get more accurate results by using a specific embedding model that's better than GPT?)
Is there an updated version of this? I feel like 8 months ago was a loooooong time!
The underlying concept remains the same, but the LLM models, functions, etc. are updated on GitHub.
I think something has broken in the repo. The Chat function, for example, no longer deploys (I've pinpointed it to the AI library import from Vercel). Can you or anyone else reproduce this?
Need more detailed documentation on Ollama, please.
Very good tutorial. The only problem I have is that I don't get embeddings generated for every item in documents_sections. I followed the code to the letter, and it only generates the first 5 embeddings.
Thanks for making this video for my favorite platform. I followed along and ported this method to use the Google Gemini API, but I'm having a weird problem with the chat function, in the part where we add injectedDocuments to the system prompt. It runs fine the first time: I checked by console.logging the completionMessages, and the system prompt with all injected documents is added to the user's first prompt. But for all later messages, neither the system prompt nor the injected documents are added to the user messages. Strangely enough, though, the output from Gemini Pro clearly indicates that it is getting the context and gives spot-on replies (I compared with the output from Gemini Pro without any context, and the answers were way different). Can somebody tell me whether this is the default behavior of Vercel's AI SDK, or whether there's a problem with my code?
First, thank you!
One question: how does one go about debugging the functions defined as database functions?
SupaTutorial!
Amazing video! Thank you. I have a question: what's the best way to set up multiple Supabase projects locally using Docker?
That would be through the Supabase CLI: supabase.com/docs/guides/cli/local-development
Hey, I'm a beginner. I received an API key and base URL generated by my organization, but this tutorial only covers an API key issued directly by OpenAI. I need a tutorial that can help me create the chatbot with my API key and base URL. Can anyone suggest a tutorial or codebase?
In the embedding column, 7-10 rows are empty, but the rest are filled. Why is this? It has nothing to do with the code, since the number of blank rows is different each time.
I followed the entire tutorial; very good, thank you for this. I'm a beginner with Next.js and Supabase, and there are two things I can't get working at the end: my Supabase does not create the sections (and therefore the embeddings) when a file is uploaded. I guess I missed something with either the migration or the edge function?
Also, the chat doesn't work because CORS blocks it when it's coming from the Supabase cloud. How do I configure CORS in the cloud dashboard?
same
I ran into this as well developing locally. Make sure to open a new terminal window, navigate to your project directory, and run `npx supabase functions serve`.
If my data is confidential, will I have to use GPT even so?
Please guide me on creating the logic to upload Excel and PDF files.
Couldn't a lot of these edge functions just be handled by API routes, since you're using Next?
Great tutorial, but if you actually deploy this to Supabase, the CPU time is SO restrictive that the embedding pipeline doesn't work. 🤦♂
Thank you so much, one of the best tutorials. Query: when we use the cloud-based option, files are uploaded to the Supabase server, and the embeddings as well. I just want to confirm how secure our documents will be, and whether we can use the same application for financial and healthcare files. Really interested in signing up with Supabase if this query is resolved.
Supabase can be HIPAA compliant on certain plans, so it is safe to store those types of information. supabase.com/blog/supabase-soc2-hipaa
@@Supabase Thank you so much for your response. Could you please share a direct link or email address where I can ask further questions?
Can you do this in Python?
Anyone else facing "could not Auth user" when trying to sign up?
Looking for a tool to get text from my PDF. Is this possible as well?
PDFs have notoriously been difficult to pull text from in a sane way (because there's lots of variance between PDFs, and some PDFs embed text while others are just images). One solution we're working on is using GPT's new vision model to extract the text - still WIP right now, but stay tuned!
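Not the WIP pipeline itself, but a minimal sketch of the general idea with the openai npm package, assuming you've already rendered a PDF page to an image URL (`pageImageUrl` is a hypothetical input, and the model name is illustrative):

```typescript
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Ask a vision-capable model to transcribe the text visible on a
// rendered PDF page image.
async function extractPageText(pageImageUrl: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: 'Transcribe all text on this page verbatim.' },
          { type: 'image_url', image_url: { url: pageImageUrl } },
        ],
      },
    ],
  });
  return completion.choices[0].message.content ?? '';
}
```

The extracted text can then go through the same chunking and embedding pipeline as the markdown files in the video.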
@@gregnr I thought about that as well. All the libraries I tried worked so badly.
It's good and detailed, but why can't you make it more straightforward to set up?
Hey, which parts did you find slow to set up?
@gregnr Why can't I just run `git clone`, then `npm install`, insert my env vars, and run? Also, do you have to use Deno?
@@namesare4fools if you don’t care about the details, you can 100% just clone the repo and run it. Yes, Supabase’s edge runtime is built on Deno - you wouldn’t run this in vanilla Deno yourself though, instead use the supabase CLI to serve the edge function as shown in the video/readme.
Worst tutorial ever. So unclear; it couldn't be more confusing.
Thanks for the detailed video, but why did you use @supabase/auth-helpers instead of @supabase/ssr, as the docs recommend?