PART 1 can be found here -> ULTIMATE Llama 3 UI: Dive into Open WebUI & Ollama! ua-cam.com/video/D4H5hMMoZ28/v-deo.html
I installed Open WebUI with an Ollama model, but I don't have the Documents icon on the left side?
I like the clear presentation a lot. Thanks.
I wish it would give some context in the response. Useful when giving a collection of documents. I would like to know which document and perhaps pages it pulled the answer from.
I don't have the Prompts or Documents sections after installing Ollama and Docker. How do I enable those sections, please?
Amazing video! thank you for sharing! Bytes could do a video tutorial on setting up image generation from Dall-E through webUI as well? That would be really helpful.
Glad you liked the video. I'll look into creating a video based on your suggestion.
When you say you’re ‘uploading’ documents are you actually uploading them to a server or is it a loose term used to indicate the openweb ui is getting local access to the docs and keeping it all in a private environment?
The document is being uploaded to your local Open WebUI server. No documents leave your computer.
@@AIDevBytes do you know where those uploaded documents are stored? I don't want to bloat my storage but I can't find the upload directory.
@@ceticreporter7182 they are stored in a local directory. I would have to find out exactly where. I believe it is outlined in their docs.
Brilliant thanks!
How can I use LlamaParse in Open WebUI?
What about Excel files? Can Open WebUI with Ollama analyze them?
Hey mate, this is absolutely awesome, thanks so much. I'm wondering how well this would work as a knowledge base? I.e., let's say I'm a decent-sized company with a huge internal wiki covering all sorts of internal processes that guide our employees in how to operate the business. Do you think I could upload all our wiki articles to the Documents section and then give this to our team, so they can talk to our own internal chatbot and ask what they should do in certain situations?
Or do you think uploading very large amounts of documents would sort of overwhelm it and not produce great results? I.e., to achieve this outcome, would you actually need to fully retrain the base llama3 model itself, rather than just adding documents on top of an existing stock llama3 model? Because as far as I can tell, in my layman's understanding of all this, when you upload documents, each prompt you enter has to kind of look through it all for an answer, whereas if you retrained the base model it would know it more "naturally" haha. Sorry, not sure if I've explained that very well.
For this architecture you would not need to retrain the model, since it uses RAG (Retrieval-Augmented Generation) to find the answers in the documents.
I will say this solution is really for users running it locally on their own computer, since the document store runs in a container on the user's machine. It would probably not scale well to multiple users across multiple computers.
You would want a more enterprise-grade or custom solution if you are going to run this for your business with multiple users on multiple computers.
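To illustrate the RAG flow described above: retrieve the most relevant document chunk for a query, then prepend it to the prompt so the model answers from that context. This is a toy sketch only; Open WebUI actually uses a vector database with neural embeddings, and the word-overlap scoring and sample documents here are stand-ins for illustration.

```python
# Toy RAG sketch: score chunks by word overlap (Jaccard similarity),
# pick the best one, and stuff it into the prompt as context.

def score(query: str, chunk: str) -> float:
    # Jaccard similarity over lowercase word sets.
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / len(q | c) if q | c else 0.0

def retrieve(query: str, chunks: list[str]) -> str:
    # Return the chunk with the highest overlap score.
    return max(chunks, key=lambda ch: score(query, ch))

def build_prompt(query: str, chunks: list[str]) -> str:
    # Prepend the retrieved context so the model can answer from it.
    context = retrieve(query, chunks)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Vacation policy: employees accrue 1.5 days per month.",
    "Expense reports must be filed within 30 days.",
]
print(build_prompt("How many vacation days do employees get?", docs))
```

Because only the retrieved chunk is added to the prompt, the base model's weights never change, which is why no retraining is needed.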
Which embedding model did you use for the document chat?
By default the HuggingFace sentence-transformers model (sentence-transformers/all-MiniLM-L6-v2) is used for embeddings. You can change it to whatever model you would like under "Document Settings".
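For context, the embedding model maps each text chunk to a vector, and retrieval compares vectors by cosine similarity. A minimal sketch with hand-made 3-dimensional stand-in vectors (all-MiniLM-L6-v2 actually produces 384-dimensional vectors learned from text; the vectors and labels below are made up for illustration):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hand-made stand-in vectors; a real embedding model would produce these
# from the text itself.
query_vec = [0.9, 0.1, 0.0]
doc_vecs = {
    "vacation policy": [0.8, 0.2, 0.1],
    "expense reports": [0.0, 0.3, 0.9],
}

# Retrieval picks the chunk whose vector points closest to the query's.
best = max(doc_vecs, key=lambda k: cosine_similarity(query_vec, doc_vecs[k]))
print(best)
```

Swapping the embedding model in Document Settings changes how these vectors are produced, and therefore which chunks get retrieved.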
Hi, can you provide the specs of the GPU you used for this project, for a better experience and smooth responses?
Check out the video description. I have my MacBook specs listed there.
@@AIDevBytes Awesome, thanks. It will help me get an idea for a rig. Keep it up!
Hi, I noticed your job post on Upwork about hiring an editor, and I'm interested in that position. Could we schedule a chat to discuss further?
Please send all proposals through Upwork and someone on the team will reply there. Thanks!
@@AIDevBytes Boss, I've run out of connects there.
Sorry, the individual who handles openings only accepts proposals through that platform, unfortunately.
Feel free to send your Upwork username and we can send you a proposal request. You can email us at inquiries@aidevbytes.com. Thanks!