Discover the Secrets of Spring AI 1.0, SpringBoot, Java, Ollama/Llama3, API Creation and RAG Basics

  • Published 23 Sep 2024
  • Discover the secrets of Spring AI 1.0, Spring Boot, Java, Ollama/Llama3, API creation, and RAG basics in this informative video.
    Learn the fast and simple way to develop Java enterprise applications with Spring Boot and Spring AI 1.0, using the Ollama LLM running locally with the Llama3 model.
    Learn to create an end-to-end API and the basics of RAG.
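
As a sketch of the setup the video describes, an `application.yml` for the Ollama starter might look like the fragment below (property names follow Spring AI's Ollama chat documentation; the exact model tag and port are assumptions, not taken from the video):

```yaml
spring:
  ai:
    ollama:
      base-url: http://localhost:11434   # default local Ollama endpoint
      chat:
        options:
          model: llama3                  # must match a model pulled via `ollama pull`
```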
    Source code: github.com/Tho...
    Check out my other videos.
    #ai #springboot #java

COMMENTS • 17

  • @fastandsimpledevelopment
    @fastandsimpledevelopment  4 months ago +1

    Here is THE video people have been asking about: Spring AI 1.0 and a Spring Boot API with Ollama running locally with the Llama3 model.

  • @DREBO-z8c
    @DREBO-z8c 26 days ago

    Great video, please post more (especially for spring).

  • @J-India24
    @J-India24 3 months ago +2

    Please one video on embeddings, vector, RAG

  • @ajaybiswal1
    @ajaybiswal1 2 months ago +1

    Great video. It would be great if you could make a video using the same tech with a vector DB.

    • @fastandsimpledevelopment
      @fastandsimpledevelopment  2 months ago

      Thanks for the response, glad you enjoyed the video. Based on your comment I did research VectorDB; my initial concern is the lack of ongoing support on its GitHub: the core appears very old and the project does not appear to be well maintained. One of the major functional issues is that I do not see how to support distinct collections. Say one user uploads documents (Bob has 3 PDF files) and another user then uploads documents (Sally uploads 5 PDF files); you would normally have something like a shared collection, a Bob collection, and a Sally collection so the data stays separate, which is important for any enterprise RAG application. I see no way to do this in VectorDB other than a unique instance per user, which does not really work for me. I will continue to watch the project, and if it progresses I will make some videos. Thanks for sharing!

    • @ajaybiswal1
      @ajaybiswal1 2 months ago +1

      @fastandsimpledevelopment Thanks for the response. What I meant was involving a vector DB like Chroma, pgvector, etc. (mainly open source). Since you are dealing only with raw prompts, maintaining context would be difficult if I used a paid model like ChatGPT, as you have to send the whole chat history with every message, which gets very expensive. Involving a vector DB would make it cheaper. Anyway, thanks for the response, and all the best. Hoping to see more videos from you in the future.

    • @fastandsimpledevelopment
      @fastandsimpledevelopment  2 months ago

      @@ajaybiswal1 I do have a few videos using ChromaDB for RAG and also one for chat history. I don't have anything using MongoDB, which is what I use for production systems, but ChromaDB works well since it is based on SQLite. You do need to turn off telemetry to keep everything private: set anonymized_telemetry=False.

    • @ajaybiswal1
      @ajaybiswal1 2 months ago

      @@fastandsimpledevelopment thanks for this...I will watch this video

  • @businessintelligenceandana5907
    @businessintelligenceandana5907 4 months ago +1

    Thanks

  • @marekj3759
    @marekj3759 2 months ago

    Bravo :) Out of curiosity, is there any option to limit the answers to only the provided content? Just to avoid irrelevant questions like "what is the best (...)" / "who is (...)", etc.
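
The video does not cover this, but a common RAG technique is to constrain the model with a system prompt that restricts answers to the retrieved context. The wording below is only an illustrative sketch, and `{context}` / `{question}` are hypothetical template placeholders:

```text
You are an assistant that answers ONLY from the provided context.
If the answer is not contained in the context, reply exactly:
"I can only answer questions about the provided documents."

Context:
{context}

Question:
{question}
```

Smaller local models follow such instructions imperfectly, so production systems often add an out-of-scope check on the question before calling the model at all.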

  • @J-India24
    @J-India24 3 months ago

    Great content, thanks 👍

  • @businessintelligenceandana5907
    @businessintelligenceandana5907 4 months ago +2

    Can you also please complete this with chat history?

  • @dnyaneshgurav1573
    @dnyaneshgurav1573 10 days ago +1

    Hello Sir, I am getting the below exception:
    [404] Not Found - {\"error\":\"model \\\"llama3.1\\\" not found, try pulling it first\"}
    I have installed llama3.1 and configured it in the yml file like yours, but I am still getting the exception. I did not find a solution for this; can you please reproduce this issue and explain? Locally I am able to run Llama and get a response (via Command Prompt).
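
A likely cause of this 404 (an assumption, not confirmed in the thread) is a mismatch between the model tag in `application.yml` and the tags Ollama has actually pulled; `ollama list` shows the exact tags available. The relevant fragment would be:

```yaml
spring:
  ai:
    ollama:
      chat:
        options:
          model: llama3.1   # must exactly match a tag shown by `ollama list`;
                            # if it is missing, run `ollama pull llama3.1` first
```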

  • @minci923
    @minci923 3 months ago

    More like this, please.

  • @kappaj01
    @kappaj01 3 months ago

    You have Lombok already - @AllArgsConstructor....