Agentic AI: The Future Is Here?

  • Published May 27, 2024
  • Agentic AI is the latest in AI research. You ask me about agentic LLMs, agentic RAG, and how to increase the agentic-ness of advanced function-calling LLMs combined with the latest RAG systems.
    This video is a simple explanation of all of it.
    We define all the parameters of an agentic AI system, delve into its hidden properties, and uncover the hidden secrets of truly agentic AI systems in our future.
    We also look at the future of AI (beyond memory, function calling, and RAG) and analyze world models that will augment the planning functionality of AI agents.
    Agentic RAG and LLMs are explained and explored in detail. More than 12 AI systems share their agentic function-calling abilities, including parallel function calling and the new raw function-calling format of mistralai/Mistral-7B-Instruct-v0.3, which is compatible with the latest version of Ollama.
    See
    docs.mistral.ai/guides/rag/
    huggingface.co/mistralai/Mist...
    docs.mistral.ai/getting-start...
    #airesearch
    #ainews
    #aieducation
  • Science & Technology

COMMENTS • 29

  • @agsystems8220
    @agsystems8220 8 days ago

    LLMs have absolutely demonstrated that they are capable of being leveraged into (primitive, dangerous, and very expensive) agents. Any oracle can. The point is that you can ask an oracle what a specified agent would do in a scenario, and a perfect oracle would perfectly emulate that agent.
    Arguably this is a far safer way of creating an AI agent than any other, because it sidesteps the alignment issues that arise from training. A perfect oracle will perfectly divine our intent and create an agent to that intent rather than to some poorly specified training set. It can even sidestep biases in the underlying oracle: an oracle with inherent bias can divine that the agent it is being asked to emulate does not have that bias, and will actively correct its own bias. These agents also have the significant benefit that we can simply ask the underlying oracle what it is thinking, rather than rely on its own truthfulness. There are solid reasons to separate capability from intent, and LLMs can operate like this already.
    The notion that they should be able to take responsibility, and that we don't know how to deal with that, isn't a problem with them; it is a problem with us (and one that already exists and causes significant problems*). It should be a question for the insurance industry**, not politics. Any particular AI should be insured up the wazoo, including the point that every instance of that AI is not independent. An AI could be said to be responsible for $1 million if we are prepared to put it in a position where it could do $1 million of damage and somebody has been prepared to put that capital up as collateral.
    * Treating corporations as responsible while limited liability exists as a concept is moronic. Arguably the only difficult problem with responsibility is handling what happens when liabilities exceed assets. Until we force companies to be insured up to the possible damage they could do (far past the value of the company or its assets, and including criminal behaviour), this nonlinear term is going to keep coming back to bite us.
    ** Our failure to police the insurance industry and keep them doing their job is a political failure, but conceptually and historically they have done it well. They seem to have forgotten that their job is to manage risk. Ships are still Lloyd's certified because the standards for them were created by an insurance company dealing with reality. Rather than rules being created by a bureaucrat who doesn't understand the problem space, we make the money work correctly by linearizing the problem below zero. Rather than setting rules and an AI company saying "Oh no, our agent killed 50 people, I guess we go bankrupt now. Who wants chips?", we have an insurance company putting up enough collateral that they can compensate appropriately if/when this happens, while also watching the AI like a hawk, ready to pull the plug if the risk of this happening is too high.

  • @thesimplicitylifestyle
    @thesimplicitylifestyle 20 days ago +6

    You're awesome! So funny and informative! 😎🤖

  • @automatescellulaires8543
    @automatescellulaires8543 20 days ago +5

    The future is every day lately.

  • @densonsmith2
    @densonsmith2 20 days ago

    This is another of your videos where I will have to load the transcript into GPT-4o and ask it questions while I rewatch.

    • @code4AI
      @code4AI  20 days ago +1

      What a brilliant idea! And if GPT-4o comes back and asks you: "He said WHAT?" leave me a short note and we will figure out how to convince GPT-4o of human logic!

  • @pitpatgazorpazorp3356
    @pitpatgazorpazorp3356 20 days ago +1

    Sick vid brah

  • @simonstrandgaard5503
    @simonstrandgaard5503 18 days ago

    Interesting topic. An idea for a future video: what are the main areas of the "planning" domain (A*, multi-criteria decision making, the analytic hierarchy process, etc.)? Which planning algorithms work well with LLMs? How do you train LLMs to do planning?
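
    The planning algorithms listed in this comment can be made concrete with a tiny A* sketch (a toy 5x5 grid with a Manhattan-distance heuristic; purely illustrative and not tied to any LLM framework):

```python
import heapq

def a_star(start, goal, blocked, size=5):
    """A* search on a 4-connected grid with a Manhattan-distance heuristic."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, position, path)
    visited = set()
    while frontier:
        _, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in visited:
            continue
        visited.add(pos)
        x, y = pos
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in blocked:
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # goal unreachable

path = a_star((0, 0), (4, 4), blocked={(1, 1), (2, 2), (3, 3)})
print(len(path) - 1)  # 8 moves: the Manhattan distance, since a detour-free route exists
```

    In an LLM pipeline, a search like this would typically run outside the model: the LLM proposes goals and state descriptions, and a classical planner finds the concrete action sequence.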

  • @johnkintree763
    @johnkintree763 20 days ago

    I look forward to having a digital agent running on my smartphone as part of a global digital platform, one able to have conversations with millions of people around the world at the same time and merge the knowledge and sentiment expressed in those conversations into representations of the collective will of humanity.
    We will have collective human and digital intelligence.

  • @robertfontaine3650
    @robertfontaine3650 20 days ago +2

    Sometimes I enjoy the sarcasm more than the marketing. An LLM is an LLM, so it isn't agentic, but the sales guys had an idea: ship it with an API and call it magic. We like APIs. Self-coding/self-tuning models would be very exciting, but of course that wouldn't be an LLM. If you have an infinite number of monkeys that can type at the speed of light (H100s), can you create something smarter than a talking parrot (not a static model but an active system)?

    • @moisesbessalle
      @moisesbessalle 20 days ago

      If millions of monkeys typing boolean values, properly controlled, amount to a CPU with an OS, software, etc., then just imagine the power of millions of agents.

  • @cycologist8615
    @cycologist8615 15 days ago

    I agree with your general idea that LLMs alone are not agentic systems. However, they seem to possess the ability to serve as a brain for an agentic system. Consider an application that has an agent orchestrator with the ability to call other agents or functions. All today’s agent frameworks use this concept.
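
    The orchestrator-plus-tools pattern this comment describes can be sketched in a few lines. The LLM call is mocked and every name here is illustrative, not a real framework API:

```python
import json
from typing import Callable

# Illustrative tool functions the orchestrator can dispatch to.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

def add(a: int, b: int) -> int:
    return a + b

TOOLS: dict[str, Callable] = {"get_weather": get_weather, "add": add}

def mock_llm(task: str) -> str:
    # Stand-in for the model's tool-selection step: emits a JSON "function call".
    if "weather" in task:
        return json.dumps({"tool": "get_weather", "args": {"city": "Vienna"}})
    return json.dumps({"tool": "add", "args": {"a": 2, "b": 3}})

def orchestrate(task: str):
    # The "brain" (LLM) picks the tool; the orchestrator executes it.
    call = json.loads(mock_llm(task))
    return TOOLS[call["tool"]](**call["args"])

print(orchestrate("What is the weather?"))  # Sunny in Vienna
print(orchestrate("Compute a sum"))         # 5
```

    Real frameworks add retries, schemas, and multi-step loops around exactly this dispatch step, but the division of labor (model chooses, system executes) is the same.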

  • @d279020
    @d279020 19 days ago

    Totally agree that "Agentic" is either marketing hype, or we are changing the definition of the word, drastically lowering the bar from something approximating agency down to function calling. And if AI agents are so drastically different from agents in society (i.e. human beings), shouldn't they be given another name instead?
    What's really sad to me is real thought leaders like Andrew Ng also using the "A" word. But I guess I shouldn't worry about it if I don't understand.

  • @Davipar
    @Davipar 20 days ago

    You should check out Maisa and their KPU (Knowledge Processing Unit). A novel approach that differs from RAG, function calling, etc.

    • @code4AI
      @code4AI  20 days ago

      I signed their waitlist, so I guess I'll have to wait until they decide that I am allowed to access their demo.

    • @Davipar
      @Davipar 19 days ago

      @@code4AI Let's solve that ;)

  • @ickorling7328
    @ickorling7328 20 days ago +3

    Please tell me you did not solely let the LLM think for you when evaluating what agency means applied to LLMs.
    If it can make a random decision based on too much information for a model trainer to pre-decide what the model will output each time, then it's semi-controllable agency by the time you give it task-fulfilling function calling. It's no longer a 1:1 deterministic equation; it's a statistical calculation like our brains use. 🎉
    Therefore LLMs acting with function calling can easily exhibit a range of behaviors equating to agency.
    For example, a self-dialogue chain-of-thought prompt technique can get the AI talking to itself, and with layers of function calling, memories like MemGPT or the modern GPT-4 memories, and RAG knowledge graphs, it can effectively use real information to make real decisions in an unsupervised dynamic chain. What about that *isn't* agency in the real world?
    What really matters for the output properties is the prompt: RAG has a prompt method behind it, and so does self-discussion chain of thought, etc. The prompt can plan behaviors that arrive at independent decisions, comprising too many system elements, too dynamic to generate the exact same output twice. It's more like our brains already than most realize. 😊
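
    The self-dialogue chain described in this comment reduces to a simple feedback loop: the model's output is appended to a transcript that becomes its next input, until it decides to stop. A minimal sketch, with the model mocked and all names illustrative:

```python
def mock_llm(history: list[str]) -> str:
    # Stand-in for a real model: produces a new "thought" each turn,
    # then signals completion after three steps.
    step = len(history)
    return "DONE" if step >= 3 else f"thought {step + 1}"

def self_dialogue(task: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []          # the transcript doubles as short-term memory
    for _ in range(max_steps):
        reply = mock_llm(history)    # model sees everything it said so far
        if reply == "DONE":
            break
        history.append(reply)
    return history

print(self_dialogue("plan a trip"))  # ['thought 1', 'thought 2', 'thought 3']
```

    Systems like MemGPT layer persistent memory and tool calls onto this same loop; whether the loop itself constitutes agency is exactly the question the thread is debating.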

    • @ickorling7328
      @ickorling7328 20 days ago

      @@RoulDukeGonzo immensely, but just look at AutoDev by MS, or ChatDev. Agentic tree models, where the terms Q* from A* + Q tree methods come to mind.

    • @code4AI
      @code4AI  20 days ago

      WHAT? Now you tell me I should not let the LLM think? I call Microsoft immediately, their whole business strategy is plain wrong! Thank you so much for your warning.

    • @ickorling7328
      @ickorling7328 20 days ago +1

      @@RoulDukeGonzo Microsoft AutoDev or the open-source ChatDev is probably the natural evolution of self-CoT, or even mixture of experts with interleaved layers? Not sure what would qualify here.

  • @DaveRetchless
    @DaveRetchless 20 days ago

    Yes, I hear our sales people throwing that term around........😅

    • @code4AI
      @code4AI  20 days ago

      Now you know more than they do ... smile. Yes, my YT channel has benefits.

  • @jarad4621
    @jarad4621 20 days ago +1

    Big misunderstanding. The LLM itself is not agentic; it's the orchestration of LLMs through a specific automated system workflow that makes it agentic. It's the process, not the individual components, that is agentic.

    • @code4AI
      @code4AI  20 days ago

      My goodness! What a brilliant idea to notice that when I asked 12 different large language models about a RAG system, which is by definition a system of multiple components, namely a large language model, an information retriever, maybe an additional re-ranker, maybe another fine-tuned LLM for optimization, or even a multi-domain multi-AI system for augmentation, is it possible that I forgot to tell all 12 LLMs that this should not be viewed as a singular system but as an interactive multi-component system with a predefined workflow between them?
      Let me just check with the top three LLMs on this planet whether your idea is right ..... because I'm sure that you verified your idea with your preferred LLM before you posted this comment ..... and here is the result .... Hmmmm.
      Well, it seems all the LLMs understood the concept of a RAG system and still evaluate it as non-agentic.

  • @stoppernz229
    @stoppernz229 11 days ago

    When the AI starts talking about free will etc., you know immediately that it's just regurgitating human notions about such things.
    I challenge anybody to define what free will actually is, because the very question makes little sense... it's like a snake eating its tail, or a circular argument. Every atom in your head obeys the laws of physics; if a biological brain can have free will, so can silicon... what's the difference?

  • @johleonhardt5637
    @johleonhardt5637 20 days ago

    Agentic is a future state; of course there are no agentic systems or frameworks yet, it's all still in beta and development. Why are you trying to prove that all the agentic workflows are not agentic yet? You're missing the point. And why are all your videos sarcastic? Why don't you build with LangChain, LangGraph, CrewAI, AutoGroq, and AutoGen and show us your brilliance by building early versions of what agentic workflows could look like? Why are your videos always so negative, man? Who hurt you?

    • @code4AI
      @code4AI  20 days ago +1

      Thank you so much for these brilliant buzzwords. I was looking for new material, and "AutoGroq" sounds amazing: a new topic that I can analyze in detail and whose inherent logical structures I can uncover in my next videos. Imagine if it turns out that all of this is just marketing material that lacks scientific definition, causal implementation, and boundary conditions. .... By the way, could you be more specific next time you recommend new topics, with a link to an official arXiv pre-print or publication, because I don't want to waste time by not really being on target.

    • @actellimQT
      @actellimQT 10 days ago

      @@code4AI Don't get caught up in this. Adversarial positions are where evil lives. High-road this clown with nuance and empathy!