249 | Breaking Analysis | From LLMs to SLMs to SAMs, How Agents are Redefining AI

  • Published 15 Nov 2024

COMMENTS • 9

  • @marka5215
    @marka5215 A month ago +7

    Brilliant analysis. Thank you.
    I think the key bottleneck, as you suggest, is reliability. LLMs are stochastic by design, and they are not reliable in practice. So uses for them need to focus on their strength, which is language transformation. While they can potentially do reasoning, my impression so far is that they are not reliable enough for businesses to build dynamic agentic architectures on them. Even with an orchestration layer, if the agents are not grounded in algorithmic certainty, they're going to wobble at random intervals, making the entire structure subject to cascading error sequences. The bigger the structure, the more impact the cascade will have.
    One way to address that might be to build out a system of triple redundancy, so that agents are built in triples that watch each other's work and constantly compare it against mid-range tasks and the overall architecture of the business (a rough sketch of this, together with the KPI logging idea below, follows after this comment).
    A second piece might be extensive logging of results, with sets of agents constantly watching specific business KPIs over time to detect when system (program) changes have good or bad results, along with a tracing capability to determine where and when changes caused a decline in a KPI. Using this, the agents might then, potentially, effect repairs on the system to bring the KPI back into alignment with the business goals.
    All of this requires enormous up-front development costs on the part of the business, and the risk is that despite those costs, the system might never align correctly (i.e., the engineers couldn't really make it work in concert), and the cascade effect of random wobbling within the agent swarm might cause business operations to lose coherence. In the real world, when this happens, business leaders adjust by correcting data and/or processes and/or people in the decision chain. In the world of Dynamic Agentic Architectures, I'm not so sure you can repair wobbling subsystems so easily. They're integrated with the whole, and ongoing decision-making is difficult to stop when your production systems rest on systemic automation. Instead, you have to tweak the system to try to cajole it into good behavior, but in systems as complex as you're describing, such tweaking could become problematic insofar as it may be difficult to trace cause-and-effect sequences in a constantly evolving dynamic agentic system.
    So there are definitely challenges ahead for this architectural modality. Not that it won't happen, but I think we're still quite a ways off from seeing a real-world application of these concepts. You have a great conceptual design for how the Agentic Business could operate, but the path from here to there is unclear, and I think you need a few more pieces in place to make it work as intended. And yes, both the Harmonization and Agentic layers are completely new, and they require thoughtful design and considerable testing before they would be capable of fulfilling the goal.
    The real question will be whether businesses are willing to take on the cost and risk of building the infrastructure you've outlined. Remember, businesses spent a huge amount of effort on Big Data, Data Warehouses, and related technologies, only to wind up disappointed. This was because the organizational up-front costs were more than the business leaders could actually work out, despite extensive guidance and effort by IT to get them to do so (at least in the cases I've seen).
    I would say the best chance in the beginning would be to prototype this concept with a small, young business that has stellar leadership who fully understand their business model end-to-end as well as their KPIs, do not have massive ingrained systemic complexity, and are both willing and able to liaise with the IT staff that builds out the Agentic Architecture. And the primary requirement, which may not work well in our highly accelerated, VC-driven world, would be patience. Lots and lots of patience. Not sure, therefore, if we could really get from here to there, to be honest. At least not in the short run. Medium- and long-term, I do believe it not only can but will happen. However, I also think there will be plenty of smoldering ruins along the way. Not 100% sure, but... pretty sure.
    Also, another approach might be, and I think you may be alluding to this, to start small by isolating specific business processes that are currently analog and human-driven, and to insert Agentic Systems into the monitoring and recommendation stream of the business operation. The limited-scope Agentic System watches the analog system and the KPIs, and assesses the efficiency of the operation as its core function. As business conditions and/or the analog environment change, the Agentic System observes, reports, and provides recommendations... with perhaps the ability, upon approval, to effect changes to the analog system and/or the Agentic System to increase efficiency and positive outcomes.
    Anyway, very interesting presentation! Thank you!
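
    A minimal sketch of the triple-redundancy and KPI-tracing ideas above, assuming agents are plain callables that return comparable answers. `triple_vote`, `KpiWatchdog`, and the escalation path are hypothetical illustrations of the pattern, not an existing framework:

    ```python
    from collections import Counter
    from statistics import mean
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("agentic-redundancy")

    def triple_vote(agents, task):
        """Run three independent agents on the same task and accept the
        majority answer; if all three disagree, escalate instead of guessing."""
        answers = [agent(task) for agent in agents]
        winner, votes = Counter(answers).most_common(1)[0]
        log.info("task=%r answers=%r", task, answers)
        if votes < 2:  # no two agents agree -> the triple has "wobbled"
            raise RuntimeError(f"no consensus on {task!r}; escalate to a human")
        return winner

    class KpiWatchdog:
        """Log KPI readings tagged with the active system version, so a
        decline can be traced back to the change that preceded it."""
        def __init__(self, name, window=10, tolerance=0.05):
            self.name, self.window, self.tolerance = name, window, tolerance
            self.history = []  # list of (system_version, kpi_value)

        def record(self, version, value):
            self.history.append((version, value))
            recent = [v for _, v in self.history[-self.window:]]
            baseline = mean(recent)
            if value < baseline * (1 - self.tolerance):
                log.warning("%s fell to %.3f (baseline %.3f) under version %s",
                            self.name, value, baseline, version)

    # Example: two agents agree, the third wobbles; the vote still resolves.
    agents = [lambda t: "approve", lambda t: "approve", lambda t: "reject"]
    print(triple_vote(agents, "invoice-1042"))  # -> "approve"
    ```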

  • @johnkintree763
    @johnkintree763 A month ago +6

    The most impressive technology I have seen is the Neo4j LLM Knowledge Graph Builder. It could be extended to include text from conversations with users as part of the input from which knowledge and sentiment are extracted, and then merged into a hybrid vector and graph database (see the sketch below).
    As the Small Agentic Models (SAMs) improve as interfaces with shared databases, and are optimized to run on smartphones, an open source and decentralized global digital platform can be built to form a kind of collective terrestrial intelligence.
    This could be transformative both for the market and for governance.
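
    A rough sketch of that extension, using the official Neo4j Python driver. It assumes a Neo4j 5.x instance with a vector index named `utterance_embedding`; `extract_entities` and `embed` are stubbed placeholders for an LLM extraction step and an embedding model, so this illustrates the pattern rather than the actual Knowledge Graph Builder pipeline:

    ```python
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("neo4j://localhost:7687",
                                  auth=("neo4j", "password"))

    def extract_entities(text):  # stub: a real system would use an LLM here
        return [w.strip(".,") for w in text.split() if w.istitle()]

    def embed(text):  # stub: a real system would use an embedding model
        return [float(ord(c) % 7) for c in text[:8]]

    def ingest_turn(user_id, text):
        """Merge one conversation turn into the hybrid graph + vector store."""
        with driver.session() as session:
            session.run(
                """
                MERGE (u:User {id: $user_id})
                CREATE (t:Utterance {text: $text, embedding: $embedding})
                MERGE (u)-[:SAID]->(t)
                WITH t
                UNWIND $entities AS name
                MERGE (e:Entity {name: name})
                MERGE (t)-[:MENTIONS]->(e)
                """,
                user_id=user_id, text=text,
                embedding=embed(text), entities=extract_entities(text),
            )

    def similar_utterances(query_vec, k=5):
        """Vector side of the hybrid store: Neo4j 5.x vector index lookup."""
        with driver.session() as session:
            result = session.run(
                "CALL db.index.vector.queryNodes('utterance_embedding', $k, $vec) "
                "YIELD node, score RETURN node.text AS text, score",
                k=k, vec=query_vec,
            )
            return [(r["text"], r["score"]) for r in result]
    ```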

    • @ibgib
      @ibgib A month ago +1

      Yes, exactly, but the devil is in the details of how you architect that "open source and decentralized global digital platform". You can already consider the existing internet to be this at some level of efficiency (or inefficiency, as you please). So the real question is how exactly you make something more efficient. The vast majority of these "decentralized" approaches are blockchain-based, with two primary exceptions being IPFS (the most mature DAG approach) and Sir Tim Berners-Lee's Solid Pods semantic web. Architecturally, the current internet is actually largely based on git, GitOps, and ad hoc siloed identity providers interoperating via conventional APIs.
      But none of these get to the real issue at hand: how do you minimize **complexity** as the number of entities explodes exponentially? Ultimately this has to be able to communicate between public and private spaces (like git does) and streamline addressing AND versioning. In short: how do you organize space AND time? (A toy illustration of this pattern appears below.)
      This is the crux of the next UX paradigm shift that would enable the mechanics the speakers cover in this video. It is also how my ibgib protocol differentiates itself: by focusing on reducing global complexity and handling the details of spacetime versioning.
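
      A toy illustration of that space-and-time pattern in git/IPFS terms: "space" is content addressing (a record's identity is the hash of its bytes), and "time" is versioning (each record links to its predecessor's hash, forming a DAG). This is a generic sketch, not the actual ibgib protocol:

      ```python
      import hashlib
      import json

      store = {}  # the "space": content address -> immutable record

      def address(record):
          """Content address: hash of the record's canonicalized bytes."""
          blob = json.dumps(record, sort_keys=True).encode()
          return hashlib.sha256(blob).hexdigest()

      def commit(data, parent=None):
          """Append a new version; the parent link is the 'time' dimension."""
          record = {"data": data, "parent": parent}
          addr = address(record)
          store[addr] = record
          return addr

      def history(addr):
          """Walk back through time from any address."""
          while addr is not None:
              record = store[addr]
              yield addr, record["data"]
              addr = record["parent"]

      v1 = commit({"title": "draft"})
      v2 = commit({"title": "final"}, parent=v1)
      for addr, data in history(v2):
          print(addr[:12], data)  # newest first, back to the root
      ```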

  • @brunoaziza
    @brunoaziza A month ago

    13:50 Great point on having an end-to-end agentic framework that spans clouds and applications, as opposed to a singular approach through assistants, which reinforces data and application silos.

  • @tangobayus
    @tangobayus 5 days ago

    In many cases, you can get an instant ROI by making a no-code chatbot, like GPTs, from a website, a collection of URLs, or documents.
    Small quantized models can run on consumer PCs and do useful work (sketch below).
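
    For instance, a minimal way to run a quantized model locally with the llama-cpp-python bindings (`pip install llama-cpp-python`); the model path is a placeholder for any GGUF-quantized checkpoint that fits in RAM:

    ```python
    # Load a quantized GGUF model and run a completion entirely on local hardware.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/small-model-q4.gguf",  # placeholder path
                n_ctx=2048)

    out = llm(
        "Summarize the key risks of agentic architectures in two sentences.",
        max_tokens=128,
        temperature=0.2,
    )
    print(out["choices"][0]["text"])
    ```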

  • @clifftanch
    @clifftanch A month ago +2

    It’s hard to believe that a shop like yours would not be an AI fanboy from the start.

  • @mulderbm
    @mulderbm A month ago

    Pure value this talk 🎉

  • @tabesink
    @tabesink A month ago

    Thank you for the great insight!