I love that you're pointing out this exists. I think it would be even cooler to focus on your take and what it means. What are the implications in your expert opinion? You don't have to be right. I just want to hear what you think.
Hey thanks for the comment - I appreciate that. I will keep that in mind in future videos.
I've watched 3 videos and nobody can explain what these MCP things actually do
I believe MCP tries to provide a unified way to define context and tools for LLM usage. So far, devs have been building their own tools and tool definitions and providing those to the LLM. One thing you find down this path is that it's difficult to control how the LLM will use these tools, and it's also difficult to standardize tool outputs so LLMs can use the tools effectively. You also need to define access to tools on an agent-by-agent basis, so a standardized discovery method could be beneficial too (though it raises the question of how permissioning will be implemented). With this, vendors could better control how their tools are published to LLMs, and LLMs can benefit as well, since tool-usage fine-tuning can be done for popular vendor tools.
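To make that concrete, here is roughly what such a standardized tool definition and discovery response look like. The shape follows MCP's tools/list response as I understand the spec (a name, a description, and a JSON-Schema input definition); the search_documents tool itself is a made-up example.

```python
import json

# Hypothetical tool definition in the standardized shape: the JSON Schema
# tells the LLM exactly how the tool expects to be called.
search_tool = {
    "name": "search_documents",
    "description": "Full-text search over a document store.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "limit": {"type": "integer", "default": 10},
        },
        "required": ["query"],
    },
}

# Discovery: a client asks the server what tools exist and gets back a
# uniform list, instead of each vendor inventing its own registration format.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [search_tool]},
}
print(json.dumps(tools_list_response, indent=2))
```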
You can talk to your LLM about your files
In simple terms, MCP is a standardized way to connect AI models (like Claude) with different data sources - imagine it as a universal translator that lets AI assistants talk to various systems like databases, code repositories, or document storage.
MCP = API
It will improve LLM response quality by giving the LLM the ability to list and perform requests, instead of depending entirely on RAG.
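A rough sketch of that list-then-call flow, in Python. The tools/list and tools/call method names follow the MCP spec as I understand it; the tool name and arguments are illustrative, not from a real server.

```python
import json

# Step 1: the client asks the server what tools are available.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Step 2: the model picks a tool and the client asks the server to run it.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_documents",                  # hypothetical tool
        "arguments": {"query": "quarterly report"},  # filled in by the model
    },
}

for msg in (list_request, call_request):
    print(json.dumps(msg))
```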
Anyone else having issues installing on Windows? Done everything right but cannot get Claude to attach to MCP, even though it’s all there in Dev settings. Help!
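Not a guaranteed fix, but for anyone comparing notes: on Windows the Claude Desktop config usually lives at %APPDATA%\Claude\claude_desktop_config.json, with servers registered under an mcpServers key. The sketch below writes such a config using the reference filesystem server; the server entry and paths are assumptions, and Claude Desktop typically needs a full quit (from the system tray, not just closing the window) before it picks up changes.

```python
import json
import os
from pathlib import Path

# Location of the Claude Desktop config on Windows (assumption based on docs).
config_path = Path(os.environ["APPDATA"]) / "Claude" / "claude_desktop_config.json"

config = {
    "mcpServers": {
        # Hypothetical entry launching the reference filesystem server via
        # npx; adjust the command and allowed directory for your machine.
        "filesystem": {
            "command": "npx",
            "args": [
                "-y",
                "@modelcontextprotocol/server-filesystem",
                "C:\\Users\\me\\Documents",
            ],
        }
    }
}

config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(config, indent=2))
print(f"Wrote {config_path}; fully quit and restart Claude Desktop.")
```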
Sounds like a cross between automation tools like n8n and assistants where the assistants can access a variety of data sources without needing custom API integrations
Are you going to show us how to use this anytime this year?
How are you using tool calling in Claude web?
I believe that would have been Claude for Desktop
@jnevercast Is that a Mac-only feature?
It's basically a purpose-built vector RAG-type solution, as opposed to using those for the same purpose with less accuracy.
I'll be trying it, should be easy to get going
Is there an RFC?
What is the value here? Reading out the blog post?
So the core concept is the "old" tool calling of modern LLMs, but in a weird new standard.
OpenAI did those ChatGPT plugins on top of that, then later changed to function calling inside Custom GPTs.
LangChain and the community built a lot of tools based on that too.
Context Protocol doesn't seem to be the proper name.
Excellent! Keep up the great communication! Teaching….
Thank you! 🙏
how secure is the data that the AI is reading?
Some info on Anthropic, if interested:
Anthropic builds frontier AI models backed by uncompromising integrity. With accessibility via AWS and GCP, SOC 2 Type II certification, and HIPAA compliance options, Claude adheres to the security practices your enterprise demands.
www.anthropic.com/claude
I am really interested in this because of RAG, but I doubt what they are saying.
I have not implemented this in low-level code, but this seems like another "standard" API of the kind many companies have proposed and that has failed to catch on. It's "one API to rule them all", but it's Anthropic ($18 billion) vs. Google, Microsoft, Apple, etc. ($∞ billion), with Anthropic telling them what to do. This has never worked and will never work, because of something called a "technology moat".
The only company that has countered this is Zapier, but only because they wrote 10,000 integrations with different data providers.
Please discuss this with me if you are in the know.
Implementing MCP makes it easier for Anthropic to implement RAG / search, but much harder for other companies. Other companies will not implement an API for you... They will only implement things that make them money. This will not make them money when they can spin up an Ollama instance and implement decent LLM search support themselves. The "moat" / "walled garden" approach will always make more money.
@AlanJames1987 Strong disagree.
This is a simple open-source protocol + library; other companies would be crazy not to get involved so they can influence its direction.
AI companies know they don't have a moat. What matters is the integrations: taking deployable intelligence and giving it access to things so it can do something useful.
MCP is built on top of standard inter-process communication (stdio) and async messaging over HTTP (server-sent events). The protocol is simple and already based on "standards" like conversation messages and tool calling, which OpenAI started and now every LLM API uses (see the sketch below).
So yeah, I imagine this will be adopted.
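For anyone curious what that stdio transport amounts to, here is a minimal sketch: newline-delimited JSON-RPC read from stdin, with responses written to stdout. A real MCP server also handles initialization and capability negotiation; the dispatcher here is purely illustrative.

```python
import json
import sys

def handle(request: dict) -> dict:
    """Toy dispatcher: map a JSON-RPC request to a JSON-RPC response."""
    if request.get("method") == "tools/list":
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "result": {"tools": []}}  # a real server advertises its tools here
    return {"jsonrpc": "2.0", "id": request.get("id"),
            "error": {"code": -32601, "message": "Method not found"}}

# Newline-delimited JSON-RPC over stdin/stdout: the stdio transport.
for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    sys.stdout.write(json.dumps(handle(json.loads(line))) + "\n")
    sys.stdout.flush()  # the host reads responses as they arrive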
I guess time will tell. I've seen this play out 6-8 other times in my 20-year career as a programmer. But I wish this the best and think this will be a success for the first time ever. Oh yeah, unrelated: please look up "XKCD Standards". It's not related to this at all.
Unfortunately it doesn't work, or maybe I just couldn't get it working.
Brilliant
The audio of this vid is fake; the speaker doesn't inhale or exhale while speaking
This is my voice lol
You’ve never heard of video editing before? 😂
😂
Still don't understand what this shit is and how you can use it for something other than Claude