Nice! Love it! I was looking for something like this today! So glad I decided to catch up on my langchain videos! hehe Cheers!
Thank you for this example! I just ran through it using the llama3.1 8B model and it worked flawlessly. llama3 does not work, but the 3.1 model did. I actually did not expect that.
Somehow I only get one tool call in my list as an answer, even if I ask a question that would warrant multiple tool calls in the response. The Ollama API is able to return multiple tool calls, and OpenAI as well.
I tried several models, including llama3.1, llama3, firefunction-v2, and the groq versions.
Could it be your system prompt that prevents returning multiple function calls?
very very clearly explained. Thanks.
This is THE content! Please take it to the top ---> source code link for longer script?
Hello @IdPreferNot1. Apologies, I do not understand. May I trouble you for specific instructions to find the link to the notebook? I am clearly missing something. Thank you for your help.
@@JDWilsonJr I'm saying it's great content. He'd make it better if he shared the source code he went through. :)
@@IdPreferNot1 Here it is! github.com/langchain-ai/langgraph/blob/main/examples/tutorials/tool-calling-agent-local.ipynb
Still killing it
I'm trying to use it with Ollama, but from another computer on the same network, and can't set the base_url.
I'm trying to set it like llm = ChatOllama(model="modelName", base_url="http::11434"...), but it doesn't work.
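For what it's worth, base_url needs a full URL with a scheme, host, and port; "http::11434" has no host at all. A minimal sketch of the difference (the 192.168.1.50 address is just a placeholder for the machine running Ollama):

```python
from urllib.parse import urlparse

# What was tried: no "//" after the scheme and no host, so nothing
# parses as a network location.
bad = urlparse("http::11434")
# bad.netloc is "" -- there is no host to connect to.

# What the client expects: scheme://host:port (11434 is Ollama's default port).
good = urlparse("http://192.168.1.50:11434")
# good.hostname == "192.168.1.50", good.port == 11434

# With that in hand, the call would presumably look like:
#   llm = ChatOllama(model="modelName", base_url="http://192.168.1.50:11434")
```

Also note that the Ollama server only listens on localhost by default; to reach it from another machine you typically need to start it with OLLAMA_HOST=0.0.0.0 (or similar) on the serving box.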
def bind_tools(
    self,
    tools: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]],
    **kwargs: Any,
) -> Runnable[LanguageModelInput, BaseMessage]:
    raise NotImplementedError()
Ollama's bind_tools says not implemented.
Use "from langchain_ollama import ChatOllama", not the one from the community models package.
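To illustrate the point with stand-in classes (these are mock-ups, NOT the real LangChain code): the community ChatOllama inherits the bind_tools stub that raises NotImplementedError, while the class in the langchain-ollama package overrides it.

```python
# Mock-ups showing why one import raises NotImplementedError
# and the other does not.
class BaseChatModel:
    def bind_tools(self, tools):
        raise NotImplementedError()

class CommunityChatOllama(BaseChatModel):      # stand-in: langchain_community
    pass                                       # no override -> stub is used

class OllamaPackageChatOllama(BaseChatModel):  # stand-in: langchain_ollama
    def bind_tools(self, tools):
        return f"bound {len(tools)} tool(s)"

community_ok = True
try:
    CommunityChatOllama().bind_tools([])
except NotImplementedError:
    community_ok = False                       # the legacy class fails here

ollama_ok = OllamaPackageChatOllama().bind_tools([lambda: None])
```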
Would be neat if NO proprietary/paid tools were used (e.g. for embedding or web search). But, of course, it's no big deal to do this ourselves. Thank you.
response = ChatOllama(
^^^^^^^^^^^
TypeError: 'method' object is not subscriptable
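That TypeError usually means square brackets were used where a call was intended; it's nothing Ollama-specific. A generic reproduction (the Demo class is purely illustrative):

```python
class Demo:
    def run(self):
        return "ok"

d = Demo()
try:
    d.run["x"]       # wrong: [] tries to index the bound method itself
except TypeError as e:
    msg = str(e)     # "'method' object is not subscriptable"

result = d.run()     # right: () actually calls the method
```

So it's worth checking the line right before the ChatOllama(...) call for a stray [ ] or a missing pair of parentheses.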
This is the code at about 3:48.
from typing import List
from langchain_ollama import ChatOllama

def validate_user(user_id: int, addresses: List) -> bool:
    """Validate user using historical addresses.

    Args:
        user_id: (int) the user ID.
        addresses: Previous addresses.
    """
    return True

llm = ChatOllama(
    model="llama3-groq-tool-use",
    temperature=0,
)
# %%
llm_with_tool = llm.bind_tools([validate_user])
# %%
result = llm_with_tool.invoke(
    "Could you validate user 123? They previously lived at "
    "123 Fake St in Boston MA and 234 Pretend Boulevard in "
    "Houston TX."
)
result.tool_calls
Excellent work, as usual :)
Can you share the code link?
Here it is! github.com/langchain-ai/langgraph/blob/main/examples/tutorials/tool-calling-agent-local.ipynb
It is good, but it is still not there. I did several tests where I give it two dummy tools to use, and it is able to distinguish quite effectively; however, it will always call the tools, even when asked not to. I tried different prompts, with no luck. Still, it is better than it was, and the package is nice :)
Great video!
Is this also available for Node?
Awesome!!!
could you please share a notebook link? thanks for making these videos
Here it is! github.com/langchain-ai/langgraph/blob/main/examples/tutorials/tool-calling-agent-local.ipynb
Hello Lance. Great presentation. Looking everywhere for your jupyter notebook. You introduce so many new concepts in your tutorials that it is almost impossible to reproduce visually from the video. I see the version you used in the video remained untitled through the end. Will you be posting the notebook in github examples like you have in the past? Your work is amazing and valuable and we are scrambling to catch up!
Here it is! github.com/langchain-ai/langgraph/blob/main/examples/tutorials/tool-calling-agent-local.ipynb
@@LangChain Sooo appreciate your response and the link. Keep up the great work.
At 3:32 I get an empty array when I run the exact same code; can you help me here? My langchain-ollama package version is 0.1.1 and I have tried both the llama3-groq fine-tuned model and llama3.1.
Yes, I have encountered the same problem! It puzzled me for half a day...
@@BushengZhang I am still not able to figure out the reason; I have checked the GitHub issues as well. Not sure if it's a bug or something else.
Oh, I have just found a solution: I switched to OllamaFunctions to structure the LLM's outputs, and it worked.
@@BushengZhang Oh, great. Can you please share the example code?
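Not sure which exact code was used, but a minimal sketch of that kind of structured-output workaround with a TypedDict schema (the names here are illustrative; the ChatOllama / with_structured_output calls are left as comments since they need a running Ollama server):

```python
from typing import List, TypedDict

class ValidateUser(TypedDict):
    """Schema the model's output is forced into."""
    user_id: int
    addresses: List[str]

# With langchain-ollama installed and a server running, the workaround
# would look roughly like this (not executed here):
#   from langchain_ollama import ChatOllama
#   llm = ChatOllama(model="llama3.1", temperature=0)
#   structured = llm.with_structured_output(ValidateUser)
#   result = structured.invoke(
#       "Could you validate user 123? They previously lived at "
#       "123 Fake St in Boston MA."
#   )
# result would then be a dict matching the schema:
expected_shape: ValidateUser = {
    "user_id": 123,
    "addresses": ["123 Fake St, Boston MA"],
}
```

The idea is that instead of relying on the model emitting tool calls, you constrain its output to a fixed schema, which some local models handle more reliably.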
Great content! Please, share the code 😃
Here it is! github.com/langchain-ai/langgraph/blob/main/examples/tutorials/tool-calling-agent-local.ipynb
@@LangChain Thanks !
Why does the same code return [ ], the empty list, for me?
from typing import List
from langchain_ollama import ChatOllama

def validate_user(user_id: int, addresses: List) -> bool:
    """Validate user using historical addresses.

    Args:
        user_id: (int) the user ID.
        addresses: Previous addresses.
    """
    return True

llm = ChatOllama(
    model="llama3-groq-tool-use",
    temperature=0,
).bind_tools([validate_user])

result = llm.invoke(
    "Could you validate user 123? They previously lived at "
    "123 Fake St in Boston MA and 234 Pretend Boulevard in "
    "Houston TX."
)
result.tool_calls
[ ]
@@hor1zonLin Were you able to figure out and fix the issue?
@@kuruphaasanhor1zonLin I have the same problem!