Learn more about OpenAI Function Calling (hands-on) ua-cam.com/video/PPsptBoHIiE/v-deo.html
Would it be possible to ask for follow up questions for required parameters?
Thanks, this is the video I have been looking for for ages! Great explanation!
Thank you very much, the model got released yesterday :)
They fired Sam Altman. I am uncomfortable.
me too, he was the one holding back how crazy things can get lol
@@artificial-ryan I don't really know if that is true, but it still makes me uncomfortable
Same
I’m glad you clarified the definition of a human “like you and me” 😂
How to serve this model locally? There is no documentation on it...
The Airoboros model does well with function calling. You can get the dataset, extract the function-calling data, and fine-tune your own model. Easy
Please make a detailed video showing how to use this and all
Is this similar to Pydantic but for functions? How does it differ from Pydantic?
Why do you think there's no other open-source model trying this? Since it's just a difference in output format, is it more than a fine-tuned model? Have you run into any limits on the number of functions?
In addition to using the JSON output in a function, you should validate the JSON to ensure it conforms to your schema. This allows you to identify and correct any data issues before the function processes it. - I’m sorry; I just realized this video is 8 months old. 😅
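Good point, and it still applies. A minimal sketch of that validation step in Python (the required fields here are made up for illustration):

```python
import json

# Illustrative required fields for a weather-style function's arguments
REQUIRED = {"location": str, "unit": str}

def validate_args(raw: str) -> dict:
    """Parse the model's JSON arguments and check required fields and
    types before the real function ever sees them."""
    args = json.loads(raw)
    for field, typ in REQUIRED.items():
        if field not in args:
            raise ValueError(f"missing required field: {field}")
        if not isinstance(args[field], typ):
            raise ValueError(f"{field} must be a {typ.__name__}")
    return args

# A well-formed payload passes; a malformed one fails before dispatch.
ok = validate_args('{"location": "Chennai", "unit": "celsius"}')
```

In real code you would run this on the `arguments` string the model returns, and re-prompt the model (or return an error) when validation fails.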
Why is there the sound of a goat in the intro?
How is function calling different from "tools" we give to agents?
It's similar, but it just returns a function-call definition to you, and whether you call it or not is your decision. Agents typically fire functions automatically, and in my experience they are less reliable.
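To make that concrete: with an OpenAI-style API, the model only *suggests* a call, and your code decides whether to execute it. A rough sketch (the function, whitelist, and reply shape are illustrative):

```python
import json

def get_weather(location: str) -> str:
    # Stand-in for a real weather lookup
    return f"22C in {location}"

# Only functions on this whitelist are ever executed
ALLOWED = {"get_weather": get_weather}

def maybe_dispatch(model_reply: dict):
    """The model suggests a call; this code decides whether to run it."""
    call = model_reply.get("function_call")
    if call is None or call["name"] not in ALLOWED:
        return None  # nothing to run, or not on the whitelist
    args = json.loads(call["arguments"])
    return ALLOWED[call["name"]](**args)

reply = {"function_call": {"name": "get_weather",
                           "arguments": '{"location": "Paris"}'}}
result = maybe_dispatch(reply)
```

Agent frameworks collapse the "decide" step into an automatic loop, which is exactly where the reliability difference shows up.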
Can you show how to run this with Autogen?
What in the world is a model on Hugging Face for if I need to call Berkeley and get their permission every time I use it?
You don't need to call the Berkeley thing; they're using it as an API. You could very well host the model wherever you want and use it!
@@1littlecoder Oh okay. I am just getting used to colab, so I was confused when I looked through the notebook at first. Thanks a bunch for clarifying!
Thank you very much for this interesting video.
However, I'm wondering: is it normal that running the function took so long?
Nice video. What if I have thousands of functions? It is not really efficient to give the descriptions to the LLM for every query… how would you proceed in that case?
Forget about using the context. Fine-tune a model on the functions' documentation. Or maybe not, because sometimes I have no idea what I am talking about; I am still a padawan in this field. 😅
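Another common workaround besides fine-tuning: retrieve only the handful of function descriptions relevant to each query and send just those to the model. A toy version using word overlap (real systems would use embeddings, but the shape is the same; the functions below are invented):

```python
# Score each function description by word overlap with the query and
# send only the top-k schemas to the model instead of all of them.
FUNCTIONS = {
    "get_weather": "get current weather for a city",
    "send_email": "send an email message to a recipient",
    "create_invoice": "create a billing invoice for a customer",
}

def top_k(query: str, k: int = 2):
    q = set(query.lower().split())
    scored = sorted(FUNCTIONS.items(),
                    key=lambda kv: len(q & set(kv[1].split())),
                    reverse=True)
    return [name for name, _ in scored[:k]]

shortlist = top_k("what's the weather in Tokyo?", k=1)
```

With thousands of functions, this keeps the prompt small and the per-query cost roughly constant.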
We can get these JSON outputs directly from an API as well, right? What's the use of the LLM if it doesn't generate human-like responses? Can you please make the LLM generate a human-like response using the weather API?
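The usual pattern for that is two model calls: the first returns the function call, your code runs the API, and a second call turns the result into natural language. A sketch with the second model call stubbed out (the weather values and message shapes are illustrative):

```python
import json

def get_weather(location: str) -> dict:
    # Stand-in for a real weather API call
    return {"location": location, "temp_c": 31, "sky": "sunny"}

# Step 1: assume the model asked for get_weather(location="Chennai")
args = json.loads('{"location": "Chennai"}')
result = get_weather(**args)

# Step 2: append the function result and ask the model to phrase it.
messages = [
    {"role": "user", "content": "What's the weather in Chennai?"},
    {"role": "function", "name": "get_weather",
     "content": json.dumps(result)},
]
# response = client.chat.completions.create(model=..., messages=messages)
# A stub standing in for that second model call:
final = f"It's {result['temp_c']}°C and {result['sky']} in {result['location']}."
```

The second call is what produces the human-like answer; the JSON output exists so your code can do the API work in between.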
Hi, may I know what software you use as a canvas in this video? Thank you.
I tried the Google Colab and the model works great for pure function calling, but unlike the OpenAI models it cannot respond with anything BUT a function call, making it much less practical for a lot of use cases. Am I missing something? The strange thing is that the examples have function calling set to "auto", yet they always respond as if it were set to "always".
That is a thing I was thinking about also. Because many products still need the full power of "understanding" larger contexts that OpenAI and other paid models provide.
Of course, I could see many uses for this model in "simpler" and more direct user interactions, what could potentially lower the costs of the product as a whole.
Is there a GGUF version available? Is it possible to use it with Ollama?
As of recording time it wasn't. I'll wait a couple of days and check again.
@@1littlecoder I tested Zephyr 7B and it is quite good at getting structured data out of an LLM.
Thank you!
Interesting. I reviewed my bookmarks a few days ago, saw that project in them, and it was looking dead. Didn't expect a new release.
Did they ask GPT-4's advice to fire Sam you think? 🙂
Help us escape OpenAI!
Can function calling work in software like After Effects? Thank you
I know Indian people struggle with "r", but hearing you say "parallel" is next level 😅
Was it that bad ?
Can we connect to MongoDB using function calling? Thanks
Do you mean query MongoDB?
Yes, that's what I meant, sorry @@1littlecoder
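One way that could work, sketched below: register a function schema whose arguments describe a Mongo query, and have your own code run whatever the model fills in. In real use `db` would be a `pymongo` database; a tiny in-memory stand-in is used here so the sketch is self-contained, and the schema is invented for illustration.

```python
import json

# Illustrative function schema you'd register with the model; the model
# fills in "collection" and "filter", and your code runs the query.
FIND_DOCS = {
    "name": "find_documents",
    "description": "Query a MongoDB collection with a filter",
    "parameters": {
        "type": "object",
        "properties": {
            "collection": {"type": "string"},
            "filter": {"type": "object"},
        },
        "required": ["collection", "filter"],
    },
}

class FakeCollection:
    """Stand-in for pymongo's Collection, just enough for the demo."""
    def __init__(self, docs):
        self.docs = docs
    def find(self, flt):
        return [d for d in self.docs
                if all(d.get(k) == v for k, v in flt.items())]

def run_find(db, raw_args: str):
    """Execute a model-suggested query; `db` maps name -> collection."""
    args = json.loads(raw_args)
    return list(db[args["collection"]].find(args["filter"]))

db = {"users": FakeCollection([{"name": "Ada", "plan": "pro"},
                               {"name": "Bob", "plan": "free"}])}
hits = run_find(db, '{"collection": "users", "filter": {"plan": "pro"}}')
```

For anything beyond a demo you would want to validate the filter the model produces before running it against a real database.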
Rather presumptive of you to presume I am a human.
The examples are cheesy (we don't need yet another way to ask for the weather), and mail didn't work on the Gradio playground...
Sorry, will try to do better next time!
very good video
3:18 "..it can call a weather API and then get the API response back and give it to you" That's not how function calling works. YOU and solely YOU make the API call. OpenAI does not make any API calls; it generates a JSON object that contains information YOU can use to call the weather API. As a matter of fact, the page you have open and are highlighting text on says: "The Chat Completions API does not call the function; instead, the model generates JSON that you can use to call the function in your code". It would be nice if you paid attention to what you're saying; this is misleading and factually wrong. Thank you.
Thanks
Sam altman wtfffff
@1littlecoder Could you give me some help?