Dude, thanks for being about the only person on youtube that goes beyond reading the press release!
yea cause they’re mostly just reporters. A couple of devs but if you know who I’m talking about they’re just there for the hype and dgaf
I love the full transparency and honesty, one of my favorite AI channels!
Was excited to see the blog post, even more excited to see Dave uploaded a video about it!!
Hi Dave! Thanks for another wonderful piece of material. I really appreciate your work and the way you explain things. The best in class, IMO. One small technical note: I noticed some mouth clicks and harsh high-frequency sounds in the recording. It would improve the overall audio quality if you could address these issues. Keep up the wonderful job!
Thanks and I will definitely look into this!
Dude THANK YOU for the detailed explanation. I had no idea about pydantic
Thanks!
Thanks!! 🙏🏻
Great coverage and examples. Appreciated the comparison with Instructor. Thanks!
Excellent video, I hope your channel grows to millions of subscribers!
Wonderful explanation, straight to the point, thank you!
Subscribed ✅
Very useful! Thank you for this video!
Thank you! Very clear and helpful.
Love it, worked with Grok as well. cool!
Nice! Will give it a try, definitely. Thanks for sharing this.
You're welcome Catalin!
Great job yet again! Would love a tutorial on using this new release for UI components.
Wow! thanks for sharing man.
Awesome video Dave, thanks a lot!
Great stuff, just what I needed to make my project err a bit less
I love this channel
Killing it Dave!
Thanks Brock! 🙏🏻
Excellent instruction.
This is great and very hands-on.
Great content, I'm very curious to see how Instructor will utilize these updates
Super interesting. I'm captivated by structured output. It would be great if you had a comparison between methods/frameworks. It seems that Instructor is still the best.
Simply the best!
Awesome Videos man
What IDE is it that you are using? That's cool that you can step through it like that.
I could be very wrong here, but as for using less-than-or-equal within a call and Pydantic, wouldn't it work the same as in Instructor if you just create a class for that and then use those named variables? They have an example in the documentation like what I put below. Or am I reading into it wrong? Is that a different use case from what you were saying at 20:40? Love the video, thank you!
from enum import Enum

class Operator(str, Enum):
    eq = "="
    gt = ">"
    lt = "<"
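A minimal sketch of that pattern, using only the standard-library Enum (the `le` member for the less-than-or-equal case is my own addition for illustration; it's not in the snippet above):

```python
from enum import Enum

class Operator(str, Enum):
    """Comparison operators the model is allowed to pick from."""
    eq = "="
    gt = ">"
    lt = "<"
    le = "<="  # hypothetical less-than-or-equal member

# Because Operator subclasses str, a raw value coming back from the model
# can be validated simply by constructing the enum from it.
op = Operator("<=")
print(op is Operator.le)  # True
```

Pydantic will also enforce this automatically when the field is typed as `Operator`, which is exactly how the Instructor docs use it.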
Hi, thanks for the video. Can you kindly share some insights on how I can use the gpt-4o-2024-08-06 model with the response_format parameter as the LLM in Autogen's assistant agent?
Full machine learning project please 🙏
Nice work, Dave! Another good video.
A small, no-strings-attached tip: your words with TH sound a bit "dry" (as they do for many Dutch speakers). An exercise that works well for this is to stick your tongue just past your teeth and, while holding the edge of your tongue against only your upper teeth, pronounce "this", "that", and "these" at 0.5x speed, keeping your tongue against your upper teeth for the whole TH sound; the H then follows automatically. Practice a little every day and soon people may not even be able to hear that you're Dutch!
Not that my English is perfect, but it's certainly at least as good as yours 🤩
Not that I came here to bash your pronunciation 😂 but I once got this tip myself and it helped me.
Can you help me understand when or why I would ever use function calling now?
If I get all of the necessary arguments in a structured output, I can make my own conditional logic on what functions to call and which arguments to send
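That pattern can be sketched roughly like this (the action names and argument shapes are made up for illustration; the `output` dict stands in for a parsed structured output from the model):

```python
# The model only decides *what* to do; our own code decides how to run it.
def send_email(to: str, body: str) -> str:
    return f"emailed {to}"

def create_ticket(title: str) -> str:
    return f"ticket: {title}"

# A plain dispatch table instead of the function-calling API.
HANDLERS = {"send_email": send_email, "create_ticket": create_ticket}

def dispatch(output: dict) -> str:
    """Route a structured output shaped like {"action": ..., "args": {...}}."""
    handler = HANDLERS[output["action"]]
    return handler(**output["args"])

print(dispatch({"action": "send_email", "args": {"to": "bob@example.com", "body": "hi"}}))
# → emailed bob@example.com
```

The main thing function calling still buys you is that the API manages the tool schemas and the multi-turn tool-result loop for you; with the dispatch-table approach, that bookkeeping is yours.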
Haven't been able to make it work consistently using 4o-mini. I need to resubmit the request several times for it to work.
Hi bro, love your videos. I need your advice on how to add memory to a SQL agent; ConversationBufferMemory is not good enough. Can we have a tutorial using LangGraph? Thanks!
Is it possible to maintain context with Assistants and still use structured outputs via the API?
For several interactions with an assistant, that is.
Can we use these new updates with Runs as well, where we are using multiple function calls?
Great video! I have a question about the API itself. I notice you are using the OpenAI client rather than a Python requests library call or the like. Is there a reason for this? I have seen some sources online recommend against using the client when possible because of dependency issues, but I'm wondering where you stand on this. Thanks!
Thanks! Either way is fine. The Python OpenAI client does have dependencies, but so does the raw API interface, which gets an update every now and then. Just pin your openai pip version in requirements.txt.
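For anyone who does want to skip the client: the underlying call is just an HTTP POST with a JSON body. A rough sketch of building that request by hand (nothing is sent here, the request is only constructed; the actual send line is commented out and assumes the `requests` package):

```python
import json
import os

# The same chat completion the client makes, expressed as a plain HTTP request.
# Assumes OPENAI_API_KEY is set in the environment.
url = "https://api.openai.com/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    "Content-Type": "application/json",
}
payload = {
    "model": "gpt-4o-2024-08-06",
    "messages": [{"role": "user", "content": "Hello"}],
}
body = json.dumps(payload)
# requests.post(url, headers=headers, data=body)  # with `pip install requests`
print(json.loads(body)["model"])  # → gpt-4o-2024-08-06
```

The trade-off: with raw requests you handle retries, streaming, and response parsing yourself, which is exactly what the client bundles for you.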
This JSON schema thing is new to me. Did you write this manually?
Isn't this difficult to learn?
You can generate that JSON schema using Pydantic. He talks about that at 16:00
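A quick sketch of what that looks like with Pydantic v2 (the `CalendarEvent` model here is just an example; any BaseModel works the same way):

```python
from pydantic import BaseModel

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

# Pydantic writes the JSON schema for you; no hand-written schema needed.
schema = CalendarEvent.model_json_schema()
print(sorted(schema["properties"]))  # → ['date', 'name', 'participants']
```

The OpenAI Python client's `client.beta.chat.completions.parse` accepts the model class directly via `response_format=CalendarEvent`, so you normally never touch the raw schema at all.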
@@HerroEverynyan Got it, thanks!
Sir, I'm working on my final-year project. There are 2 main modules in it:
1) a previous-year-paper detailed analysis system, along with sample paper generation as per trends, and a study roadmap provider
2) a notes generation module from textbook content
I'm confused about what to use where: a fine-tuned LLM, RAG, or something else?
Can you please explain? It is for engineering students (1st-4th semester, each with 6 subjects), across 7 different branches.
Vercel has already updated their AI SDK to support this. Example:
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { object } = await generateObject({
  model: openai('gpt-4o-2024-08-06', {
    structuredOutputs: true,
  }),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(
        z.object({
          name: z.string(),
          amount: z.string(),
        }),
      ),
      steps: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});
How to use json schema with Openai Assistants? They don't seem to support it natively
Hey Dave, really nice video! I was wondering if I could help you with higher-quality editing for your videos, make highly engaging thumbnails, and help with your overall YouTube strategy and growth. Please let me know what you think!
Have you managed to get this working with batch processing?
If you could make a video on structured outputs with batch processing I'd be eternally grateful
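In case it helps in the meantime: the Batch API takes a JSONL file where each line is a complete request body, so `response_format` can ride along on every line. A rough sketch of building one such line (the `event` schema is a made-up example):

```python
import json

# One line of a Batch API input file: a normal chat.completions request body
# wrapped with a custom_id so results can be matched back later.
request_line = {
    "custom_id": "task-1",
    "method": "POST",
    "url": "/v1/chat/completions",
    "body": {
        "model": "gpt-4o-2024-08-06",
        "messages": [{"role": "user", "content": "Extract the event."}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "event",
                "strict": True,
                "schema": {
                    "type": "object",
                    "properties": {"name": {"type": "string"}},
                    "required": ["name"],
                    "additionalProperties": False,
                },
            },
        },
    },
}
jsonl_line = json.dumps(request_line)
print(json.loads(jsonl_line)["custom_id"])  # → task-1
```

Each output line in the batch result then carries the same custom_id, so the structured responses can be joined back to their inputs.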
Hey Dave, I was just trying it, but with Azure OpenAI... do you know if it works with AzureOpenAI or just with the direct client from OpenAI?
The new model and API will probably be available in the coming days (that is usually what happens with new releases).
How do I use these with LCEL?
gr8!
Error occurred: 'ChatCompletion' object is not subscriptable
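That error usually means the response object is being indexed like the old pre-1.x dict responses (`response["choices"]`); since openai v1.x the client returns objects you access by attribute instead. A stand-in sketch of the difference, using `SimpleNamespace` to mimic the response shape so no API call is needed:

```python
from types import SimpleNamespace

# Mimic the shape of a v1.x ChatCompletion object, which is NOT a dict.
response = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="hello"))]
)

# Old style raises TypeError on the real object: response["choices"]
# New style uses attribute access:
print(response.choices[0].message.content)  # → hello
```

So switching `completion["choices"][0]["message"]["content"]` to `completion.choices[0].message.content` should clear the error.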
How can this be done with Langchain?
I think it can't yet. I have a feeling it only works by using the OpenAI API directly.
I'm not seeing the 04_structured_output.py in the repo, just FYI.
Thanks for that! Something got stuck in git. Here's the link: github.com/daveebbelaar/openai-python-tutorial/blob/main/04%20Structured%20Output/04_structured_output.py
Hey! Has anyone come across this error message?
AttributeError: 'Beta' object has no attribute 'chat'
I'm using Python 3.12.3 and openai 1.40.5.
Can someone explain this to me like I'm a kid (honest question):
what's the purpose of OpenAI structured output?
Is it just me, or is there something in the audio that irritates my eardrums? 🤔
Yeah, he needs to EQ and compress his audio channel; the high end is accentuating his s's and high-frequency noise.