I took a shot at getting the autogen-magentic-one repo to work with Llama 3.2 Vision (11B) and had some decent results: ua-cam.com/video/-WqHY3uE_K0/v-deo.html
Enjoyed the way you leaned into the YT AI meta with the over-the-top video title :D Very interesting Magentic demo too!
LOL yeah, I had to do it for the algorithm.... Thanks!
A search on UA-cam for "magentic one" auto-corrects to "magnetic one", and only one on-topic video loads. Dunno if adding a "magnetic" tag would help, but it might until UA-cam catches up to it as a topic, at least.
Good point, thanks for that. I will add one into the description!
Nice video, keep these up. My only input: at the start, maybe show or explain what you're gonna do and, if you can, how it compares to other well-known alternatives (in this example, other agentic frameworks) to give us the value proposition of why to watch all the way through. But I did, and it was fantastic! Tnx, sub'd.
Thanks for the feedback and the sub! I am still working on refining this video style to make it more consistent and I will take your suggestions into account while doing that :)
Thanks for this. I didn't know this existed so...
Of course! I had learned about it recently myself.
Great video. Please show us how to connect to Ollama!!
Thank you! I plan to do a follow-up video on Ollama integration soon!
Here is the Ollama update: ua-cam.com/video/-WqHY3uE_K0/v-deo.html
Thank you very much for your video. In the orchestrator module, an error occurred while executing the code "ledger_dict: Dict[str, Any] = json.loads(ledger_str)". The error message is as follows: "json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)". May I ask what the reason for this is?
It sounds like an empty or invalid JSON response was returned by the model. The orchestrator expects the ledger results in JSON format, IIRC, so if the model does not respond with the expected syntax, this can happen.
When using this repo with a model other than GPT-4o, I noticed that I would sometimes get JSON formatting errors, but the run would keep going, since the model would return correct JSON some of the time and not others. I did not have this issue with GPT-4o itself, but I suppose it is possible.
If you are having this issue every time you run it, I would take a look in the orchestrator script and see if you can print the returned result, to determine whether the error lies in a badly formatted response or something else.
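In case it helps, here is roughly what I mean as a minimal Python sketch (the parse_ledger helper name is hypothetical, not the repo's actual function); wrapping the json.loads call like this prints exactly what the model returned before re-raising:

```python
import json
from typing import Any, Dict

def parse_ledger(ledger_str: str) -> Dict[str, Any]:
    """Hypothetical debugging wrapper around the orchestrator's ledger parse."""
    try:
        ledger_dict: Dict[str, Any] = json.loads(ledger_str)
        return ledger_dict
    except json.JSONDecodeError as exc:
        # An empty string, or prose wrapped around the JSON, triggers
        # "Expecting value: line 1 column 1 (char 0)".
        print(f"Ledger was not valid JSON ({exc}); raw model response below:")
        print(repr(ledger_str))
        raise
```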
Hey, sorry for bothering you, but I was wondering if you'd ever share your thoughts on the MSI 40-inch monitor. Would love to hear your long-term thoughts on it; I'd really value your opinion.
By the way, to give more context, I'm wanting to hear your opinions on it for productivity and design. I don't care if you don't game.
Of course. Truth be told, I am lukewarm on it. I had seen some other folks mention the text not appearing as clear as on some other monitors, and I have noticed that myself as well. It is perhaps a setting or something, but I keep it in "eco" mode as it is easiest on the eyes, and I haven't bothered to change any other settings since it works fine even with the weird text.
It also seems to not like powering back on after being off for a while. The one time I have had it off since purchasing it, it had a weird effect on the corners until it "warmed up". I am not sure if this is normal or not, but since it looks fine while on, I haven't bothered to investigate further.
In terms of productivity, it has been a big improvement over the old 27" 1440p monitors I had been using. The extra 500px on each side really makes things easier in terms of multi-tabbing, etc. I was never someone who liked having multiple apps open at the same time; I preferred having any single app full screen, which meant I had to switch back and forth a lot. This has totally changed that, and in fact I never have any windows full-screened on this thing anymore at all. For productivity rather than gaming, this is definitely a large improvement to my workflow at least, and I would imagine it would be the same for someone else's as well.
@@OminousIndustries Thank you so much for sharing your thoughts. A lot of what you've said aligns with what I was hoping I'd gain from it. I've always been a one-thing-max-on-a-screen guy (I ended up with a quad setup recently), so I hope that this ultrawide can change that in me. I do game with friends sometimes, but I hope the experience is good enough in-game to tide me over until more mature 38-40in OLED options enter the market.
Cool! I just tried it with gpt-4o-mini since it's cheaper, though it doesn't do the OCR, sadly.
Good thought to try it with that. The prices will go down as time progresses (hopefully)
GPT-4o-mini is not multimodal in nature. It doesn't support image input.
@@QuantumXdeveloper The OpenAI API does support sending it images, though. You can use it in both the playground and the API.
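For reference, a minimal sketch of what I mean using the OpenAI Python SDK (the image URL below is just a placeholder):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# gpt-4o-mini accepts image_url content parts alongside text,
# the same way gpt-4o does.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What text appears in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/screenshot.png"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```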
Is there a way to use the Groq API, by chance? Also, when I'm running the last example I'm getting [Errno 2] No such file or directory
Yes, but you would have to modify the code to point to Groq instead of OpenAI. As for the issue, I would check the issues tab in the repo to see if anyone else has dealt with that.
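Roughly, the change would look something like this, assuming the repo uses the OpenAI Python client (the API key and model name below are placeholders; Groq serves an OpenAI-compatible endpoint):

```python
from openai import OpenAI

# Point the existing OpenAI-style client at Groq's compatible endpoint.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key="YOUR_GROQ_API_KEY",  # placeholder; set your real key
)

response = client.chat.completions.create(
    model="llama-3.1-70b-versatile",  # example model name; check Groq's docs
    messages=[{"role": "user", "content": "Hello from Magentic-One"}],
)
print(response.choices[0].message.content)
```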
@@OminousIndustries Thanks for getting back to me. I ended up figuring it out; I was watching your Ollama video on how to run this with the local LLMs, and I got it all set up.
Edit: I got that working too. Great videos, bro, and thanks for uploading your Ollama setup to GitHub.
@@jayt4849 I am very glad to hear it; getting to use it with Ollama makes it feel more special, IMO, hahaha. Thanks for the kind words as well!
How does this compare to ChatGPT Pro?
Not really a comparison to be made, as the use cases for them are so different.