I designed an app that used Apple's Core ML implementation of Stable Diffusion, which was buggy and way behind the Python version. I could just about tolerate that, but the toxic culture around the image models and their usage made me give up. I'm sure the models can be put to amazing use, but independently creating a usable, ethical image model is impossible. And have you ever tried to use an LLM for Swift? It's just painful; even if you have a big context window running locally and give it documentation to read, it still doesn't work.
I haven't tried those, but I know exactly what you mean. The stuff I've been working on is generally based around gesture and object recognition, all trained on data I created myself, either by taking photos of those objects or by tracking my movements with accelerometer/gyro data. It's a TON of fun doing it all yourself, though, and creating little projects like that. Not a complete loss for you either; at least you can walk away having learned something new!
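For anyone curious what that workflow looks like, here's a minimal sketch of the image side using Apple's Create ML framework on macOS. The `photos` directory and `ObjectClassifier` name are made up for illustration; Create ML expects one subfolder per label. (There's also an `MLActivityClassifier` in the same framework for accelerometer/gyro traces.)

```swift
import CreateML
import Foundation

// Hypothetical folder of self-collected photos, one subdirectory per
// object label, e.g. photos/mug/*.jpg, photos/keys/*.jpg.
let trainingDir = URL(fileURLWithPath: "photos")

// Train an image classifier on the labeled directories.
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir)
)

// Export a .mlmodel that Xcode compiles into the app bundle.
try classifier.write(to: URL(fileURLWithPath: "ObjectClassifier.mlmodel"))
```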
They're not self-aware. I think it's important we make that distinction. They are just language models; they are not actually capable of rational thought. Linus Tech Tips had a pretty good video explaining how companies try to make it seem like they're decades ahead of where the actual technology is. These large language models have been laughably bad as consumer-facing products so far; studies have found ChatGPT gives inaccurate information more than 50% of the time. I wouldn't care so much except for Microsoft, which just acknowledged a 30% increase in its carbon emissions and won't meet its 2030 goal of going carbon negative, strictly because of the energy needed to cool all these LLM servers.
That’s terrible, and all for a little extra cash. Hopefully they can find some efficiencies along the way, but it’s absolutely a waste to spend so much energy on… a chat bot. I probably should have also explained that they’re not self-aware, despite what some people have been claiming.