The best AI idea is to enrich yourself, something, or a system. For example a personal employee that does our casual online things to save time, e.g. finds the important emails and deletes/archives the rest.
Awesome perspective man, I was thinking about adding it to my app too, but I kept wondering how it would add value. Ultimately I think the best user experience would be to have both: a proper, complete UI to do things without using the chat, and then also the ability to manipulate stuff with chat, not only through text but also with voice. I don't know if I would want to type "I just did a squat", but saying it seems like it would be more natural.
I think there's a parallelism with early computers. We are still in the CLI era.
Exactly what the Bitcoin bros claim tho. The CLI had demonstrable benefits, whereas Bitcoin and AI are poorly shoehorned into everything, with weaknesses/future use cases justified by supposed advances in tech. Obviously it's not as extreme, and AI has great use cases *now*, but large claims need more than vague analogies.
Just have pre-baked prompts that users can click on that map to backend functions. You can render those pre-baked prompts as buttons that are always visible in chat off to the left or right.
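A minimal sketch of how those pre-baked prompt buttons could sit next to the chat; the prompt list, the `action` names, and the `onSend` wiring are all hypothetical:

```tsx
// Hypothetical pre-baked prompts; each maps to a backend function by name.
const PREBAKED_PROMPTS = [
  { label: "Log a workout", prompt: "Start logging a new workout", action: "createWorkout" },
  { label: "Show progress", prompt: "Show my progress for the last month", action: "getProgress" },
  { label: "Suggest a routine", prompt: "Suggest a routine for today", action: "suggestRoutine" },
] as const;

export function PromptRail({
  onSend,
}: {
  // The parent chat component decides how to dispatch the prompt/action pair.
  onSend: (prompt: string, action: string) => void;
}) {
  return (
    <aside className="prompt-rail">
      {PREBAKED_PROMPTS.map((p) => (
        <button key={p.action} onClick={() => onSend(p.prompt, p.action)}>
          {p.label}
        </button>
      ))}
    </aside>
  );
}
```

The nice part of this shape is that the user gets one-click access to the common paths while the free-text box stays available for everything else.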
A lot of the stuff the Nvidia CEO says is VC speak, but I think his presentation at Computex was actually kind of true. He was talking about how decreasing the cost of inference & AI in general will inevitably lead to it being used in new ways. So I think AI really might just be getting put everywhere, even in places where it seems overkill.
Maybe we'll do away entirely with the old full-text search bars and they'll just become a chat box with an LLM you can ask for directions. I bet a lot of software will try to make its UI operable multimodally, so you can just talk to the program and tell it what to do.
Yea definitely, I watch a lot of those and there is definitely a lot of good in between the VC speak. For me, what I'm trying to figure out now is how to use these LLMs and vector searches to get strong insights out, and give them to the user without them ever even knowing an AI was involved.
Hah, I was thinking about how AR headsets could improve life in the future. Right now you need to type what you did during the day, or what you're planning to do, so the LLM can recognize your interests and preferences and provide tools/widgets that could improve your life/efficiency: a movies/TV widget to track/recommend movies, a books widget, a gym widget. And in the future your AR headset could automatically detect what you're doing and provide hints on those activities; if you're doing squats, it gives you tracking for your gym activity, videos/books on how to do them correctly, a video on how to use a particular machine, etc.
All of those inputs to the tracker via AI are waaaaay more verbose than clicking the field and typing the number. It fails for the same reason voice assistants do imo. Same for navigation.
It all comes back to being the same UX as a shell terminal 😅
I think one solution is to have a help command and call attention to that command so users know to use it. The command tells you the other available abilities. This is a common pattern in CLIs and audio menu interfaces. But I also think a workout app would be best served as an app where, instead of forms/spreadsheets for entering information, it's a chatbot assistant behind a button or a chat text box.
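A rough sketch of that help-command pattern in TypeScript; the ability list and the `sendToLLM` callback are made up for illustration:

```ts
// Hypothetical list of abilities the assistant supports.
const ABILITIES = [
  "log <exercise> <weight>x<reps>  - record a set",
  "history <exercise>              - show your past sets",
  "plan                            - suggest today's workout",
];

// Intercept "help" before the message reaches the model, the same way
// a CLI prints usage text without executing anything.
export async function handleMessage(
  input: string,
  sendToLLM: (text: string) => Promise<string>
): Promise<string> {
  if (input.trim().toLowerCase() === "help") {
    return "Here's what I can do:\n" + ABILITIES.join("\n");
  }
  return sendToLLM(input);
}
```

Keeping "help" out of the model entirely means the list of abilities is always accurate and costs nothing to show.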
Making a Pokédex chat to learn gen UI. Seems really cool so far, just need to finish it up.
Completely aligned - the best UI will ALWAYS need human intervention (until actual AGI), and the kind of interface best suited for LLM code on the frontend is actually at the type/interface level (no pun intended). The basic mold of the UI would and should be very well defined, like the spreadsheet from your example, but its cells can dynamically conform to different types: different workouts giving different formats of fields you could edit, for example.
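Something like this sketch of the "typed cells" idea, with hypothetical field kinds and example data:

```ts
// A cell's type is a discriminated union; the UI renders the right
// editor for each kind, but the overall spreadsheet mold never changes.
type Field =
  | { kind: "number"; label: string; unit: string; value: number }
  | { kind: "duration"; label: string; seconds: number }
  | { kind: "select"; label: string; options: string[]; value: string };

// Different workouts conform the cells to different shapes (made-up data).
const benchPress: Field[] = [
  { kind: "number", label: "Weight", unit: "lb", value: 225 },
  { kind: "number", label: "Reps", unit: "reps", value: 5 },
];

const trailRun: Field[] = [
  { kind: "duration", label: "Time", seconds: 1800 },
  { kind: "select", label: "Terrain", options: ["road", "trail", "treadmill"], value: "trail" },
];
```

The LLM can then pick or fill a `Field[]` layout per workout, while the rendering code stays fixed and human-written.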
agreed, I honestly feel like the trick is to figure out how to make the user feel like there is no AI in play at all
Do you think it will be possible to do AI SDK RSC for Vue/Nuxt or other frameworks?
Yep, just have to do more custom work. Will have videos in the future!
This is just my personal opinion on how I would do this: make it mobile, and be able to interact with voice commands besides typing. I don't like to use voice commands in public, but I believe that's what would work best. And I'm saying this because your app seems to replace a few clicks with full text prompts. In any case, great product idea for learning! (And also great clickbait 👍)
Yea I was thinking of this as well, but had the exact same issue. Saying "I just benched 225x5" to myself in the gym is unhinged lmao, voice is not the move, but maybe an AI hidden in the background suggesting actions could work. Needs more exploration!
@bmdavis419 true. Also think about how you would apply it to your other app, like insiderviz. I think the approach with chats on web apps is to be more like an AI guide through your content, for when users don't know where to click/go to see some stuff, or when you want to quickly query something without having to sort and filter all the data displayed on the page. Imagine a user typing something like "Show me Microsoft stocks and history from the last month". Something like that. Making the traditionally useless bottom-left "help" chat useful.
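One hedged sketch of how that kind of query could be parsed into a structured filter, here using the Vercel AI SDK's `generateObject` since that's the SDK the video covers; the schema and the `fetchStockHistory` helper are invented for illustration:

```ts
import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// Hypothetical backend helper, standing in for however insiderviz queries its data.
declare function fetchStockHistory(
  ticker: string,
  from: string,
  to: string
): Promise<unknown>;

// The LLM's only job is to turn free text into this filter shape.
const filterSchema = z.object({
  ticker: z.string().describe("Stock ticker symbol, e.g. MSFT"),
  from: z.string().describe("Start date, ISO 8601"),
  to: z.string().describe("End date, ISO 8601"),
});

export async function answerFromChat(message: string) {
  const { object: filter } = await generateObject({
    model: openai("gpt-4o-mini"),
    schema: filterSchema,
    prompt: `Convert this request into a stock history filter: "${message}"`,
  });
  return fetchStockHistory(filter.ticker, filter.from, filter.to);
}
```

The existing sort/filter backend does the real work; the model only translates intent, which is exactly the "AI guide through your content" framing.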
The same topic again, in a near-duplicate video?
lol yea this is something I'm working on, I have a really bad habit of making very similar videos because of the way I do stuff, I get really obsessed with things and this channel is (currently) mostly stream of consciousness, but I've got a fix in place so this should happen less in the future, I just really wanted to get this angle on the topic out!
It looks like you are from Columbus, OH. I was born in Columbus but grew up in Chillicothe and Logan (Hocking Hills).
I'm actually at OSU right now!
@bmdavis419 that's awesome! I went to Columbus State Community College about a decade ago, but never went to OSU. I moved to Detroit and finished college at a school called Oakland University.
I didn't know Vercel was working on stuff like this, you have a great demo! I think the future is one where we use LLMs without text input. The user interacts with the website as they would normally, but the experience is generated or altered based on how they're using the app. In the background, the most basic version would be describing the user's actions in text to the LLM; asking the user to type everything is very power-user-mode kind of stuff.
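A small sketch of that "describe the user's actions as text" idea; everything here (the event log, the `askLLM` callback, the example descriptions) is hypothetical:

```ts
// Hypothetical event log: every UI interaction gets a plain-text description.
type UiEvent = { at: number; description: string };
const events: UiEvent[] = [];

export function track(description: string) {
  events.push({ at: Date.now(), description });
}

// e.g. track("opened the bench press history view");
//      track("changed the weight field from 215 to 225");

// Periodically hand recent activity to the model and ask it to suggest the
// next UI action, so the user never has to type a prompt themselves.
export async function suggestNextAction(
  askLLM: (prompt: string) => Promise<string>
): Promise<string> {
  const recent = events.slice(-20).map((e) => e.description).join("\n");
  return askLLM(
    `The user just did the following in the app:\n${recent}\n` +
      "Suggest one UI action to surface next, as a short label."
  );
}
```

The model never sees a user-typed prompt at all; the app narrates what's happening and surfaces the suggestion as a button, which fits the "user never even knows an AI was involved" goal from earlier in the thread.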
100% agree with your perspective. Solving the UX question for custom UIs related to LLMs is the new million dollar question.