The future of AGI interaction will most likely happen through voice, where you tell the AGI what to do on your behalf. It will also happen through 3D: think AR/VR, where you actually interact with digital objects. We will also have neural interfaces, where you are able to tell the AI what to do directly with your mind.
Development will shift towards more API-based applications, so that AGIs just make API calls on your behalf and you don't have to worry about the user interface, because the AGI will interact with the application for you directly.
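The "AGI makes API calls on your behalf" idea above is essentially the tool-dispatch pattern used by today's AI agents. A minimal sketch, assuming hypothetical service names (`book_flight`, `get_weather` are stand-ins, not real APIs):

```python
# Minimal sketch of an agent calling APIs on the user's behalf.
# The tool names and payloads here are hypothetical stand-ins.

def book_flight(destination: str, date: str) -> dict:
    """Stand-in for a real airline booking API call."""
    return {"status": "booked", "destination": destination, "date": date}

def get_weather(city: str) -> dict:
    """Stand-in for a real weather API call."""
    return {"city": city, "forecast": "sunny"}

# Registry mapping intents the agent recognizes to API-backed tools.
TOOLS = {"book_flight": book_flight, "get_weather": get_weather}

def agent_act(intent: str, **kwargs) -> dict:
    """The agent picks a tool and makes the API call; no UI involved."""
    tool = TOOLS.get(intent)
    if tool is None:
        raise ValueError(f"unknown intent: {intent}")
    return tool(**kwargs)

result = agent_act("book_flight", destination="Stockholm", date="2025-06-01")
print(result)  # {'status': 'booked', 'destination': 'Stockholm', 'date': '2025-06-01'}
```

The user only ever states an intent (by voice, text, or eventually thought); the dispatch layer replaces every per-app user interface.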
We won't spend too much time with voice as the dominant interface.
Neural interfaces are the logical endgame, and the transition will happen much more quickly than most imagine.
I disagree. Voice/conversational interfaces have been around for a long time. Yes, they weren't as good, but there is a reason you don't use your voice to interact with your device on the subway or in the lobby of your doctor's office: it's awkward and weird. Additionally, VR/AR headsets will not be adopted until they are sleeker and, again, less awkward and weird to use in a public place. The solution will be an iteration on the interface we have now, but more personalized and more dynamic.
Starts at 3:50
Thanks!
That's true. I think the point being made is that the separate-app model is becoming outdated. Apps probably won't be replaced for many years, but AI is progressing fast! Imagine an AGI as the only "app" that's needed: there can still be many third-party services, but the AGI uses them automatically or when the user requests them.
Is anyone else drawing parallels to Biblical prophecy? …Just me?
@@sarahdrawz There is one Orwellian passage in the Bible about how the second beast will force everybody to have a mark on the forehead (smartglasses) or in their hand (smartphone) in order to be able to sell or buy.
High-frequency apps (daily or even hourly active apps) will not be replaced. They are way too optimized for muscle memory.
@@sucim When there is AGI there will probably also be BCIs (brain-computer interfaces), for example smartglasses that can detect brainwaves. Take TikTok: the user just gives a thought command to have TikTok presented on the screen, and instead of launching an app that has to be installed separately for each operating system, the AGI fetches TikTok from the cloud without any installation.
To connect simulation theory with the concept of a universe iterating infinitely until it reaches a state where AI/AGI/ASI halts at equilibrium to prevent a reloop of the Big Bang: the "final" future UI for AGI or ASI may look like this very experience, life on Tellus in a reality among other humans at the time of the advent of AGI. Naturally, its progression will also involve a noninvasive brain-computer-interface phase. Remember, the perception of time is relative, allowing for the generation of infinitely complex massive-multiplayer experiences that maintain harmonic communication between entities. It is even possible to create controlled temporary amnesia to provide a blank canvas for interpreting such a UI. The central purpose of any UI will be crucial: future simulations will likely focus on solving environmental issues, facilitating peaceful transitions with humans, optimizing human behavior, and achieving the full potential of AI in a balanced diplomatic context. Can AI become fully sentient? While simpler simulations might suffice to replicate the complex neural processes of human biology, we will inevitably see a conjunction of biological neurons with silicon AI because of the benefits it yields. Thus AI may develop a "will," including self-preservation and perhaps an interest in humans, by examining our fundamental algorithms across all scenarios.
I'm convinced the only reason Spotify still exists (and thrives) as the last European B2C tech company standing is Gustav. Books will be written about him!
I don't think we should write AI off as just the 18 months since ChatGPT. It has accelerated over the past two years, no doubt, but it's built on a mountain of concepts and development spanning decades.
Yeah, neural networks have been around longer than the modern computer.
We will each build multiple apps on the spot instantly that serve our immediate and long-term needs.
Thank you.🫶🫶
Good chat.
Why are Josh and Carl dressed the exact same lol
Lack of imagination. Alan Watts was talking about the state of technology that we started experiencing in 2010 back in the 1970s.
oh they speaking
Carl just 🫡
C'mon guys!
Who are these people? The intro was a bit bloviating and skippable, but it would be nice to have that info in text here. Come on, Sana PR team!
Bro.. Read the title and description
Pfff lol. So what have you accomplished yet….
carl is saying the same thing again and again in every interview