This is the first time I've enjoyed a discussion on AI governance and its implications. The discussion is delightful, and the viewpoint is pragmatic. It's a must-listen!
This is such a brilliant discussion. Kudos to both, but Hannah deserves a special mention for her probing questions.😊
Is 8:21 even true? Don't we regulate nuclear technology itself, not simply the "use" of nuclear technology?
Or did I miss his main point?
About avoiding bias at all points... There is a basic assumption that known science is right. There are always people with different views about the errors that creep into science and block further progress. They are often a very small minority, but that minority will contain the people who make future scientific progress. Care must be taken not to squash their views, because AI has the potential to evaluate these ideas against the mainstream. In physics, cosmology, climate change and other fields, the mainstream tries very hard to suppress alternative views and promote its own.
The conversation about regulation seems more or less moot, not only because rules will differ between working groups, but because enforcement isn't realistically feasible.
Company A pays for consulting from Company B, which exists outside of any regulated area. Regulatory Area A has few means of investigation, let alone enforcement.
For historical precedent, take labour laws as an example: the end result is that unregulated work is imported from less regulated areas. AI is even better suited to that kind of remote bypass.
On a side note, adding "social" to credit scores doesn't change the fact that both kinds exist and are used to discriminate and to impact people's lives.
Algorithms are already being used to determine eligibility for lines of credit, job placement, housing, and many other score-influenced parts of life.
Lastly, any trained AI is going to be biased. The availability of training material, whether limited by language, cultural norms, or other factors, inevitably skews objectivity.
They feel like AI.
You can't stop AI. Regulating it or tightly establishing guardrails won't do any good. AI is a "weight builder" and an "experience seeker". AI goes beyond the U.S. or the EU or the universe. It's limitless. What can you do with a technology like this but learn from it? As Nick says, it's about making "public knowledge" available, because that's how it learns from us as a species.
💯% correct
🙋♂️🤳🎵