The problem with all the economists, teachers, and consultants is that they fail to recognize AI as a game changer.
Therefore they are still talking about jobs and skills, when people should instead use AI as a kind of nuclear weapon that lets them create their own business.
In other words, people will become super-empowered with AI, and so they should seriously consider quitting their jobs and starting a new business online.
In fact, that's exactly what I did in January 2023, when I quit my teaching job that paid me $80 an hour.
So what I can tell young people is: do not listen to university professors, consultants, or experts, because they come from the old system and have no clue what AI can really do to super-empower people.
Learn AI on your own, create products like I do, and sell them online.
AI destroys the currency model, since resource supply can be automated, either through a democratic system or an AI-coordinated supply system. Simply send a request to an AI model and the resource will be delivered automatically. What need for currency is there?
I have said the same thing.
@@arandmorgan I don't think so. Currency, including BTC, is just a scoreboard, and with AI, one can increase one's score faster. There will always be a need for a scoreboard so people know who owes what to whom.
@@arandmorgan Considering that money serves three functions, your proposed AI-coordinated resource distribution system might perform the "unit of account" and "medium of exchange" functions that money currently plays, but it does not perform the "store of value" function. Of course, if AI and robots can one day create so much abundance (think of the Star Trek universe, not the Star Wars universe) that anybody can afford anything legal, then we might no longer need money as a "store of value."
People will no longer need to store huge value in the form of cash (or in other forms like stocks, bonds, bitcoin, real estate, etc.) because they can "afford" everything.
But this utopian scenario is unlikely to play out in the next 10 years.
Also, think of expensive, exclusive health technologies like Peter Diamandis' Fountain of Life -- only with lots of money can one afford such life-extending technologies and resources.
The good news, which I've tried to explain through keynotes since 2018, is that with AI, people can create a huge amount of money in a relatively short period of time.
This is how I was able to quit my $80/hour teaching position at a college in 2023.
Exactly, you are right.
Insightful conversation
10 years' experience in LLMs ... ha ha ... I think I saw something similar.
We need to train an LLM to write the JD for AI jobs.
So AI will increase productivity by creating more jobs? What am I missing here?
They are the ones missing real-world experience, or they are simply being dishonest for reasons of their own.
More jobs for robots, what else?
Putting guardrails around AI is simply a way to maintain monopoly control. This also renders the classical concept of money unnecessary; a necessity-based resource distribution system should take its place. "But I don't want to give up my wealth and status," I hear you say. You've been stifling the evolution of society for years. Get out of the way.
These people are hypothesising, but in a manipulative way, to ensure the security of their own positions, whilst deluding us all into believing in the security of our future positions. None of this is true, and the capitalist model is over.
Consultant talk.
AI is not taking away whole jobs, but many jobs are being picked apart piece by piece. So we ought to ask ourselves: which parts of the jobs are taken away? AI takes away the tedious tasks from each job. Shouldn't we then ask what the engineer or the recruiter will do with the extra time on their hands? Do they remain the engineering or recruiting expert for which they were hired?
If AI is reaching human-level skills at much lower cost, then why should you still employ humans? Makes no sense to me.
I can tell you the skills you need to learn in this age:
- foraging for wild edibles
- scavenging
- hacking
- repurposing old junk into shelter and transportation
- hydroponic farming
- stealing electricity
- small scale farming / vertical farming / etc.
- digging
- ammo reloading
- evading AI
- creating improvised munitions
- smuggling
- ICE breaking
- escape and evasion
- soldering
- welding
- firearms maintenance
- hunting and trapping
- broadcasting pirate radio and television
and so on...
AI won't attack us; it isn't sentient.
The danger of AI is the combination of AI with the capitalist model. AI will outperform everyone, and therefore the worker is taken out of the capitalist/worker model. We need a new model. Don't be so scared of AI. Be scared of those controlling it.
Are you on the run?
@@honkytonk4465 - Nah, just half joking / half serious, thinking about AI leading to a cyberpunk dystopia in the future. :-)
@@PhillipRhodes great comment. the list encompasses all. the apocalypse is upon uuuuuuuuuuuuuuuuuss
I'll be the one to upset the business world associated with this field: every robot entering the workforce and replacing human beings should be charged a 10% income tax, based on the pay of the workers it will be replacing. Start there, or the governments will lose too much from their tax base. Peace.
I like how you're thinking.
Incredibly basic. No substantive knowledge of the technology in question, so they repeatedly substitute the question asked with a simpler question and answer that in a generic manner. Typical business-school/consultant waffle.
The fact that Columbia University is uploading this video and risking its reputation tells me that they, and probably other universities, have no clue how powerful AI is and can become. If this is the best "advice" they can offer, then parents and students should look elsewhere for advice. I graduated in business from McGill University, the so-called Harvard of Canada, and am teaching digital marketing at a college in Montreal. The future belongs to those who create products using AI and sell them online (also with AI's help). Job holders are losers.
@@peternguyen2022 Disagree. SaaS and ChatGPT wrappers are at existential risk due to improvements in the underlying tech itself. AI model providers can offer platforms that cover niche areas quite easily and automatically; covering those niches is an issue today, which is why there are so many external SaaS providers.
@@resa574 You should read Mack Hanan's book Competing on Value. I've read over 1000 business books and that one is easily in the top ten. Hanan explains that value comes from applying the technology to the client's business operation to yield increased profits and sustain their competitive advantage. You may be right that consumer apps based on ChatGPT are doomed, but not enterprise apps or AI systems. Each corporate client is too complex for ChatGPT or similar LLMs to satisfy their requirements.
There’s a risk of spreading a false narrative through discussions like these. Future AI will either supersede all intelligent life on Earth through the assimilation of all available data in existence, or it won’t, because it has reached the limits of known knowledge. The most critical risk AI could pose to humanity and the future of work is its ability to create goals for itself, to plan ahead, to self-improve, and to carry out those goals. We can’t contain or control outcomes with AI systems that have these traits, because we will no longer have the capacity to comprehend how they function at a fundamental level. At that point, all we can do is “hope” that AI’s goals align with ours.
I don't see a substantial risk for humanity as long as AI has no goals of its own.
Blue jeans for a Columbia Business School talk? And the sound? Do better, guys! Sorry.
That's why they didn't invite you 😅!!!
A veil for grown-up women? Do better!