He would have been a witness to the data being used to train it. He didn't so much blow the whistle as position himself to testify on behalf of copyright holders against OpenAI. There's my two cents for you.
A really important point that you glossed over a little too quickly was the chatbot's answer to "what are the bottlenecks?". The first point is one of the most important: we do not know how the human brain operates, so it is hard to copy. This is basically what nobody in tech wants to talk about, especially because they have no chance of tackling the problem; it is completely outside their expertise. You also cannot throw money at it by hiring external consultants. Nobody knows how the brain works, which means nobody knows what has to happen to make an artificial self-conscious entity capable of interacting with the world like we do. Not even like a cat or dog would. Nobody wants this point stressed, because it would pop their nice stock bubble. It would ruin their nice grift.
It is still insane to me that a tech company under such heavy litigation for copyright infringement and other problems is allowed to just do whatever it wants.
To me, artificial intelligence should be called "simulated intelligence", because that's what it does. A chatbot can fool you into believing that it's a person, that it's got emotions, can reason, and so on. Just like a Tesla with Full Self-Driving can fool you into believing it is driving well: it lulls you into a sense of complacency and then, boom, you have an accident. Sorry, the AI that is doing this dictation is horrible; it's not fooling me at all…
@@lopezb You are referring to the Turing test - can a machine 'pass' as a human? If it can, it means one of two things: either the machine is smart or the humans are dumb.
Very interesting! I've learned so much watching this channel. I can't believe that people at the highest levels can be public scammers like this. It is so disappointing!!
Just don't get left behind because you don't understand things. Imagine all those who had the chance to buy Bitcoin but didn't because they had these bubble theories.
So the short answer is you can't believe a word these guys say. They are either straight-up lying and gaslighting with their words, or they are showing a general lack of broader education and understanding of what the words mean. I think it's both.
Tech bros like to pretend they're simple and interested in solving the world's problems, but they're just as bad as any of these people. Give them the power and they're horrible. Facebook, Twitter, even Gabe Newell and Valve are ruining lives with gambling addiction.
Not really, no. There has to be some metric. OpenAI and Microsoft have a contract, so they are negotiating the terms of that metric; public opinion is irrelevant.
I'm a cognitive psychologist and was an academic for 20 years. To me, AGI will likely be unable to master implicit processes and the effects of all the "backroom" processing that goes on outside of our awareness. Human cognition is littered with many adaptations to get around problems, e.g. the bottleneck of memory, as well as all the heuristics, metacognition and metamemory. Plus, "as intelligent as a human" makes me laugh. To be human is to be an idiot half the time doing things you know are a bad idea but still do them because you want to. What is the AI equivalent of a drunk person or craving a drug? AGI seems to have the goal of a human who functions at 100% all the time. To be human is to be imperfect. Think about driving on the highway/motorway on a long journey. "Highway hypnosis" refers to suddenly being aware that you were not aware of the last few minutes of driving. Attention, and awareness of conscious awareness, are not continuous. They ebb and flow. We are continuously conscious but not continuously aware of our consciousness. Maybe, one day, these will be an emergent property of AI but I doubt it.
Not only that, but you are entrusting said development of AGI to imperfect individuals who demonstrate the worst aspects of human traits and psychological behavior in current society. We humans are flawed but brilliant in many ways. This tech isn't going to replace us. If they manage to succeed they'd end up building something like Ultron, and if that is their goal they should be stopped.
AGI is irrelevant, IMO. All that really matters is whether it can replace people in jobs. If you have an accountant bot, a project manager bot, a property manager bot, a sales bot, a dev bot, etc., then they have achieved their goal. The question is what society looks like when this has been achieved. Where does the average person fit in when they're largely irrelevant and redundant to the asset holders?
I think many at the top would like to replace humans with machines to save labor costs. I don't ever see that happening. First of all, people like to deal with other people. Secondly, there is no way a bot can capture every nuance of human emotion or behavior.
When you start with the wrong/immoral objective, i.e. maximising profits, the effects/end results are quite predictable. In all philosophy arising from human existence, the advice was: start with doing good, and the benefits/money will follow. There is a huge difference between the approaches, but the narrative of the dominant press is that there is no difference. In the long run, the profit maximisers will fail because their approach degrades what is human, as we are clearly witnessing now in the era of insane American capitalism.
Yeah, I'm almost always on the same page as Chris, but this is a bit of a straw man. I'm 100% in agreement that FSD is a grift, but unlike FSD, what OpenAI, Anthropic, Google, etc. have built is incredibly useful. It's also dirt cheap compared to what useless FSD costs, and none of these companies make the promises that Elon makes. Some media and content creators do, but if anything, you could argue the top AI companies themselves are downplaying it. I pay twenty bucks a month to OpenAI and the same to Anthropic. It's more than worth the money, but if that's what we're calling a scam today, then you might as well call cloud storage, streaming services, etc. all scams.
Shit, if that's the case, then we're not just redefining AGI, we're outright selling its soul to capitalism. This kind of financial benchmark for something as transformative as AGI feels like we're moving away from advancing humanity or solving global challenges and straight into a profit-driven dystopia. AGI was supposed to be about breaking barriers in knowledge and technology, not hitting a quarterly earnings goal. If true, this leaked definition tells us everything we need to know about the priorities behind closed doors. It's not about thinking machines… it's about thinking dollars.
One thing I've found so far that AI does, and that I appreciate, is its ability to bypass the BS in articles and get to the point. For example, let's say I google a recipe for biscuits. Once I choose a link, I usually have to wade through five paragraphs on the history of biscuits, along with some personal interjections by the author. AI removes all the annoying BS and gets right to the recipe.
The most famous equation in history is E = mc^2. These clowns are trying to replace it with C = AI^2, where C = cash. I really hope people aren't falling for the hype here. A large language model chatbot doesn't somehow mean we have taken a significant step closer to Artificial General Intelligence. Such a thing is a qualitative leap forward, and all that's really going on here is an attempt to squeeze as much money out of the AI hype as they possibly can.
Unless you're a tech bro noshing on certain famous people's dicks, then no, neither you nor I fall for the hype, unless it keeps its promises and doesn't do the usual Altman "we need $100 billion to reach AGI" routine. Like a lot of life in this godforsaken year, it's all a grift.
I do need to admit that you and your AI assistant provide great content (not being ironic) 😊 The honesty of someone who fact-checks his own content on the spot :) Commendable, Chris.
I am a hardcore AI user, specifically in the software engineering domain. I don't think we'll ever go back to the day before LLMs. The vast amount of unstructured data that can be "understood" by LLMs and subsequently abstracted is beyond what any human, or even group of humans, can accomplish. As LLM architectures get better, or even morph into other kinds of algorithms, the cost per token will dramatically decrease. Even if LLMs stop improving from here onwards, industries will have to rethink how things are done.
Absolutely mate! As a senior software engineer, I agree. The cat will never go back into the bag. The current AI boom is most definitely NOT a "scam bubble", those who know, know.
@@andybaldman probably not an AGI, but calling it "just" a tool, is an understatement. It helps create complex engineering plans, ponder abstract subjects, question philosophical implications of your design, and more.
It's time to make sure LLMs never reach $100 billion a year, since it doesn't matter how smart it is. I wonder if Skynet made enough cash to be considered AGI?
Tweaking and tuning are better words for making AI work, since AI is by definition a 'self-learning' system. It's all about gathering immense amounts of data and doing statistics with it. The output is pretty much random.
This is funny, thanks for covering this. One thing: is it all the kids talking about AI, or all the boomers? Also, IMO there is no such thing as AGI because there is no such thing as *natural* general intelligence. Humans don't have general intelligence. The real meaning of "general intelligence" would be something akin to what Immanuel Kant studied in the Critique of Pure Reason. But there's no reason to think such a thing could ever physically exist. It's a hypothetical that helps us think about what intelligence, consciousness and knowledge are.
Musk might well try the same thing he did with the SolarCity buyout by Tesla. xAI will purchase Musk's Tesla share options in exchange for xAI shares. At that point Musk has xAI shares, steps down from Tesla, and absolves himself of all the promises he'd made. He washes his hands and moves on to his next grift: xAI.
Excellent piece Chris. A bit of reality checking is a good thing to put into perspective some of the hysteria and hyperbole surrounding AI and AGI at the moment.
We will never have true AGI. I have first-hand knowledge that thinking machines are hard to motivate to do desired tasks on time and as instructed. Investors will not tolerate supporting an AGI system that watches YouTube videos and cat memes all day.
HAHAHAHAHA, you are an AGI living in a simulator. Look at the robot run by a human brain that the Chinese made: that is literally a human-made AGI. If you did not know, now you know. That will only increase over the next two years. ASI is coming; AGI is already here, and ASI existed when our universe was created. Check "negative time" in quantum physics, then come back to me.
Microsoft has been reselling us the same OS for 30 years, with occasional UI updates to keep us focused on relearning Windows and not on the fact that the basic modules once included with the OS have been turned into subscription services. And we are forced to upgrade, making older computers unsupported and unusable in the security-threat atmosphere of the internet.
Microsoft invested billions into OpenAI, so it's logical that they don't want to stay stuck with the current version just because OpenAI says the new version is better than humans in all the common internet tests, which it is. And despite subscriptions, AI models are currently losing huge amounts of money. The reason they had to somehow define AGI in their deal is that OpenAI didn't want to grant Microsoft rights to their AGI if they achieve it in the future. So, nothing shady here.
I think when they do have LLMs that can easily generate $1 billion in profits, they will probably keep those models to themselves. They likely don't want to be competing with every person trying to make the same billion dollars by coming up with the same solutions, which would make it harder to patent those new breakthroughs.
Well, I'm not surprised, and cynically it makes sense. If AGI "can" do what humans can, or better, you want to quantify that outside of laboratory testing. So you want to measure how much impact it has in the real world in a meaningful way, and we already measure everything in money. The exact amount and other stipulations are details, but if a system earns $100bn in profit for work it does... well, lots of people would need to do lots of work to earn that much profit.
You don't understand what they really mean. What they mean is: if an AI system can figure out how to make $1 billion and get to work doing it autonomously, then we must have achieved something that can plan, think, and execute ideas on its own, and frankly that's a good AGI.
Idk about thinking robots, but I use ChatGPT as a med student and it is pretty good at answering questions and explaining the pathophysiology of diseases.
It is not a scam. It is their business definition, and they will market it in their own way; the universally accepted technical definition will not change. If their business results do not match the technical results, then we can all see that and challenge it. Hence the partnership is not a bubble in general, as millions have benefitted from having a product that helps them with knowledge. It is not perfect, but it is definitely better than 70%.
AGI will be achieved when a computer can finance, design and build another computer that is cleverer and better than itself. To do this, a computer needs a wallet with some starting capital and the ability to outsmart hedge funds. Once it's earned $100bn, we will know it's succeeded.
If AI is outperforming humans at making money and at getting hired for jobs, would that convince you that it is AGI? That sounds a lot like $100B in profit, especially after paying the datacenter bills that go on top of that.
Interesting definition of AGI that Microsoft and OpenAI have, but I think it needs to be modified, for example AI making $100 billion in profit by itself performing a range of tasks. If $100 billion in profit is made by humans using AI as a tool, that's not necessarily AGI. Maybe difficult to define what kinds of tasks are considered AGI level, but anyway.
We are way too immature for all this tech. A teacher I spoke with tells me AI is being used to do homework, etc. So much for learning. That's just how the oligarchs want it.
Here is why Tesla self-driving will never work reliably: Tesla cars rely on stereo disparity, but it only works if the obstacle in front of the car has distinct visual features. If the obstacle is, for example, a uniformly painted smooth wall or a similar object, the car will be unable to determine the distance to the obstacle or its shape. There are many examples of Tesla accidents where this took place: for instance, when a truck with a trailer fell on its side across the lane in which a Tesla was driving, and the Tesla did not see the trailer and drove directly into it at full speed.
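The geometry this comment describes can be sketched in a few lines. This is a toy illustration with made-up numbers, not Tesla's actual pipeline: stereo depth is focal length times baseline divided by disparity, and a featureless surface gives the matcher no valid disparity to divide by.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in metres from pixel disparity: Z = f * B / d.

    The disparity comes from matching a patch between the left and right
    camera images, which only works when the patch has distinct features.
    """
    if disparity_px <= 0:
        # A uniform wall: every candidate patch looks identical, so the
        # matcher returns no (or an arbitrary) disparity and depth is undefined.
        raise ValueError("no valid match: featureless surface gives no disparity")
    return focal_px * baseline_m / disparity_px

# Textured obstacle: the matcher finds a 20 px shift, so depth is recoverable.
print(stereo_depth(focal_px=1000, baseline_m=0.12, disparity_px=20))  # 6.0 m
```

The same division also shows why distant obstacles are hard even with texture: disparity shrinks toward zero with distance, so a small matching error becomes a large depth error.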
Nice to see you agree with Elon about the problem with OpenAI and Microsoft, and that you are coming round on FSD. Yes, all the videos are not on a track or a mapped-out location, so it's really improving.
Yes, these current valuations are hard to justify, but AI will likely have the same effect as the internet: its general-purpose use and the efficiencies it will bring across all businesses and sectors will likely generate trillions of dollars in profits over the next few decades. The error is in investing early in the businesses investors think will "supply" or "create" the technology. Cisco still hasn't recovered from its 2000 peak valuation, yet Apple and Amazon are up 100x since then, because they learnt how to globally monetize the technology.
OpenAI recently put out test results suggesting that o3 has AGI. Not saying that's my opinion, just what I've seen from the AI community. Some even go as far as "this might be the day that goes down in history".
To be honest, in a world where everything is reduced to a dollar value, this doesn't seem too strange a definition. If they manage to create revenue with AI to the tune of Y billion dollars, I would argue those systems are no longer just robots or tools, but seem generic enough, for all intents and purposes, to be called AGI. They are replacing or augmenting high-paying jobs at that point, ones that people would have had to study for.
Defining a technical achievement via its revenue-generating capability is, of course, silly. The "leak" will almost certainly be found to be erroneous. There's a lot of fear around AI, which makes fertile ground for rumors and conspiracy theories. I would be far more concerned about OpenAI if Musk were partnering with them rather than suing them.
Actually, I think their definition is pretty smart if you think about it. AI is an entirely different type of intelligence than human intelligence. It may never be as good as humans in some areas, but it is already better than most humans in many areas.

Take a step back and ask yourself, "Why do we look for a definition of AGI?" If your goal is to make an AI so intelligent that everyone feels they need to use it in their processes if they want to compete, then a dollar value actually makes sense. At that point, AI is "good enough" that people accept it, not necessarily as a replacement for humans, but as something helpful and useful to everyone because it can do as well as or better than humans at many tasks. If you decide that is your definition of AGI, then for me personally, we've already achieved it. I use AI in pretty much everything I do, and it makes me more productive in every aspect.

I personally have a different definition of AGI, but I think their reasoning makes sense for their purposes. My definition of AGI is an AI that can learn on the fly, adapt to any situation in the same way a human could, and generalize the information it absorbs.
Humans are both obedient and rebellious: they can be assigned a task and still think of alternatives, and I think that is a definition of intelligence. The risk is conflict, but also compromise. You'd also need a witness so there is responsibility, and law is communal agreement. Nobody is purely rebellious or purely obedient; your children might look rebellious to you, but if you know how, you can actually reconcile. And the obedient child may just be waiting to explode.
Does the o3 model actually do things differently from o1, or did OpenAI simply throw more training data at the problem? If the fundamentals of the model don't change, don't expect anything groundbreaking when you finally get your hands on it.
I'd say English is buggy in suggesting AGI > human GI (HGI), as the A means artificial, or phony, or fake. So, by definition, if AGI is really attained, how is it still artificial? We actually mean "human created", i.e. "artificial" in this context means "man-made". We think of humans as not man-made, as we have no idea how to design humans from scratch. But we know how to fool each other, and ourselves - meaning our own intelligence is often phony (so confusing).
$100 billion in profits, whoaaaaaah!!!!! I guess if something earns you $100 billion in profits, you have to take it seriously and grant it the title of intelligence. FYI, when ChatGPT says "That's a tough one", it's code for "duh, I don't know".
So incredibly short sighted: robocabs (Waymo) are already beating Lyft in San Francisco in terms of market share. Do you really think the earth is flat?
I don't get how anyone thinks we are close. There is currently no model for how it would be made, and those with enough wishful thinking just believe it's going to magically appear with enough data. 😂 My prediction: nobody alive today will see true AGI. Unless we agree to this new awesome definition based on what revenue an updated LLM can make.
I understand your train of thought - measuring software by quantifying its outcomes. It's like measuring a certain vehicle's software by its accidents per mile or its insurance payouts (test drive v13 😉 YW, don't hate).
One thing I think might be more important than people think is the gulf between a pre-trained system and a learning system. At the moment, LLMs are pre-trained and the company burns a f*ckton of computing resource training it... then you ask the trained model questions. But brains learn - through their physical structure training - all the time. So a learning AGI that works like the human brain in that regard, using today's technology (silicon chips that don't dynamically change their physical wiring), would require a gigaf*ckton of energy all the time. Like, what, dozens of data centers burning Great Lakes of diesel constantly to run a single learning AGI?
When two businesses make a financial contract, then yes indeed, they will specify financial terms and time frames. That's the way a business agreement works. Microsoft agreed to invest $10B in OpenAI, providing most of their funding, in exchange for exclusive access to the technology for a period of time, after which OpenAI would be free to sell products to others. The contract stipulates that once AGI is reached, Microsoft will no longer be the exclusive customer.

Obviously, financial people want a financial benchmark as security against their $10B financial risk. To avoid all the various technical debates and vagaries, the finance people can most easily be satisfied by specifying $100B in profit from their $10B investment, after which OpenAI and Microsoft will part ways. Then OpenAI is free to play the entire marketplace, and Microsoft will hopefully have ramped up their internal AI engineering by then, with OpenAI's help until that point.

It's also common sense that if $100B of profit is happening, then you probably ARE outperforming human beings, no matter what your ideology. Don't worry: no engineers were harmed in the specification of this contract.
I think the definition is there so that their contract can come to an objective close. Microsoft is basically saying that if you make us this amount of money, we don't really care what it's called; you've paid us back for our venture capital.
Like you I totally despise Elon’s politics but many have learned the hard way: don’t bet against Elon. The easy stuff is done, the hard stuff is on the way, for miracles, give him 2 weeks.
OpenAI operates through recognition of complex patterns, leveraging vectorization of words. The underlying logic, driven by billions of weights and biases, remains somewhat opaque, much like our incomplete understanding of the human brain. Discussing AGI at this early stage of AI development often feels either like hype aimed at inflating the technology, or a form of wishful thinking by those eager for its realization but reluctant to acknowledge how far we still have to go.
The biggest twist for me was learning that brain cells are being integrated into robotic circuits to manage basic controls like moving forward: simple movements from very few brain cells. It makes me think we'll get to a digital brain interface through ORGANIC brain cells before we get to electronic-hardware AI consciousness. AI just needs massive processing power, but the PLASTICITY needed is something only organic matter has for the foreseeable future.
Perhaps the 'artificial' intelligence goal will morph into 'approximate' intelligence, which is probably the best we can get for 100 billion anyway. Isn't human intelligence approximate at best and often far from even that?
When we speak about human intelligence, it is not about comparing AI to human X or human Y. We are referring to the principles behind the brain in general. Everything around you was created by human consciousness, not by AI. AI, as a technology, creates an illusion of intelligence, which is why it has solved zero problems - absolutely none. This is because real AI does not exist at present.
President Muskrat reminds me of the industrialist Jean-Baptiste Emanuel Zorg in the movie "The Fifth Element." Looks like him, acts like him; hopefully he gets his just reward, like him... someday.
I think it will be sub-prime car loans that cause a bust, like the sub-prime home loans. But yeah, AI is a major bubble; every time it gets shaky I see hundreds of new "buy AI shares" articles on YouTube :P
no system that fails to learn from every interaction is a general intelligence
not surprised by their Rtard definition based on profits but ty for the info and the chuckle
Our current way of doing 'AI' means we really can't get AGI, because none of the terminals learn; the model only learns when the central server is updated, which never happens as a result of its terminals experiencing the world.
Tbh I think most people know it's a scam, but we know to take opportunities. NFTs were scams; these are quantum scam times. Bitcoin is the daddy of them all. In fact, anything that runs on electricity. My electricity bill has jumped way up.
People keep asking, why is my electricity bill so expensive now 😅🤦♂️
@@realchris Chris, I want to move my very small brokerage account to you. Please help.
Chris, your honest commentary is really soothing for me as a boomer granny who worked in IT for 30 years. When I first got on the Internet in 1994, only seven percent of Internet users were female. I watched in horror as every single decision about our approach to IT, search engines, etc. was exactly the opposite of what it should have been. They privatized Internic, which was the best run operation I had ever experienced and immediately service delivery was negatively impacted. It was as though every decision was made to “jail break” the Internet rather than to create fair, autonomous information delivery systems. I need to write a book about it because it would illustrate exactly how we got to this situation with AI and all its derivatives. Thank you for your thoughtful analysis.
How would you begin writing this book? Good idea, btw. Women need role models in IT. I was in a traditionally female profession (nursing), which I enjoyed, but I applaud women who entered non-traditional roles.
@@MartineReed Please do write your book!
What does the % of women have to do with that? I haven't watched the video yet, but it feels unrelated, or in fact a counterpoint, as the original, more free-form Internet had fewer.
The channel "Then and Now" has a video titled "How the Internet was Stolen". I highly recommend it.
@@waterwomaninFL It's not that hard. You write down every key event, then you organise them into chronological order, then you link the ones that are causally related. And then you pick one you are interested in writing about, and you write the number of words you have decided on for each day; I worked on 1,500 words a day minimum.
Then, when you have finished, you pick the next topic that you are interested in. You don't start at the beginning and work to the end.
When you have finished, you need a GOOD editor you trust; they will chop away 50% or more to make it readable and engaging. Then it goes to a proofreader for the spelling and grammar, and then you publish it - best get some help with that. You want to be a guest on every related podcast and TV show, and you want it reviewed by as many people as possible.
The key to writing a book is obsessive consistency - set your daily minimum and just write it; it does not matter how bad it is, just write it. You can always rewrite it.
Greed celebrates the crops while forgetting the seeds.
@@bobdillaber1195 Verily!
We are living in the age of the perfect storm: maximum stupidity matched up with maximum scamming.
That is a perfect description! Between President Musk's and lil brother Trump's IQs, they barely hit triple digits.
Agree, but I also consider such an environment to have some of the best opportunities, also. The MAGA and crypto movements, e.g., are unsustainable, IMO, and will eventually collapse. But when, and how? My gut says they collapse in unison, as both are based on similar delusions, albeit one in finance and the other in politics. Interesting times.
Bitcoin to the moon 🚀🥳
@@jlvandat69 Unfortunately, when they collapse they take everyone else down with them.
Grumpy old man said it!!!!
It’s not surprising that a bunch of businessmen and tech bros think this way.
Companies are walking the line on the FTC's definition of puffery. Defining your own criteria just moves the line.
Too bad Trump has fired the best FTC director/staff America has ever had... no one will be looking out for the middle class, which was the Republicans'/billionaires'/corporations' plan the entire time. The middle class voted for their own destruction.
This is why tech wants to abolish the FTC. Then there is no line except the lines the tech bros snort.
This is the only difference between this recession and the last one. Regulators and authorities are playing loose with definitions because they don't want to actually admit how bad things are.
@@sugadre123 This one is extra special. They've thrown away any need for measurable meaningful benefit other than it making the company a ton of money. They could equally have been working on plastic turds if it can bring in that revenue and classify it as AI.
Yes, you are right; I would remain highly suspicious about this also. They lack a governance structure for AGI, and there is no accountability or oversight of the technology.
Thank you for all you bring. You educate me each time I watch one of your posts. I have come to trust your point of view and really appreciate you.😊
I have been dying to find out what was so salacious at Open AI that ended up with the whistleblower dying (no pun intended)
He would have been a witness to the data being used to train it. He didn't so much blow the whistle as prime himself to testify on behalf of copyright holders against OpenAI. There's two cents for you.
A really important point that you glossed over a bit too quickly was the chatbot's answer to "what are the bottlenecks?". The first point is one of the most important: we do not know how the human brain operates, so it is hard to copy it.
This is basically what nobody in tech wants to talk about, especially because they have no chance of tackling the problem; it is completely outside their expertise. You also cannot throw money at the problem by hiring external consultants.
Nobody knows how the brain works, which means, nobody knows what has to happen to make an artificial self-conscious entity capable of interacting with the world like we do. Not even like a cat or dog would.
Nobody wants this point to be stressed, because it would pop their nice stocks bubble. It would ruin their nice grift.
It is still insane to me that a tech under such heavy litigation for copyright infringements and other problems is allowed to just do whatever it wants to do.
The whistleblowers committed "suicide"..... and nothing was done!
What is the bottleneck to Artificial General Intelligence?
We can't define 'intelligence' 🤣
To me, artificial intelligence should be called "simulated intelligence", because that's what it does. A chatbot can fool you into believing that it's a person, that it's got emotions, can reason, and so on. Just like a Tesla with Full Self-Driving can fool you into believing it is driving well: it lulls you into a sense of complacency and then, boom, you have an accident. Sorry, the AI that is doing this dictation is horrible; it's not fooling me at all…
@@lopezb You are referring to the Turing test: can a machine 'pass' as a human? If it can, it means one of two things: either the machine is smart or the humans are dumb.
@ 😋
But stupidity... we have that down pat.
@@lopezb Simulated is another word for artificial...
Our biggest hurdle is "greed". Humanity really needs to get over that one.
Followed closely by its friend stupidity
Asking Chat GPT to define itself is a simple but pretty darn definitive way to answer the question of what “it” is! 😂
God forbid a ceo ask customers what they want.
Guess you havent been following the video game or movie industry
Very interesting! I've learned so much watching this channel. I can't believe that people at the highest levels can be public scammers like this. It is so disappointing!!
Profits, not revenue. So $100 Billion in Profit would require a huge revenue flow, maybe $500 - $800 Billion...
All these bubbles are going to burst eventually! Crypto, AI, etc.
@@rioriggs3568 BUT FOMO FOMO FOMO!!
Tesla......Space X........X........and now XAI..........the sooner those bubbles burst, the better.
I've thought about it a bit more; I'd argue the crypto bubble isn't a bubble, just a Ponzi with zero intrinsic value.
Just don't get left behind because you don't understand things. Imagine all those who had the chance to buy Bitcoin but didn't because they had these bubble theories.
@@eltongumbira3826 fake money is fake
AGI is a useless label; its definition has no bearing on how powerful or useless a particular AI is.
So the short answer is you can't believe a word these guys say. They are either straight up lying and gaslighting with their words, or, they are showing their general lack of a broader education and understanding of what the words mean. I think it's both.
Yes, it's both. It's just like 99% of politicians 🤷♂️
Tech bros like to pretend they're humble and interested in solving the world's problems, but they're just as bad as anyone else. Give them the power and they're horrible. Facebook, Twitter, even Gabe Newell and Valve are ruining lives with gambling addiction.
money, that’s all this is about.
They couldn't care less about what anything means or what it can actually achieve or not.
not really no
there has to be some metric
OpenAI and Microsoft have a contract, so they are negotiating the terms of that metric.
public opinion is irrelevant
@@memegazer This is just word salad.
It's ironic we call them pyramid schemes, because the pyramids around the world actually survive.
I'm a cognitive psychologist and was an academic for 20 years. To me, AGI will likely be unable to master implicit processes and the effects of all the "backroom" processing that goes on outside of our awareness. Human cognition is littered with many adaptations to get around problems, e.g. the bottleneck of memory, as well as all the heuristics, metacognition and metamemory. Plus, "as intelligent as a human" makes me laugh. To be human is to be an idiot half the time doing things you know are a bad idea but still do them because you want to. What is the AI equivalent of a drunk person or craving a drug? AGI seems to have the goal of a human who functions at 100% all the time. To be human is to be imperfect.
Think about driving on the highway/motorway on a long journey. "Highway hypnosis" refers to suddenly being aware that you were not aware of the last few minutes of driving. Attention, and awareness of conscious awareness, are not continuous. They ebb and flow. We are continuously conscious but not continuously aware of our consciousness. Maybe, one day, these will be an emergent property of AI but I doubt it.
I work the drive thru at a local Taco Bell, and I fully endorse your comment.
Not only that, but you are entrusting the development of AGI to imperfect individuals who demonstrate the worst aspects of human traits and psychological behavior in current society. We humans are flawed but brilliant in many ways. This tech isn't going to replace us. If they manage to succeed, they'd end up building something like Ultron, and if that is their goal they should be stopped.
It will solve those key "Bottlenecks" and beyond. Matter of time & adoption (training)
NPC = goes rogue = AI = AGI. or, duh 🙄
@@opsvixen Very clear 👀
AGI is irrelevant imo. All that really matters is whether it can replace people in jobs. If you have an accountant bot, project manager bot, property manager bot, sales bot, dev bot, etc., then they have achieved their goal. The question is what society looks like when this has been achieved. Where does the average person fit in when they're largely irrelevant and redundant to the asset holders?
I think many at the top would like to replace humans with machines to save labor costs. I don't ever see that happening.
First of all, people like to deal with other people. Secondly, there is no way a bot can capture every nuance of human emotion or behavior.
"We have achieved AGI once we start makin a buncha money" lmfao we might be cooked
Is this the modern equivalent of alchemists researching the impossible task of converting lead into gold? Hmmm.
That's actually a very good analogy!
When you start with the wrong/immoral objective, i.e. maximising profits, the effects/end results are quite predictable. In all philosophy arising from human existence, the advice was: start with doing good, and the benefits/money will follow. There is a huge difference between the approaches, but the narrative of the dominant press is that there is no difference. In the long run, the profit maximisers will fail because their approach degrades what is human, as we are clearly witnessing now in this era of insane American capitalism.
Also, I think the definition in the contract is a definition of convenience, so that the contract is measurable and actionable.
Yeah, I’m almost always on the same page as Chris, but this is a bit of a straw man. I’m 100% in agreement on the FSD grift, but unlike FSD, what OpenAI, Anthropic, Google, etc. have built is incredibly useful. It’s also dirt cheap compared to what useless FSD costs, and none of these companies make the promises that Elon makes. Some media and content creators do, but if anything, you can argue the top AI companies themselves are downplaying it. I pay twenty bucks a month to OpenAI and the same to Anthropic. It’s more than worth the money, but if that’s what we’re calling a scam today, then you might as well call cloud storage, streaming services, etc. all scams.
Shit, if that's the case, then we’re not just redefining AGI, we’re outright selling its soul to capitalism. This kind of financial benchmark for something as transformative as AGI feels like we’re moving away from advancing humanity or solving global challenges and straight into a profit driven dystopia. AGI was supposed to be about breaking barriers in knowledge and technology, not hitting a quarterly earnings goal. If true, this leaked definition tells us everything we need to know about the priorities behind closed doors. It’s not about thinking machines....... it’s about thinking dollars.
Yup! We are doomed. 😂😢
Copy paste
Many of the world's problems could be solved with money, political will, and hard work.
One thing I've found so far that AI does that I appreciate is its ability to bypass the BS in articles and get to the point. For example, let's say I google a recipe for biscuits. Once I choose a link, I usually have to go through five paragraphs of the history of biscuits along with some personal injection by the author. AI removes all the annoying BS and gets right to the recipe.
The most famous equation in history is E = mc². These clowns are trying to replace it with C = AI², where C = Cash. I really hope people aren't falling for the hype here. A large language model chatbot doesn't somehow mean we have taken a significant step closer to Artificial General Intelligence. Such a thing would be a qualitative leap forward, and all that's really going on here is an attempt to squeeze as much money out of the AI hype as they possibly can.
Unless you're a tech bro noshing on a certain famous person's dick, then no, neither you nor I fall for the hype unless it keeps its promises and doesn't pull the same Altman "we need $100 billion to reach AGI" line. Like a lot of life in this godforsaken year, it's all a grift.
Thank you for saying that
The richest people in our country define success by how much money they can make? Shocker.
I do need to admit you and your AI assistant provide great content ( not being ironic ) 😊
The honesty of someone who even fact-checks his own content on the spot :) Commendable, Chris.
I am a hardcore AI user, specifically in the software engineering domain. I don't think we ever go back to the day before LLMs. The vast amount of unstructured data that can be "understood" by LLMs and subsequently abstracted is beyond what any human, or even group of humans, can accomplish. As LLM architectures get better, or even morph into other kinds of algorithms, the cost per token will dramatically decrease. Even if LLMs stop improving from here onwards, industries will have to rethink how things are done.
That doesn’t mean it’s AGI. It’s just a tool for coding.
Absolutely mate! As a senior software engineer, I agree. The cat will never go back into the bag. The current AI boom is most definitely NOT a "scam bubble", those who know, know.
@@andybaldman Probably not AGI, but calling it "just" a tool is an understatement. It helps create complex engineering plans, ponder abstract subjects, question the philosophical implications of your design, and more.
The bottleneck in this pipe dream is human existence.
Chris I appreciate your content.
It's time to make sure LLMs never reach $100 billion a year, since it doesn't matter how smart it is. I wonder if Skynet made enough cash to be considered AGI?
"Tweaking" and "tuning" are better words for making AI work, since AI is by definition a 'self-learning' system. It's all about gathering immense amounts of data and doing statistics with it. The output is pretty much random.
This is funny, thanks for covering this. One thing: is it all the kids talking about AI, or all the boomers?
also IMO there is no such thing as AGI because there is no such thing as *natural* general intelligence. Humans don't have general intelligence. The real meaning of "general intelligence" would be something akin to what Immanuel Kant studied in Critique of Pure Reason. But there's no reason to think such a thing could ever physically exist. It's a hypothetical question to help us think about what intelligence, consciousness and knowledge are.
It’s a scam. Looks like he was just going bankrupt, and he’s gotta try and figure out a way to save his business between AI and Trump's DOGE.
Musk might well try the same thing he did with the SolarCity buyout by Tesla: xAI purchases Musk's Tesla share options in exchange for xAI shares. At that point Musk holds xAI shares, steps down from Tesla, and absolves himself of any of the promises he'd made. Washes his hands and moves on to his next grift, xAI.
Once AGI creates 100 billion in profit. Not from selling the AGI but from the AGI making 100 billion.
Printing money...? :-)
And EXACTLY like with Theranos, there were plenty of people calling out the problems with AI all along.
Excellent piece Chris. A bit of reality checking is a good thing to put into perspective some of the hysteria and hyperbole surrounding AI and AGI at the moment.
We will never have true AGI. I have first-hand knowledge that thinking machines are hard to motivate to do desired tasks on time and as instructed. Investors will not tolerate supporting an AGI system that watches YouTube videos and cat memes all day.
;-)
HAHAHAHAHAH you are an AGI living in a simulator. Look at the robot run by a human brain the Chinese made; that is literally a human-made AGI. If you did not know, now you know. That will only increase over the next two years. ASI is coming, AGI is already here, and ASI existed when our universe was created. Check "negative time" in quantum physics, then come back to me.
Nice deflection attempt skynet.
😂
Best comment of the week - upvoted for years
Key word,”Artificial”. LOL
Another keyword “intelligence”.
Based on that Walmart achieved AGI a while ago.
Microsoft has been reselling us the same OS for 30 years with occasional UI updates to keep us focused on relearning Windows, and not on the fact that the basic modules once included with the OS have been turned into subscription services, and that we are forced to upgrade, making older computers unsupported and unusable in the security-threat atmosphere of the internet.
That's absolutely not true. They have to sell new versions; someone has to pay for maintaining the updates, and so on and so forth.
Microsoft invested billions into OpenAI, so it's logical that they don't want to stay stuck with the current version just because OpenAI says the new version is better than humans on all the common internet tests, which it is. And despite subscriptions, AI models are currently losing huge amounts of money. The reason they had to somehow define AGI in their deal is that OpenAI didn't want to grant Microsoft the right to their AGI if they achieve it in the future. So, nothing shady here.
When a scientist utters the "ohhhh sh!ttt", that's when we reached AGI.
They already utter that since researchers barely understand generative AI as it is.
@@ExecutionMods Haha... more the ones who know, and who know when Pandora's box has been opened and there's no reversing it.
I think when they do have LLMs that can easily generate 1 billion in profits they will probably keep these models to themselves. They likely don't want to be competing with every person trying to make the same billion dollars coming up with the same solutions which would make it harder to patent these new breakthroughs and solutions.
Well I'm not surprised and cynically it makes sense.
If AGI "can" do what humans can, or better, you want to quantify that outside of laboratory testing.
So you want to measure how much impact it has in the real world in a meaningful way.
We already measure everything in money.
The amount and other stipulations are details, but if a system earns $100bn in profit for work it does... well, lots of people would need to do lots of work to generate that much profit...
This definition makes sense in a plutocracy. Because they’ve already decided that having money means you know what you’re doing.
You don't understand what they really mean.
What they mean is that if an AI system can figure out how to make $1 billion and get to work doing it autonomously, then we must have achieved something that can plan, think, and execute ideas on its own, and frankly that's a good AGI.
The $100 billion is a data point. One can accept it if that earning proves real-world service value. There's no point in shifting goalposts.
Idk about thinking robots but i use chat gpt as a med student and it is pretty good with answering questions and explaining the pathophysiology of diseases
I'd say AI is a better teacher than all the science and physics professors I have ever had.
Big companies "struggling with AI"... meanwhile Vedal is working on Neuro-sama and Evil (two "unpredictable" AI "models" for amusing people...).
Sunshine attack 😭😭 I can’t
It is not a scam; it is their business definition, and they will market it in their own way. But the universally accepted technical definition will not change. If their business results do not match the technical results, we can all see that and challenge it. Hence it is not a bubble in general: the partnership has benefited millions by providing a product that helps them with knowledge. It is not perfect, but it is definitely better than 70%.
AGI will be achieved when a computer can finance, design and build another computer that is cleverer and better than itself. To do this, a computer needs a wallet with some starting capital and the ability to outsmart hedge funds.
Once it's earned $100bn, we will know it's succeeded.
That’s already happening. Just slowly.
And we're back to good old .. "money makes the world go round.." (also have a feeling we've never left that world)
$explain$everything
Las Vegas Proverb
If AI is outperforming humans at making money and at getting hired for jobs, would that convince you that it is AGI?
That sounds a lot like $100B in profit, especially after paying the datacenter bills that go on top of that.
AI assist is available in Office 365 to activate in the USA.
Interesting definition of AGI that Microsoft and OpenAI have, but I think it needs to be modified: for example, the AI making $100 billion in profit by itself, performing a range of tasks. If the $100 billion in profit is made by humans using AI as a tool, that's not necessarily AGI. It may be difficult to define what kinds of tasks count as AGI-level, but anyway.
I remember the term "Strong AI" being used.
But if I recall, that was talking about a level we are way off from currently.
We are way too immature for all this tech. A teacher I spoke with tells me AI is being used to do homework etc., so much for learning. That's just how the oligarchs want it.
AI is good with words but cannot actually form concepts or relate to people. Words and techniques with words but no actual intelligence.
Here is why Tesla self-driving will never work reliably: Tesla cars rely on stereo disparity, but that only works if the obstacle in front of the car has distinct visual features. If the obstacle is, for example, a uniformly painted smooth wall or a similar object, the car will be unable to determine the distance to the obstacle or its shape. There are many examples of Tesla accidents where this took place: for example, when a truck with a trailer fell on its side across the lane in which a Tesla was driving, and the Tesla did not see the trailer and drove directly into it at full speed.
I don't think of AI meaning Artificial Intelligence. I think of it as Automation and Inference. And most of what is available is that.
The OpenAI-Microsoft contract would've needed a dollar metric.
Nice to see you agree with Elon about the problem with OpenAI and Microsoft, and that you are coming round on FSD. Yes, all the videos are not on a track or mapped-out location, and it's really improving.
Yes these current valuations are hard to justify but AI will likely have the same effect as the internet, the general purpose use of it and efficiencies it will bring across all businesses and sectors will likely generate some trillions of dollars in profits over the next few decades.
The error is in investing in the businesses early on who investors think will “supply” or “create”the technology. Cisco still hasn’t recovered from its 2000 peak valuation yet Apple and Amazon are up 100X since then, because they learnt how to globally monetize the technology.
I think the AI that United Healthcare uses to deny claims is approaching AGI
For me, AGI can be achieved tomorrow. But the definition of $100B in profits, that is funny 🤣🤣🤣
Show me the money!!!!!! :)
OpenAI recently put out test results suggesting that o3 has AGI. Not saying that's my opinion, just what I've seen from the AI community. Some even go as far as "this might be the day that goes down in history".
To be honest, in a world where everything is reduced to a dollar value, this doesn't seem too strange a definition. If they manage to generate revenue with AI to the tune of Y billion dollars, I would argue they are no longer just robots or tools, but generic enough for all intents and purposes to be called AGI: they are replacing or augmenting high-paying jobs at that point, ones you would have had to study for.
Defining a technical achievement by its revenue-generating capability is, of course, silly. The "leak" will almost certainly be found to be erroneous. There's a lot of fear around AI, which makes fertile ground for rumors and conspiracy theories. I would be far more concerned about OpenAI if Musk were partnering with them rather than suing them.
This is just OpenAI and Microsoft. I stopped using OpenAI's products a long time ago. Better AI is being developed by other companies.
Actually, I think their definition was pretty smart if you think about it. AI is an entirely different type of intelligence than human intelligence. It may never be as good as humans in some areas, but it is already better than most humans in many areas.
Take a step back and ask yourself "Why do we look for a definition for AGI?" If your goal is to make an AI that is so intelligent that everyone feels they need to use it in their processes if they want to compete, then a dollar value actually makes sense. At that point, AI is "good enough" that people accept it, not necessarily as a replacement for humans, but as something that is helpful and useful to everyone because it can do as well or better than humans in many tasks.
If you decide that is your definition of AGI, then for me personally, we've already achieved it. I use AI in pretty much everything I do, and it makes me more productive in every aspect. I personally have a different definition for AGI, but I think their reasoning makes sense for their purposes.
My definition of AGI is an AI that can learn on-the-fly, adapt to any situation in the same way that a human could, and can generalize the information it absorbs.
It's not a definition, but a good indicator.
Damn that Apple chatbot is silky smooth, sounds like a real person talking back to you, even has those "hmmm" and "that's a tough one"
Love your sources
Humans are both obedient and rebellious; they can be assigned a task and think of alternatives. I think that is a definition of intelligence.
The risk is conflicts but also compromises.
You'd also need a witness so there is responsibility.
And law is communal agreement.
Nobody is purely rebellious or purely obedient. Your children might look rebellious to you, but if you know how, you can actually reconcile with them.
And that obedient child is just waiting to explode.
Does the o3 model actually do things differently from o1 or did OpenAI simply throw more training data at the problem?
If the fundamentals of the model don't change, don't expect anything groundbreaking when you finally get your hands on it.
I’d say English is buggy in suggesting AGI > human GI (HGI), as the A means artificial, or phony, or fake. So by definition, if AGI is really attained, how is it still artificial? We actually mean "human created", i.e. "artificial" in this context means "man-made". We think of humans as not man-made, since we have no idea how to design humans from scratch. But we know how to fool each other, and ourselves, meaning our own intelligence is often phony (so confusing).
False Self Driving is right
Self driving fools
RIDICULOUS IF THEY BASE "AGI" ON $100B PROFIT RATHER THAN THE ABILITY OR CAPABILITY OF THE AI MACHINE.......
Thanks for exposing hype. Very important work.
$100 billion in profits, whoaaaaah!!!!! I guess if something earns you $100 billion in profits, you have to take it seriously and grant it the title of intelligence. FYI, when ChatGPT says "That's a tough one", it's code for "duh, I don't know".
So incredibly short sighted: robocabs (Waymo) are already beating Lyft in San Francisco in terms of market share. Do you really think the earth is flat?
I don't get how anyone thinks we are close. There is currently no model of how it would be made, and those with enough wishful thinking just assume it's gonna magically appear with enough data. 😂
My prediction: Nobody alive today will see true AGI. Unless we agree to this new awesome definition based on what revenue an updated LLM can make.
"Future tech" has become a permanent sector. Always promising, but never arriving. Like a "going out of business" store.
I understand your train of thought: measuring software by quantifying its outcome... it's like measuring a certain vehicle's software by its accidents per mile or its insurance payouts (test drive v13 😉 YW, don't hate).
One thing I think might be more important than people think is the gulf between a pre-trained system and a learning system. At the moment, LLMs are pre-trained and the company burns a f*ckton of computing resource training it... then you ask the trained model questions. But brains learn - through their physical structure training - all the time. So a learning AGI that works like the human brain in that regard, using today's technology (silicon chips that don't dynamically change their physical wiring), would require a gigaf*ckton of energy all the time. Like, what, dozens of data centers burning Great Lakes of diesel constantly to run a single learning AGI?
When two businesses make a financial contract, then yes indeed, they will specify financial terms and time frames.
That's the way a business agreement works.
Microsoft agreed to invest $10B in OpenAI, providing most of their funding, in exchange for exclusive access to the technology for a period of time, after which OpenAI would be free to sell products to others. The contract stipulates that once AGI is reached, then Microsoft will no longer be the exclusive customer.
Obviously, financial people want a financial benchmark as security against their $10B financial risk.
To avoid all the various technical debates and vagaries, the finance people can most easily be satisfied by specifying $100B in profit from their $10B investment, at which point OpenAI and Microsoft will part ways.
Then OpenAI is free to play the entire marketplace, and Microsoft will hopefully have ramped up their internal AI engineering by then, with OpenAI's help until that point.
It's also common sense that if $100B of profit is happening, then you probably ARE outperforming human beings, no matter what your ideology.
Don't worry. No engineers were harmed in the specification of this contract.
$100 billion a year in profit is just the financial agreement with Microsoft, not their actual public stance on the definition of AGI.
just a legal obligation
I think the definition is there so that their contract can come to an objective close. Microsoft is basically saying that if you make us this amount of money, we don't really care what it's called; you've paid us back for our venture capital.
Like you I totally despise Elon’s politics but many have learned the hard way: don’t bet against Elon. The easy stuff is done, the hard stuff is on the way, for miracles, give him 2 weeks.
OpenAI operates through recognition of complex patterns, leveraging vectorization of words. The underlying logic, driven by billions of weights and biases, remains somewhat opaque, much like our incomplete understanding of the human brain. Discussing AGI at this early stage of AI development often feels either like hype aimed at inflating the technology, or like a form of wishful thinking by those eager for its realization but reluctant to acknowledge how far we still have to go.
The biggest twist for me was learning that brain cells are being integrated into robotic circuits,
managing basic controls like moving forward: simple movements, very few brain cells.
Makes me think... we'll get to a digital brain interface through ORGANIC brain cells before we get to electronic-hardware AI consciousness;
massive processing power is needed for AI, and the PLASTICITY needed is something only organic matter has for the foreseeable future.
Perhaps the 'artificial' intelligence goal will morph into 'approximate' intelligence, which is probably the best we can get for 100 billion anyway.
Isn't human intelligence approximate at best and often far from even that?
If you're a tRump supporter.
When we speak about human intelligence, it is not about comparing AI to human X or human Y. We are referring to the principles behind the brain in general. Everything around you was created by human consciousness, not by AI.
AI, as a technology, creates an illusion of intelligence, which is why it has solved zero problems - absolutely none. This is because real AI does not exist at present.
President Muskrat reminds me of the industrialist Jean-Baptiste Emanuel Zorg in the movie "The Fifth Element." Looks like him, acts like him; hopefully he gets his just reward like him... someday.