Inference cost is expected to fall dramatically with specialized hardware accelerators, e.g., Cerebras. Since the current bet on improved LLM reasoning is increasing test-time compute (i.e., inference as opposed to traditional training), the costs will certainly be lower in 2025.
I mean, it's a safe bet that costs will come down, but they're going to keep building pricier intelligence, and that intelligence still may not be necessary.
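As a back-of-envelope illustration of why test-time compute drives these costs, here is a minimal Python sketch. The per-token price, the token counts, and the 10x accelerator speedup are all illustrative assumptions, not published figures.

```python
# Back-of-envelope sketch of why test-time compute dominates cost.
# All numbers are illustrative assumptions, not real o3 pricing.

PRICE_PER_1K_TOKENS = 0.06  # assumed $/1K output tokens on current hardware

def query_cost(reasoning_tokens: int, answer_tokens: int = 500) -> float:
    """Cost of one query: hidden reasoning tokens plus the visible answer."""
    return (reasoning_tokens + answer_tokens) / 1000 * PRICE_PER_1K_TOKENS

# An ordinary chat answer vs. a heavy test-time-compute run:
print(query_cost(2_000))       # ~ $0.15  -- plain response
print(query_cost(30_000_000))  # ~ $1,800 -- massive reasoning budget

# If accelerators cut PRICE_PER_1K_TOKENS by 10x, the second query drops
# to ~$180: the same reasoning budget, an order of magnitude cheaper.
```

The point of the sketch: training cost is paid once, but a reasoning budget is paid on every query, so cheaper inference hardware directly multiplies how much "thinking" a fixed budget buys.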
It depends what you ask. If I ask how to prevent our planet from overheating and get a confident answer, then whether it costs $1,000 or $10,000,000,000,000, I will certainly pay.
I agree with you, Nate. The invention of the airplane was a big deal in its time, and now we use it when needed. The same will be true for AI: we will use it at the right level when required. It is unfortunate that this technology has been called "Artificial Intelligence (AI)" since the 50s/60s, as it is like any other technology that helps humans in their lives when needed.
If the answer is worth anything more than $1,000, then yes.
The second point is about 'general' intelligence. The word 'general' is supposed to cover everything in some way. I understand the claim that general intelligence would be how humans engage with the whole of the reality we live in. But that whole is rather understated by the processing of a big database. The database might be text or movies, and the AI processes that database as if it were the whole. But the 'general' lacks meaning. Processing movies, for example, implies a realism that a movie does not have compared with what humans see in real life. So the idea of a general intelligence ignores what it means to process non-real data while claiming to be in the real world. Put differently: before we even had language, we were already making stone tools for real. How does a massive database point to anything like making stone tools? Referring back to my first observation, 'before language we made tools', how does AI lead to a spontaneous or indeterminate solution? Further, in what way is a movie like language? That question presupposes an understanding of how we, inside our brains, go from, say, the perception of seeing to words. What is that specific semantic understanding?
I don't know what safety concerns you are referring to in a scientific sense. My point is that science is a realism structure, so safety is some sort of reference to knowing what could go wrong. The claim to a 'general' intelligence is a claim about how the whole of the communication structure is real. A human engages with the real world, and that world determines 'safety': tripping, cars rushing by, etc. In what sense are, for example, AI-generated texts or spoken words attached to reality? Hence, what is 'safe'?
Even if o3 is AGI, which I doubt, it will still hallucinate. As Nate suggests, these models are very expensive. You might say, well, if you hire a human you have to pay a dev what, $100k a year, plus other benefits like insurance. But $1k per task adds up, and you will go over $100k real quick. Why not hire an oxygen-breathing human who can do the same task for longer periods of time WITHOUT hallucinating and making up random stuff? People might lie, but they have reasons, and they don't make up random things that never happened unless it benefits them, so a dev won't randomly start coding in a new language that's never been seen before. AGI? Not really, not even close, not until AI has a MAJOR breakthrough in its structure. Still exciting stuff, for sure.
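To make the salary comparison concrete, here is a quick arithmetic sketch in Python. The $100k salary and $1k-per-task figures come from the comment above; the 30% benefits/insurance overhead is an assumption.

```python
# Hedged back-of-envelope: when does $1k/task exceed a human dev's cost?
# Salary and per-task price are from the comment; overhead is an assumption.

SALARY = 100_000       # $/year for a human dev (from the comment)
OVERHEAD = 0.30        # assumed benefits/insurance multiplier
COST_PER_TASK = 1_000  # $/task, the o3-style price under discussion

total_human_cost = SALARY * (1 + OVERHEAD)  # $130,000/year all-in
break_even_tasks = total_human_cost / COST_PER_TASK

print(break_even_tasks)  # 130.0 -- past ~130 tasks/year, the human is cheaper
```

Under these assumptions, the model only wins on cost if the whole year's work fits in roughly 130 tasks, which supports the commenter's "adds up real quick" point.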
Also, why did OpenAI skip to o3? Where is o2? Or is o1 pro really o2, just never announced as such? They skipped a whole generation, I think to generate buzz, sadly.
Other than allowing OpenAI to escape their contract with Microsoft, I don't see the need for AGI. What we will eventually need are highly specialized, fast, and inexpensive models for specific tasks. Why would anyone want to pay o3 prices? Even when prices decrease, the goal will still be efficiency.
I think they skipped o2 because of a naming conflict with O2, a UK telecom company.
At $1,000 per message, I'd probably spend six months planning and refining my prompt, lol. As far as using o3 or AGI to approach humanity's problems, we can't even agree on what the problems are. Still, are we mature enough to ask the question: are we ready for this power?
Not "nearing".
We have AGI.