@@abhijeethvijayakumar6513 Hey, one video idea failed a bit, since I grew unhappy with the research papers I was trying to cover. That, plus a new startup I am co-founding, plus the fact I am having a tropical workation for a few months and only have a laptop with me. I should be back on track in early 2025!
I work in AI, but mostly doing implementations, so objectively for me AI is a tool. Configure the grounding data, construct the prompt, call the API, then do something with the data, usually just throwing it in a string and pushing it to the user. I'm sure you are already disappointed.

The reason I can do this is because AI will deliver 100% exactly the same result every single time, just like a basic math function like min or abs might.

From the beginning we have established, knowingly or for some unknowingly, that the goal of AI research is to make an intelligence that is as human-like as possible. That's the targeted outcome. Think of this progression as being like increasing the resolution of images/video until we get a quality that is virtually indistinguishable from reality. 8K image density might be just that for the human eye, for example; we know that we are seeking an informational density in the pixels which makes it impossible to tell the difference between reality and image.

Yet when we look at modern LLMs, we somehow lose track of the development history of selectively targeting the model which creates the most perfect *simulation* of what an intelligent person sounds like. Now we are surprised when people actually start questioning the resolution of AI's simulation of sounding like a human (the goal, remember) and whether the simulation is conscious or not. Nobody is game to ask whether an 8K image *is* the thing the image depicts, because it makes you sound like you have a stone-age primitive mind.

While this debate rages on as to whether the map is the terrain, I'll continue using this tool to replace pieces of code that are too rigid with textual inputs with LLM inputs.

So, while I'm not agreeing or disagreeing with the 'humans are different' position, I will suggest that making a probability-dictionary simulation and saying it's 'conscious' is completely up to the individual. But if you do, I'll also expect you not to want to be photographed, lest it steal your soul ;)
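(The pipeline this comment describes, grounding data in, prompt out, string to the user, can be sketched roughly as below. This is a minimal illustration, not anyone's actual production code: `build_prompt`, `call_llm`, and `answer` are hypothetical names, and `call_llm` is a stand-in for whatever real completion API you use, where you would pass `temperature=0` to get the near-deterministic behavior the comment mentions.)

```python
# Sketch of the "configure grounding -> construct prompt -> call API ->
# push string to user" pattern. All names here are illustrative.

def build_prompt(grounding: str, question: str) -> str:
    """Construct a prompt that pins the model to the supplied grounding data."""
    return (
        "Answer using only the context below.\n"
        f"Context:\n{grounding}\n\n"
        f"Question: {question}"
    )

def call_llm(prompt: str, temperature: float = 0.0) -> str:
    # Placeholder for a real API call; with an actual client you would
    # forward `temperature` so that 0.0 gives repeatable output.
    return f"[model answer for: {prompt.splitlines()[-1]}]"

def answer(grounding: str, question: str) -> str:
    prompt = build_prompt(grounding, question)
    reply = call_llm(prompt, temperature=0.0)
    # "Just throw it in a string and push to the user":
    return f"Answer: {reply}"
```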
I probably did not make it very clear myself, so I wanted to clarify, regarding the question of how AI will actually become conscious (or reach AGI; at this point the two might be synonymous to some people). I think it's not gonna be LLMs, at least not the current gen.

Whatever you use them for now, current-gen LLMs certainly feel like a tool. As you say, you can lower the temperature to zero and get the same result very reliably. Then it's certainly just a tool.

But the debate is important for whatever the next (or next-after-next) gen will bring us. We're just scoping the terrain of the future without properly seeing all the details in it. And this debate is going on while new discoveries are made seemingly every month, which makes it all the more fascinating!
@@FutureIsAmazing569 I agree, BTW. Personally I think we will achieve "consciousness" by whatever definition in a way that's really, really debatable, similar to the debate right now, but I think that a non-LLM form will leave the two sides in a more philosophical position. Similar to the 'doesn't have a soul so doesn't count' position some people have, and which you correctly picked up on by targeting the Penrose position.

I actually think Penrose, BTW, is more right than wrong, and that at some point we will find something 'intrinsically different' about humans, and I have no basis by which I can explain why I believe this in a way that's rationally acceptable to anyone else. However, I think that research on human consciousness around the 'death event' is more likely to help bridge the connection.

So, while I fully agree with the 'meat/silicon computer' argument, I also think that science just hasn't found the missing pieces yet. Like in cosmological physics, where we invented a model that's 90%+ non-existent to balance a model that's probably not right, I think we'll keep discovering things in the Penrose space.
good balance
👍👍👍👍
I've always found it fascinating that people held this tenet.
@@glamdrag Well, it's kinda natural and intuitive for anyone subscribing to scientific materialism. I think I first heard it from Sam Harris.
Hello..
It's been a month now; no recent videos have been uploaded.
Is there any issue bro?