"I'd say 50 to 100 years is more accurate." JOConnor313, I am quite studied in the areas surrounding Artificial Intelligence. MIT is not at all the only place that research is being done. In fact, most of the work at MIT (which I have personally seen first-hand) is focused on developing practical applications, not on theoretical research. There has been a huge amount of progress in theoretical research in the past few years.
of course there is always the opinion that there is no such thing as free will, but we are already deep into philosophy by that point, which is by definition pure conjecture
Instead of silicon, computers will be based on diamond-based chips, because diamonds can conduct electricity, and the processing power will go beyond what we can imagine today, since overheating won't be a problem.
this is one of the only guys where i can understand everything he says on such highly intellectual topics. but then again, it's because he speaks in theory and not technicality. i need to go back to school :(
Yeah... about artificial intelligence, I think understanding more exactly how intelligence works is what we need to know right now. The processing power already exists; it's just waiting for the theories to come up.
@frostheat246 The problem of A.I. isn't hardware. You can throw any amount of qubits (quantum computing bits) at a given problem, but if you can't define the problem, you can't solve it. Most people miss this. We simply don't know enough about the brain to model it. We can outperform it in most areas concerning speed and accuracy; that is not the problem. The problem is that we can't properly define intelligence. The understanding isn't there yet, and therefore the software isn't there yet.
"The brain is simply a circuit that adapts without changing it's circuitry outside of it's memory." The brain depends on memory for function therefore memory is as much a part of the brain as any of these genetic circuits you worship. These integrated circuits make information more readily interpretable by the memory. Have you ever somebody alive with absolutely no memory? No, because brains don't work without. These little circuits (that all work together) merely ease processing for the memory.
The theta waves of the universe represent the infinite intelligence of all that there is or that may be. It manifests itself as arrays of frequencies that are in essence conduits of families of frequencies that make up communities of families of frequencies which weave the end product. We are receptors of that intelligence that's funneled through our beings, as we are a part of it all.
A.I. was a huge hit!! The film won five Saturn Awards, including Best Science Fiction Film. It was nominated for Academy Awards for Best Effects, Visual Effects and Best Music, Original Score! It had a budget of 100 million, BUT HAD A GLOBAL GROSS OF over $235.9 million, ranking 16th worldwide... "not a hit"... that movie was so sad.
All I can say is the idea of artificial intelligence is a good one, but we must be careful not to make something like Skynet from the Terminator movies. That idea for the chip is a good one, and hopefully engineers will keep it in mind.
The thing is, technology HAS to eventually reach the complexity of the human mind, since it's an epoch above it in evolution. The mind doesn't have any shape or form; it's just a complex interaction between chemicals that results in a very strong form of information.
They say that computers will never truly understand. One thing wrong with that: in order for a computer to function, it has to understand in the first place. The problem with it all is that people are trying to write it in one code/one program. It needs to be written in separate codes/programs that intercommunicate with one another.
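The intercommunicating-programs idea above can be sketched in a few lines. This is a toy illustration only; the "perception" and "reasoning" modules and their messages are made up for the example:

```python
from queue import Queue
from threading import Thread

# Two cooperating "programs" (here, threads) that exchange messages
# through queues instead of being one monolithic program.
perception_out = Queue()
results = Queue()

def perception(stimuli):
    for s in stimuli:
        perception_out.put(s.upper())   # crude "feature extraction"
    perception_out.put(None)            # end-of-stream marker

def reasoning():
    while (item := perception_out.get()) is not None:
        results.put(f"decided on {item}")

t1 = Thread(target=perception, args=(["light", "sound"],))
t2 = Thread(target=reasoning)
t1.start(); t2.start(); t1.join(); t2.join()
print(results.qsize())  # prints 2: one decision per stimulus
```

Each module only knows the message format, not the other module's internals, which is the separation the comment is arguing for.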
@NikeGolf118 Yes, but the exponential growth of computing power did not start with Moore's law. It (Moore's law) is only the latest paradigm of computing. Before that, it was transistors, before that, vacuum tubes, and before that mechanical computing. All grew exponentially in power. The Next paradigm could be quantum computers (as Michio Kaku states) or 3 dimensional nano tech based substrates. Either way, the ultimate computing potential will not be reached for quite a long time.
I took an Astronomy course with this man at CCNY...absolute legend.
I can't get enough of this Physicist, he's the best!
Wow, amazing he's holding a conversation with a tv in 2007
Tech TV went defunct in 2004, so this was filmed before then.
You are not a nerd, you are something pretty special my friend, and your venture is one which will be valued. I myself am a mother of two beautiful souls who will see this universe with science, nature, feeling, awareness and dust-free eyes..... and you are already a big part of the great explorers of our amazing world with ourselves placed xxx
step your steps sweet warrior and believe in you....
one love
seedance xx
i keep imagining michio kaku in the future as one of those detached heads in a jar from futurama
he was my professor for an astronomy class and i have to say he is one of the best professors out there. makes the class really interesting.
wow you're really lucky to have had him as a professor
Me watching this after GPT-4, Midjourney v5.1 and Bing A.i.
Same
Same
Michio has good points.
Society requires an individual to obtain a business degree in order to be in management;
therefore, a business degree should be for the betterment of society and the environment.
For us to be intelligent, focusing on ethical and moral practices
will not only positively contribute to society and the environment, but
there is also a high potential for large profit long term. Type 1.
Dr. Michio Kaku is probably, after Albert E. and Stephen Hawking, the best physicist of the last 100 years. His theories make so much sense and his ideas are so revolutionary that for me personally he is the best physicist.
this host said his dad takes a long time to walk across a room. His dad was watching this broadcast at home, so proud of his son with the checkered shirt, but now with a single tear trickling down his face.
the fact is that the greatest, most untapped resource is human ingenuity. Our capacity to create whatever we can imagine seems endless, and Mr. Kaku just doesn't seem to believe that. Pah... Some genius he is.
This guy is terrific - at once enlightening and entertaining...
i like michio. he knows a lot about physics and he is very smart but he also has a sense of humour and he knows how to talk to people.
@gradyiscool
thanks for taking the time to explain this. It all comes down to this: computers think faster than us but cannot problem-solve better than us. It is much harder for a computer to work out a solution to a situation unless a human pre-programs it to do so first. That is the point.
this guy always excites me about the future
I would love to study in his class. This man is such a brilliant man.
michio kaku is incredible. the only trouble with artificial intelligence though is that we don't know that it doesn't require something more. we are only just now creating lifelike programs in computers, and it took billions of years to get intelligent life through evolution. plus, we are our only specimens for intelligence, we don't have anything to compare ourselves to to make generalizations. it is my opinion that free will and human thought cannot be described by current physics alone
I completely agree. Kaku is the main person, out of anyone, I'd want to be taught by. There are many, but Kaku over anyone.
That's the view I'm taking on the future of robotics and A.I., as an undergrad researcher in the field. Rodney Brooks of MIT, a very well known researcher in this field, also expresses a view similar to yours. The quote from him that I have is 'the distinction between us and robots is going to disappear', from 2002.
Computers are opening the doors of learning and exploration which we could never have dreamed of even 50 years ago. These discoveries will be used to better all of our lives. They go towards providing ample food, cures for diseases, more efficient uses of land and fuels, conservation, weather prediction and warning, increased manufacturing, entertainment, etc... etc...
Yes, there is also the potential for abuse and misuse - but the benefits of technology far outweigh the risks.
Michio Kaku (b. January 24, 1947) is an American theoretical physicist, specializing in string field theory, and a futurist. He is a popularizer of science, host of two radio programs, and a best-selling author.
I could listen to this guy for hours; he's fascinating. Whether he's right or wrong about certain things, I don't care; he's still obviously a brilliant man.
I understand it quite well. Let me rephrase that last comment. Memory is the only part that significantly changes. It is important, because that is where your brain stores all of its data, which you seem to think comes from the brain rewiring itself. Now, the brain does simplify itself; when you are young, you have far more synapses than when you are old, because your brain simply removes ones that are useless or rarely used.
I couldn't believe my ears when Doctor Kaku admitted that computers could actually have enough consciousness to do something like take over the world.
Questions: 1: What about quantum tunneling? What happens when one of the atoms working in a quantum computer quantum tunnels? 2: How can quantum computers be stable when the vibrations from a car on a highway a long way away can mess them up? 3: For AI, could we not (in a long time, of course) map a human brain synapse for synapse and simulate it in a computer, while the computer calculates using the simulation? It is incredibly inefficient, but we would still have super fast CPUs for number crunching.
Not to speak ill of an excellent public physicist, but Moore's Law is only a section of the greater scale. Yes, in about 20 or so years the silicon-based computer will be at its utmost limit, but we've always managed to replace the old technology in the past, from vacuum tubes to magnetic disks to flash drives. We will without a doubt do the same again.
Moores Law will never collapse!
we are far greater within ourselves than we will ever know and so no machine or pc could ever match the human mind shape and form...
accepting who we are and where we are placed first are the steps we have to step first....
Moore's Law is "the number of transistors on an integrated circuit for minimum component cost doubles every 24 months", meaning that in 2 years you can put 2X as many transistors on the same area of silicon. When you shrink the transistor you can get higher clock speeds, but look at the new Core 2 vs. the Pentium D: all tests show that a 3.0 GHz Core 2 is 60% more powerful than a 3.0 GHz Pentium using the same kind of transistors. How we wire a chip is more important than how small a transistor is.
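The doubling rule quoted above can be turned into a toy projection. The starting transistor count and year below are illustrative round numbers, not exact chip specs:

```python
# Moore's Law as stated above: transistor count doubles every 24 months.
def transistors(start_count, start_year, year, months_per_doubling=24):
    doublings = (year - start_year) * 12 / months_per_doubling
    return start_count * 2 ** doublings

# Illustrative: ~291 million transistors in 2006, projected to 2010.
# Four years is two doublings, i.e. a 4x increase.
print(transistors(291e6, 2006, 2010))  # 1164000000.0, about 1.16 billion
```

The comment's real point survives the arithmetic: the law says nothing about how well those transistors are wired together, which is why a Core 2 beats a Pentium D at the same clock speed.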
@KladionicaCity just to know that we may see something like this one day makes me more excited than anything
Hey no problem. Glad I could help.
I like the way Michio explains science and technology.
Let me make this a bit clearer. A computer program typically does not change itself; a learning program will simply take advantage of the fact that it can store data in files on the hard drive to "remember" how to do something. Now, there is one main part of the brain that studies show does change and grow: the hippocampus, the part of the brain believed to manage short-term and long-term memories. The brain is simply a circuit that adapts without changing its circuitry outside of its memory.
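The point about a learning program storing data rather than rewriting its own code can be sketched like this. It is a minimal toy; the filename and the run-counting "skill" are invented for the example:

```python
import json
import os

# A "learning" program: its code never changes, only the data it
# persists between runs.
MEMORY_FILE = "memory.json"  # illustrative filename

def recall():
    if os.path.exists(MEMORY_FILE):
        with open(MEMORY_FILE) as f:
            return json.load(f)
    return {"runs": 0}

def learn():
    memory = recall()
    memory["runs"] += 1  # "experience" accumulates in data, not code
    with open(MEMORY_FILE, "w") as f:
        json.dump(memory, f)
    return memory["runs"]

print(learn())  # increases on each run while the program text stays fixed
```

This is the distinction the comment is drawing: the program "remembers" more each run, yet its circuitry (the code) is untouched.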
I didn't say it completely changes its circuitry, but its ability to change its own circuitry is what makes it functional. What you call its "specific function" is an orchestra of functions working together to simultaneously think, remember and act, among many other things. "Hardwired" implies a specific purpose. The brain has no purpose, only use. Its purpose becomes whatever it does, which is never a specific thing.
They're also working on 32-nanometer chips that should be on the market in 2009. Moore's Law has more steam left in it than most people realize.
Also, IBM's Roadrunner supercomputer has broken the 1 petaflop barrier. Most estimates put human brain capacity around 20 petaflops. It took 20 years to go from one MFlop to a GFlop, 13 years to go from GFlop to TFlop, and 10 years to go from TFlop to PFlop. We could see exaflop speeds (the far end of the proposed brain-power spectrum) by 2017 or earlier.
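The shrinking intervals in that timeline (20, 13, 10 years) can be extrapolated with a naive geometric model. The constant-shrink-ratio assumption is mine; the 2008 date for Roadrunner's petaflop run is the only outside fact used:

```python
# Intervals between FLOPS milestones quoted above:
# MFlop -> GFlop: 20 yrs, GFlop -> TFlop: 13 yrs, TFlop -> PFlop: 10 yrs.
intervals = [20, 13, 10]

# Assume the ratio between successive intervals stays roughly constant.
ratio = (intervals[1] / intervals[0] + intervals[2] / intervals[1]) / 2
next_interval = intervals[-1] * ratio  # projected PFlop -> EFlop gap
pflop_year = 2008                      # IBM Roadrunner broke 1 PFlop in 2008

print(round(pflop_year + next_interval))  # 2015
```

The toy model lands at 2015, consistent with the comment's "2017 or earlier" (history turned out slower, but the shrinking-interval reasoning is what the comment is describing).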
it's great to listen to Michio Kaku; he's quite a personality.
M. Kaku is the next Carl Sagan! Awesome man!
@cmxsevenfoldxmc It was pretty good. He's very knowledgeable of his craft, of course. My only complaint was that the class was over 100 students in a gigantic lecture hall, so there were no interpersonal connections. Otherwise, absolutely awesome. He's considered one of the best professors on campus. He always ends class with videos... usually with himself in them. LOL.
I just feel excited to see the new generation of computers. I wish I could see one machine that is capable of having thoughts.
1. If an organism evolves and stops having a specific feature or something else controlled by its DNA, it doesn't mean it no longer has the DNA for it; it's just never expressed anymore. That's what most of our DNA is: leftover data. That's what I meant.
2. The brain takes a very long time to change itself, if it does at all. It is most likely hardwired (which computers are not) but is just good at adapting without changing its circuitry. In fact, I'm trying to train a neural net to do this.
I can't wait 20 years; my machine is getting better from day to day.
And all those Art Bell interviews, which is where I first heard him.
He really is right about that. A butterfly is much smarter than an Asimo, and it does it with just about 200,000 neurons, whereas Asimo uses billions of transistors.
Biological brains are quite ingeniously designed, elegant and efficient.
He also believes that every possible alternative-history exists right now, in the same space we're in but in a different dimension.
He's actually said there are dinosaurs in our houses right now (and everything else), but we can't see them because they're in a different dimension. Wrap your heads around THAT, people.
He's a professor of "Theoretical Physics". He really knows a lot of what he speaks about. If it's theory, he'll say it's theory. Besides, we're not honestly too far from "Star Trek" stuff. I don't watch it personally, but I follow Physics and Quantum Mechanics. And a lot of professors in the field will agree with everything he said.
I've done my research.
1. Very little of the genome is actually used. Literally, about 97% of it is never used, at least not for making proteins.
2. I did do some additional research since I posted that comment. Almost none of the brain does processing. Most of it (about everything more than a few millimeters deep) is like wiring; it only sends signals from one part to another and does no processing. Still, this means that a computer version of a brain can be reduced a lot.
He has an awesome TV set.
you guys are hella optimistic, there is only a 30% chance we'll make it past the year 2025
now i don't know about that; that's like doing a great job at work and then your boss taking the credit for it. the scientist who proves it - or his team, if that's the case - should make the announcement. but Dr. Kaku of course deserves an enormous amount of respect and credit for being such a pioneer in his field. he's a brilliant man.
I understand it quite well. You just have no idea what I'm talking about. I've repeated myself three or four times at least. The brain is hardwired to run a specific task. It takes information from its memory and the body's sensory organs to either change its memory to cause the system to operate differently (because different inputs mean different outputs, regardless of the function), or create a response to the external stimuli. It doesn't completely change its circuitry.
This Michio Kaku is awesome!!
the evolution of intelligence is from SIMPLE to COMPLEX and back to SIMPLE again. I got this from my own original thinking, but after watching this, when Mr. Kaku said "quantum cycle", it reinforced my thinking. BTW, watch "The Fantastic Planet".
My question was somewhat rhetorical (as it was in response to another person's comment), but thanks for expanding on that.
Hey it the Screensavers! I miss those tech tv g4 days!
Leo Laporte sounds so different here than he does on radio. The wonders of the studio microphone!
9 more years :P This was filmed in 2001, as the show ended in 2005, and that "quantum calculation" Michio mentions was performed in 2001.
You're one of my favorites to watch on the Science Channel. I just asked myself WHY... and I see ET or EDWARD TELLER... that's when I knew.
one word, GENIUS!!
yes it is, this is The Screen Savers, the show he did before Call for Help
I kinda love when people talk about the future in the sense of the next 10-70 years because I'll probably be alive for most of that :D
He is not saying that Moore's Law will end. He is saying that silicon has a physical limit, so information processing must switch to a new method, perhaps quantum computing, perhaps DNA. But it can't be silicon, because there is a physical limit, and because of Moore's Law we can predict when we will hit that limit.
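The "predict when we hit the limit" part can be sketched with a back-of-the-envelope model. The 65 nm / 2006 starting point and the ~1.5 nm tunneling floor are illustrative assumptions, not figures from the interview:

```python
import math

# If transistor density doubles every 2 years, linear feature size
# shrinks by a factor of sqrt(2) every 2 years. Project when features
# reach a floor where quantum tunneling dominates.
start_nm, start_year = 65.0, 2006  # illustrative process node and year
floor_nm = 1.5                     # assumed tunneling floor

shrinks = math.log(start_nm / floor_nm) / math.log(math.sqrt(2))
print(round(start_year + 2 * shrinks))  # 2028
```

Under these assumptions silicon runs out around 2028, which is in the same ballpark as the "about 20 or so years" figure quoted elsewhere in this thread.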
@Casanuda
I'm not entirely sure that I understand your point. For one, I was specifically -not- talking about "borg". My concept was more of an "internet in your head", where you could -choose- to collaborate with others on an idea, or communicate with others at the speed of thought. -Not- a "hive mind" consciousness where there are no individuals. I'm looking to better humanity through technology, rather than destroy it.
Roger Vogelsang created a self-thinking interface device based on quantum entanglement, where freed electrons in pairs (left brain, right brain?) triggered a random character program. In theory, because freed electrons are mysteriously connected while defying our slow mass time, he was able to get the device to communicate digitally, knowing cause before effect. Possibly the closest steps towards artificial intelligence ever. AI is not in the programming but in the way the hardware is configured.
Mr Kaku has the greatest job in the universe.
the one proof that a video is really good is either a million views, or so much discussion about it
You're not thinking it would be a good thing, or you're fearing it; this is why the transition is so dangerous. There is no way humans could live forever on earth.
This guy really is the most valuable human on earth
I miss The Screen Savers, and I really like Michio Kaku. I wish I could meet him lol
Well, the guy commented on that in the interview, about how he thought it wasn't just about speed.
I think it has a lot more to do with how our brains are self-constructing, in that we build neural structures as demand increases. Efficiency is far more effective than bottlenecking all the information through one or two processors.
He co-founded string field theory, although people are starting to think string theory isn't true anymore. He's still a really, really smart guy though, and he knows a LOT about physics and space. He looks at logic and physics, and then pieces them together to predict the future.
It is partially true. I agree on the doubling number of transistors and partially on the inefficiency (internal traces etc.), but there are other things in development. I just happen to know that, because I work in microelectronics field and we design some of the dies. Of course, there are people who know more than you and me combined times 100.
Cheers!
This is probably the smartest person on the planet. The cool thing about him is that he also writes about philosophical theories, and he accepts that physics does at some point cross over into metaphysics.
I appreciate your concern, I really do. While I do believe in the existence of a soul, or something like the soul - understand I cannot make any definitive statements about what it is or how it operates and what effect it has on me. Nor can I factor this belief into my views on the world around me simply because it is metaphysical while I occupy a physical world.
In any case, I do not consider myself an automaton as I have (as well as I can perceive) free will and a cognitive consciousness.
The thing I think Mr. Kaku is not understanding about Moore's Law, is that it seems to apply to technology in all fields, not just in terms of silicon computers. Sure, Moore's law was anchored to transistors, how small they are, and how much they cost, and how powerful they are, but it seems we keep coming up with things that sustain Moore's Law. I'm sure this video was filmed before 3D chips were being developed.
Yeah, true, which is why I can't wait to see when we can combine the two. Which is totally possible.
"I'd say 50 to 100 years is more accurate."
JOConnor313, I am quite studied in the areas surrounding Artificial Intelligence. MIT is not at all the only place that research is being done. In fact, most of the work at MIT (which I have personally seen first-hand) is focused on developing practical applications, not on theoretical research. There has been a huge amount of progress in theoretical research in the past few years.
You can't know it's impossible. We can try, and we will either succeed or fail. It would be silly to assume and miss out on an incredible opportunity.
he teaches at my university, and he is indeed a really smart guy.
of course there is always the opinion that there is no such thing as free will, but we are already deep into philosophy by that point, which is by definition pure conjecture
you have a great point there.
Instead of silicon, computers will be based on diamond chips. Doped diamond can conduct electricity, and diamond handles heat far better than silicon, so overheating won't be a problem and processing power could go beyond what we can imagine today.
You should try it. You will probably be famous, too.
Kudos to you, friend.
He is my professor, he is very intelligent, and I love his class... he has talent!
This is one of the only guys I can understand everything he says on such high intellectual topics. But then again, it's because he speaks in theory and not technicality. I need to go back to school :(
Yeah... with artificial intelligence, I think understanding exactly how intelligence works is what we need to figure out right now. The processing power already exists; it's just waiting for the theories to catch up.
focus is on living longer so we can accumulate wisdom better
@frostheat246 The problem with A.I. isn't hardware. You can throw any number of qubits (quantum computing bits) at a given problem, but if you can't define the problem, you can't solve it. Most people miss this. We simply don't know enough about the brain to model it. We can outperform it in most areas concerning speed and accuracy - that is not the problem. The problem is that we can't properly define intelligence. The understanding isn't there yet, and therefore the software isn't there yet.
"The brain is simply a circuit that adapts without changing its circuitry outside of its memory."
The brain depends on memory to function, therefore memory is as much a part of the brain as any of these genetic circuits you worship. These integrated circuits make information more readily interpretable by the memory. Have you ever seen somebody alive with absolutely no memory? No, because brains don't work without it. These little circuits (which all work together) merely ease processing for the memory.
agreed.. i love creative thinking.
The theta waves of the universe represent the infinite intelligence of all that there is or may be. It manifests itself as arrays of frequencies that are, in essence, conduits of families of frequencies that make up communities of families of frequencies which weave the end product. We are receptors of that intelligence, funneled through our beings, as we are a part of it all.
Michio is the man.
A.I. was a huge hit!!
The film won five Saturn Awards, including Best Science Fiction Film. It was nominated for Academy Awards for Best Effects, Visual Effects and Best Music, Original Score!
It had a budget of $100 million,
but had a global gross of $235,926,552, ranking 16th worldwide... not a hit? That movie was so sad.
His book "Hyperspace" is just awesome
All I can say is the idea of artificial intelligence is a good one, but we must be careful not to make something like Skynet from the Terminator movies. That idea for the chip is a good one too, and hopefully engineers will keep it in mind.
The thing is, technology HAS to eventually reach the complexity of the human mind, since it's the next epoch above it in evolution. The mind doesn't have any shape or form; it's just a complex interaction between chemicals that results in a very strong form of information.
Interesting conversation.
They say that computers will never truly understand. One thing wrong with that: in order for a computer to function, it has to understand in the first place. The problem is that people are trying to write it as one code/one program. It needs to be written as separate codes/programs that intercommunicate with one another.
@NikeGolf118 Yes, but the exponential growth of computing power did not start with Moore's Law. It (Moore's Law) is only the latest paradigm of computing. Before that, it was transistors; before that, vacuum tubes; and before that, mechanical computing. All grew exponentially in power. The next paradigm could be quantum computers (as Michio Kaku states) or 3-dimensional nanotech-based substrates. Either way, the ultimate computing potential will not be reached for quite a long time.
@buneter Moore's Law, which states that about every 18 months, computing power (transistor count on a chip) will double
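To see how fast that doubling compounds, here's a tiny Python sketch (the 15-year horizon is just my own illustrative choice, not something from the video):

```python
# Moore's Law sketch: doubling every 18 months, compounded over 15 years.
months = 15 * 12                 # 180 months
doublings = months / 18          # 10 doublings
factor = 2 ** doublings          # 2^10 = 1024

print(f"{doublings:.0f} doublings -> {factor:.0f}x the computing power")
```

Ten doublings in 15 years works out to roughly a thousandfold increase, which is why the curve always looks so dramatic in hindsight.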