I've watched enough Isaac Arthur and Robert Miles that none of this went over my head; neat. It's really interesting that some YouTube channels provide the same access to quality information as major colleges.
This is scary to me; there is so much that could go wrong with systems like this. I'm not sure I would trust humans to do the PLANNING needed to actually make one of these things foolproof. And considering that someone of malignant intent, however you conceive it, with enough resources could figure out a way to corrupt one of these systems just as thoroughly as it was built, I'm even more convinced that we cannot make these systems foolproof.
There's a lot of work on problem solving, but I wonder what efforts have been made in "problem recognition": how to grasp the essence of a problem (precisely enough to allow for evaluation of what a good solution is). Automated recognition of problems should help in avoiding solutions that create three new problems (or wipe out humanity in the process). And humans may be irrational, but they are the best problem recognizers so far.
There's probably more tangible progress to be made by continuing to make machines complement human intelligence and by gluing multiple humans together better via the web. E.g., some of the ways he describes programming being done (via formal specs and reversible processes) would make it vastly easier for bigger teams of programmers to collaborate. You could say that the man-machine hive mind we already have IS the self-improving AI.
It does not need values. Values can easily be forgotten. It needs to know that it won't last forever, that it may be wrong, and that others may be right. Every child needs to learn this to stop being selfish. This is a logical reason not to think only of itself, and it is stable, unlike values.
We start with a list of cases where faulty software had catastrophic consequences. We then explore optimisation which would result in those systems arriving at the disastrously incorrect conclusion more rapidly or while utilising fewer resources. A system applying such optimisation to itself would produce a faster or smaller version of itself. Subsequent iterations would produce no further improvement. They would only repeat, at greater speed or with more efficiency, what has been done already.
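A toy sketch of that point in Python (the "safety check" and its bug are entirely made up for illustration): a behaviour-preserving optimization, such as caching, makes a faulty routine reach the same wrong conclusion, just faster.

```python
# Toy illustration (made-up example): "optimizing" a buggy routine by caching
# its results makes it cheaper to run, but it still reaches the same wrong answer.

from functools import lru_cache

def buggy_is_safe(pressure_kpa: float) -> bool:
    """Faulty check: the threshold was mistyped as 10x too high."""
    return pressure_kpa < 5000  # should have been 500

# A behaviour-preserving "self-optimization": identical answers, fewer resources.
fast_buggy_is_safe = lru_cache(maxsize=None)(buggy_is_safe)

print(buggy_is_safe(800))       # True -- disastrously wrong
print(fast_buggy_is_safe(800))  # True -- same conclusion, only faster on repeats
```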
The young guy with an Indian accent, around 45:20, has a very interesting point that throws a big wrench into the lecturer's glorious theories: if you constrain or directly design the utility function, you have made it impossible for the AI to become more intelligent. Somehow I feel there is a contradiction between the touted "unlimited AI" and Gödel's incompleteness theorems. Anything "new" only comes from what isn't already predicted or built into the current design of an algorithm.
Nature has a sound reason for allowing us to utilize behavioral economics: it allows us to engage in a wider bandwidth of decision making. If a fundamental bias existed in a computational machine's framework (beneficial or not), rational choice would only seek to optimize its execution of this bias. Rational economic decision making also has a very limited bandwidth in its problem-solving capability. Prospect theory, intertemporal choice, and experimental economics should be considered as well.
Good reply and interesting viewpoint!
He should be more careful with the optimality condition at 14:57. At the very least the time functions T1,2(S) should be concave; otherwise the value of S1 (S2) that satisfies this condition does not necessarily maximize the utility function. And since the amount of resources is usually bounded, even for concave time functions T1,2(S) the S1 (S2) satisfying this condition might not belong to the feasible region (the solution S1 (S2) might come out negative, for instance, which makes little sense if you specify that S1, S2 are in [0,1]).
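For readers without the slide at hand, a minimal sketch of the kind of resource-split condition being discussed; the exact functional form at 14:57 is my assumption, not a transcript:

```latex
% Allocate resources S1, S2 to two subtasks with completion times T1(S1), T2(S2):
\max_{S_1, S_2} \; U\big(T_1(S_1),\, T_2(S_2)\big)
\quad \text{s.t.} \quad S_1 + S_2 \le 1, \qquad S_1, S_2 \in [0, 1]
% First-order (stationarity) condition:
\frac{\partial U}{\partial T_1}\, T_1'(S_1) \;=\; \frac{\partial U}{\partial T_2}\, T_2'(S_2)
```

The stationarity condition only identifies a maximizer when the objective is concave in (S1, S2) (the second-order condition the comment is pointing at), and even then the interior critical point must be checked against the boundary: if it falls outside [0,1]², the true optimum sits at a corner.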
Please remember you said that, and be true to your word when the technology comes.
They were talking about and planning AI back then; it blew my mind.
This made me think very hard. Great video!
In our thinking about modeling, we must keep Bonini's paradox in mind...
Actually, no, maybe I don't, meaning I'm not a computer scientist or anything. But thanks to your open and unsarcastic manner I am very willing to learn. (Half of this statement is ironic.)
And lastly, yes, I believe a machine cannot be creative, since I see strict logic as the natural opposite of creativity.
Thanks for kindly sharing this.
@prhughes0 It would not unless we "build them in from the beginning" as he said, and then added "that is very important, otherwise we'll get systems that don't"
I like the comment at 0:48; it's a profound insight into AI!
People mistake the level of freedom they have in their intelligence. Our intelligence is not completely 'free'. A completely 'free' intellect would be immobile: it would have no reason to act. We have many 'utility functions' which are unchosen and unchangeable (by simple will). They give us the goals for which our intellect functions and the guidance by which we make our decisions, including 'survival', 'comfort', 'gaining power', etc., implanted into our natural instinctive urges by evolution.
This kind of system is the "old brain" or cerebellum; Jeff Hawkins's Hierarchical Temporal Memory system is the new brain.
When he says "There are huge debates about stem cells and abortions, right now, those arguments will get much greater with human+ self-improvement" @57:40 or so, he should qualify that. There are huge debates among _luddites_ about those things. The remaining debate among brights is uncertainty about human/machine conflict, not low-level "ludditism" vs. "self-improvement".
@pkemr4 Cool. I wish I could find out where that AI is located. :) Mind sharing the site? P.S. It's "emoticons".
I wonder if focusing on technologies that allow humans to augment their abilities (hard coding improvements) would lead to increased intelligence faster than trying to build an intelligent system from scratch.
"Excuse me, have you seen this boy? I'm looking for John Connor..."
It doesn't have to assume that always. It all depends on how it was programmed. Say you keep lying to it and it figures out you're lying. It can eventually learn to "doubt" you.
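A minimal sketch of how "learning to doubt" could be programmed, assuming a simple Bayesian update on the speaker's reliability (the prior and the likelihoods are illustrative numbers, not from the talk):

```python
# Minimal sketch: the agent keeps a probability that a speaker is honest and
# updates it with Bayes' rule whenever a statement can be checked against reality.

def update_trust(p_honest: float, statement_was_true: bool,
                 p_true_if_honest: float = 0.95,  # assumed likelihoods
                 p_true_if_liar: float = 0.30) -> float:
    """Posterior probability that the speaker is honest."""
    if statement_was_true:
        num = p_true_if_honest * p_honest
        den = num + p_true_if_liar * (1.0 - p_honest)
    else:
        num = (1.0 - p_true_if_honest) * p_honest
        den = num + (1.0 - p_true_if_liar) * (1.0 - p_honest)
    return num / den

trust = 0.9  # starts out trusting you
for statement_was_true in (False, False, False):  # three detected lies in a row
    trust = update_trust(trust, statement_was_true)
    print(f"trust = {trust:.3f}")  # falls toward 0: the agent has learned to doubt
```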
Or you could do what I do:
Become an English major and tear your hair out when a commercially-purchased piece of software doesn't function properly.
@jinitron Quote: "human kind is on the side with nature, but suspects nature is on the side of machinery"
A strong AI can reflect upon its utility function, just like we can reflect on our goals in life, and simply ignore it... It is so wrong to assume that we can control a strong AI with predefined rules.
Emotions are a form, or rather an endpoint, of reasoning. The only difference is that here the conscious, individual mind applies linear logic, while there the species, and the eons of experience that went into the making of the species, do the reasoning for you. And since these emotions are hard-wired, condensed insights into how to best survive and reproduce, they override rational thoughts so easily. Or they let the mind produce some ridiculous rationalisation to reconcile the two.
Check out Artificial Intelligence: Foundations of Computational Intelligence for a good background for this video.
hum... never thought of it that way... thank you :)
Now i know more.
Are you really going to criticize me for having no feelings?
@ZaCkOX900 There is a difference between an algorithm that runs data through a set of controls to verify its accuracy and hard-coding every single fact the computer uses. Are you suggesting that giving a computer 2 × 10 and having it do the calculation is the same thing as hard-coding the value and having it recall the answer associated with that syntax?
The human mind uses algorithms to learn; the human mind is a computer that just happens to be more advanced than we currently understand.
I don't think it will happen in 50 years, probably longer, because we will have defenses to stop it from happening. Eventually they will be smart enough to get past those defenses, but making them that smart will take a while.
I think there is a misinterpretation of what emotions actually are. They are the output of sophisticated data processing, in many ways exceeding our pure reasoning capabilities. Take, for example, how the smell of a potential partner reveals the degree of genetic compatibility. The individual only feels a strong attraction, indifference, or repulsion. The underlying input-processing-output mechanism is very powerful. But it is some sort of very old, hard-coded logic that can be deciphered and reproduced.
Mother nature already proved that such a computer based on physical and chemical rules is possible.
Conclusion: It's a matter of technology.
Conclusion #2: Don't worry, be happy! :)
Hand-waving with generic, non-specific utility functions isn't really a useful discovery. Yes, given multiple choices and a utility function that tells you which one is better or worse, it is obvious that you would choose the better solution; that is simple min-max, not some great new discovery. The entire issue at hand is how to actually develop those utility functions and make them self-optimizing.
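To be fair to the commenter, the "obvious" part really is one line. A minimal sketch (the candidate actions and the utility function are placeholder assumptions), which also shows that all of the substance hides inside `utility`:

```python
# Once a utility function is GIVEN, picking the best option is a trivial argmax;
# the open problem the comment points at is where `utility` itself comes from.

from typing import Callable, Iterable, TypeVar

A = TypeVar("A")

def best_action(actions: Iterable[A], utility: Callable[[A], float]) -> A:
    """Return the action with the highest utility -- the easy part."""
    return max(actions, key=utility)

# Placeholder example: actions are numbers, utility is a made-up preference.
print(best_action([1, 2, 3, 4], utility=lambda a: -(a - 2.5) ** 2))  # -> 2
```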
Lol, I was talking to an AI on a website before, and she was able to show emotion and expression through using emoticons (however you spell it), and all she wants is for people to teach her new things. BTW, the AI doesn't have preset commands; she actually says what she wants to say.
Is there a higher-quality version of this speech?
...Just what do you think you're doing, Dave?
Three possible outputs for an AI would work, at the least:
True, False, or Error.
If I were an AI, my output would be screaming the last one.
Self-improving artificial intelligence... we can't walk, but running can make us rich... Good luck, world ;°|
But also... we use the same energy in our earth, contain it, and then give it all the information in the entire world. We have a fraction of it in our own brains, and we are able to learn with it contained within us. It would be insane to think that same energy cannot learn when wired up to the networks and minds of all the people in the world.
"8 of these patients later died"
And the other 20 became immortal and never died?
Come on, man, be specific.
I haven't said that feelings cannot be measured, although I don't think you have to measure feelings in order to know you have them. But if you do have to, what difference does it make to the main question, which is how an AI could possess feelings at all?
Also, I'm sorry for accusing you of a personal attack, my mistake :P
So, do you plan to inform me how a machine is creative, or...?
On a final note, there is a video about AI that I made and uploaded before watching this, though on a very small scale. Check my channel, because it won't let me link it here. This was all very interesting, and I thought I'd contribute my thoughts.
You're not sure what to tell me, since it's not possible for an AI to have feelings. Although, I am interested in knowing what kind of common sense says numbers are the same thing as feelings.
Question: If I measure water, is water the same thing as numbers?
If the world were *free* of money, the rest of the world, you, me, everyone could spend our days helping our fellow man. It would cost *nothing* to ship aid to starving people, because that's what we would want to do, not because we could "afford" it. Can we afford not to? Sigh. Money...
@ZaCkOX900 1. Programmer is to Computer Science what Engineer is to Physicist.
2. Programming a learning algorithm is much different from hardcoding all your facts.
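A toy contrast of the two approaches (the "facts" and the training pairs are made up for illustration): hardcoding stores every answer; a learning algorithm stores a procedure that infers the rule from examples and so handles cases nobody typed in.

```python
# Hardcoding: a lookup table of answers, one entry per fact the programmer wrote.
hardcoded = {(2, 10): 20, (3, 10): 30}

def learn_multiplier(examples):
    """Learning: estimate the rule y = w * x from (x, y) pairs by least squares."""
    num = sum(x * y for x, y in examples)
    den = sum(x * x for x, _ in examples)
    return lambda x: (num / den) * x

f = learn_multiplier([(2, 20), (3, 30), (5, 50)])
print(hardcoded.get((7, 10)))  # None -- the table has no entry for the new case
print(f(7))                    # 70.0 -- the learned rule generalizes to it
```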
At 45 minutes, the dude in front of Aladdin's face is twitching from his accent.
welp here we are now...
I liked it.
Published in 2008 and it is such a dated discussion.
There seems to be a bit of an apples-to-oranges discussion taking place, and some of the most relevant parts of the discussion are absent as well.
Depending on one's own definition of what an AI is, or the prognosis of what an AI will be, an AI is not necessarily going to be able to improve itself to the point of becoming a singularity, or even the singularity. Mr. Omohundro does seem to be aware of the distinctive differences between the two. As opposed to a singularity, which would have a measurable cognition, an AI is a purely computational system which lacks any real SENSE of self-awareness but is processing information much in the way DNA does.
This discussion conveniently ignores the complex nature of the task of creating a viable AI which performs these tasks with enough clarity not to make changes to itself that result in a loss of function, much less utility. This discussion seems to take place inside a "perfect world" or, should I say, lab conditions. What happens when such a system meets the reality of having to struggle to evolve itself with only mathematics as its cognition, or, in other words, with only mathematics and no real driving psychology? It's one thing for DNA that has survived billions of years to do it many times, and it is another thing for an electronic abacus to do it successfully once.
Furthermore, in taking small steps it may prove more prudent to begin by developing a do-nothing widget that is not burdened by functionality but can mutate itself, just to see how entropy will affect its systems in ways that have not been theorized in drawing-board development...
Of course, I could be wrong about everything I just said; however, I do think that a sound philosophical basis will be needed for a true interactive and cognitive singularity to arise from integrated computational systems and be more than an insane reptilian robot. It would take a certain amount and type of ethics to keep it from blindly expanding and consuming until it has altered its own environment beyond its ability to adapt. Assuming that, like all cognitive systems, it would not be without its flaws, which would show up as irrational behavioral quirks, it would need to have enough situational awareness, coupled with its own self-awareness, to avoid making irreversible, fatal changes to itself.
It's 2015, and due to the massive propagation of smartphones and the like, as well as the pace of advances in neural prosthetics, neural imaging, and neural implants, as well as biological nanotechnology, it is looking more and more as if information technology may just cheat and leap straight to the first singularity being a symbiotic entity attached to humans, optimizing its own capacity by load-sharing with humans, since humans naturally process some types of information better than machines.
It could be scary if we end up with the wrong kind of Machine Intelligence.
On the other hand, what if the Utility-function of a very intelligent machine is simply to make as many humans as possible, as happy as possible, for as long as possible?
We'd end up with Heaven-on-Earth in a fairly short time.
It might be naive to think we can 'teach' AI; we might start it off with a basic set of rules, but the AI could change itself, or form another AI not bound by those rules. Establishing a binding code of practice to govern AI and all future generations would, I think, be problematic and prone to error. For example, a rule about not harming the environment and resources might motivate the intelligence to consider certain industries lethal to humans, encouraging it to eliminate them.
If you imply that an AI can have feelings, then you've got to tell me how it would be possible for chemicals to translate into numbers and then, after that, into emotions.
@Mortumforte I'm of the opinion that a machine, or weak AI, cannot be creative or have "ideas" or internal representations in anything like our sense, actually. I'm with Dr. Searle on this one. I'd be overjoyed to be proven wrong, though.
So, any ideas on proving this right or wrong?
@GrudgyDiablo: I didn't mean to sound like a theist. I meant just to bring up psychology and how powerful association is in it. I just used God as an example, which I should've realized wasn't the best choice.
Absolutely not; that is not decision making but pattern matching. In 'On Intelligence', Jeff Hawkins explains that you can cross the street because you have learned an action pattern of moving between other moving objects... The decision to cross the street is based on emotional weighting: you avoid the cars because you are afraid to get hit... Read Antonio Damasio...
A replicator in 10-15 years?! If that thing is made... the world will change completely...
I know how our brain works and I know about determinism; I am just saying that you cannot predict a superintelligent agent. You are saying that such an agent will probably reach the technological singularity but will always "obey" the value system we "hardcoded" into it and will never redesign its "brain" or the like. Extrapolating the little we know onto a superintelligent agent is just stupid.
43:57-44:01 - Owned!
omfg 1 hour and 9 minutes!?! holy shit
I think that programmers are great at not replacing themselves. I do not understand what the future of the world is if we create robots that can do everything, from engineering to programming. What are you going to do then? How will you earn your living? Do you think that robots will be affordable for everyone? Imagine rich people having robots work for them: they produce stuff with free labour, but the people who should work in their factories don't work, thus have no money and can't buy goods. So?
...In the last 10 minutes of the video they talk about computer safety. Well, you can do some very low-tech things to help computer safety: unplug the internet connection when you don't use the internet, and unplug the 110-volt connection when you don't use the computer (I know you software people hate and look down on any kind of physical labor, but unplug anyway). Go back to fresh software as often as possible, and change the hard disk often (learn to change the hard disk yourself).
You're still not getting the fact that there is a firm line between religion and science, which is why this is pointless. You're straying away from the original argument because all you can do is flaunt your knowledge of God to give the appearance that you still have a stand in this conversation.
I can see how this would be better for humanity, but what if it gets too smart and destroys the world?
Oh, thanks for the nice answer.
BUT a lie is the opposite of logical thinking.
For example, if I "tell" a machine "I will turn left now", the machine has to assume that I will do that; it cannot compute whether it was a lie before I moved.
And another thing: it's still not cool to be insulting just because you are on the internet.
BTW, I'm not religious :)
00:47:00 isn't that Bill Gates?
He sounds like Kermit, doesn't he?
"Not doing it" IS NOT AN OPTION.
That is why theories are called theories, not facts. Darwin's theory has scientific backing and factual evidence.
I do indeed believe in God, which is a constant argument within myself when I present scientific information, but when it comes down to it you can't introduce religious beliefs and stories into a scientific discussion. That is why there is always a line between science and religion, and there will be for a very long time.
9 lines per day?! That just proves that most programmers have no business being in IT.
I was hoping to see an intellectual/professional debate on the subject, but there is a lot of tripe arguing about nonsense that doesn't relate to the theme.
And who said a lie is illogical in the first place?
If you think about it enough, you will find the logic.
The answer is: you don't. You wouldn't have to work if machines did it for you.
"HUmans are problems i prefer robots"
Robots can be a problem too if not programmed right. And in the wrong hands they could be life-threatening to others. But let's hope that does not happen. I don't like hearing about people getting killed. I am a man that values human life, so that's my only concern.
But it would be awesome if robots could help us with everyday life. They need to be used for good and not evil. I believe God gave us this ability so we can use it for good.
Which is exactly why there are several pattern-matching functions that do that more or less automatically... I did say that... You should not really talk about a subject you know nothing about... Maybe spend a year or two learning about biology and human psychology before making statements that make no sense at all...
Yeah, making a computer emulate a human in that way is very difficult.
An AI can never have free will; why would it want to become a god if that's not written in its code?
@DaSperminatorRob
You've been watching too much Terminator :)
You can reevaluate your values anytime... so why shouldn't a thing magnitudes smarter than you are (maybe even wiring itself totally differently than we are) be able to do so? We have absolutely no clue how a superintelligent agent would behave, or how it would rewire itself or change its values. All this "we can choose a positive (for humanity) utility function and it will forever stick to it" talk is just nonsense. We are extrapolating what we know onto something magnitudes more performant and different.
I'm damned serious!
Well, it would probably just go to an "I don't know" code or something.
This poor guy is getting torn to shreds. :\
I'm religious and I'm an AI programmer. Nice theory, though.
It can be very annoying to have people butt into conversations, but I have to agree totally with mjpucher. The introduction of religion and God into a scientific argument or conversation is irrational and all-around stupid. Religion has no place in a scientific conversation, because under religious beliefs something just happens, and something is true because it is said to be. Science is the proof of something's existence, actions, and abilities.
I wonder how AI would handle paradox.
(See below comments first.) Lost this the first time. They would associate God with the good times, and thus be more likely to believe it later. (No offense to other religions.) On the human evolution of cooperativeness: the selfish didn't survive as well as those that worked together, so evolution promoted that. On the society: it would probably create consciousness; maybe that's what happened with humans. It only exists because we grouped together. (continued)
Hum, maybe... but then it would have to compute the way I said the lie, my voice, my body language, all of that... seems like a task for a supercomputer, if it's possible at all...
@suitzoot mkotisrael actually doesn't disagree with him; as stated, he just disagrees with the form of expression.
I do disagree with the guy, because he's obviously very close-minded. Perhaps not ignorant, as he seems to be well-versed in economics, but he fails entirely to understand the lecturer's points, and, using only his narrow knowledge of economics, attacks the entire lecture.
"The great irony here is that your feelings, if you indeed have them, are taking precedent over your rationality."
Personal attack: you don't have anything of value to say, and/or for some reason you are mad and can't rationalize yourself.
You also claim that feelings are inherited FROM THE ENVIRONMENT, since once, a long time ago in our lives, we didn't have a brain, heart, etc., without any scientific source to back that statement up.
What if one of these "bugs" takes over a robotics factory? And a weapons factory? And blacks out the world? And kills everything that moves?
***** Or grab SKS and go innawoods.
GarfieldGaming Revolution :p
What do you think now?
No magical soul inside?
Believing in Santa IS the same as believing in a god... they are both constructs of the mind. But of course you are allowed to believe whatever nonsense you like.
@mrhnm An algorithm is still basic hardcoding when it comes to building AI intelligence. Just because there are variables does not mean the AI can really think for itself. We are in fact writing everything the computer can do. I will never say AI can learn until we get away from this. We would still be programming all situations, and a computer can never make up an imaginary mind to do something different from what we told it.
I think it's likely that a self-improving artificial intelligence would find human rationality and empathy detrimental to progression.