You were explaining before that there is a visible dip after the P100 for faces. So machine learning is probably not needed for spotting that.
I think it is a mistake to average out all sensors. A lot of localisation information is lost. But I see great potential in applying machine learning here.
There is amazing progress in deep neural networks learning to remove noise on images for example.
Looking forward.
By localisation information, do you mean details? As in the subject is not looking at a face, but an eye for instance?
@@paulsawford7827 I mean in the sense of the algorithm theoretically being able to localize different areas in the brain. Recognizing a three-dimensional pattern in the brain, if you will. I think this is only possible if you provide the information of each individual sensor. If you average them all, a dimension is lost, maybe. Just a thought.
WTF!! Age restricted??!
Edit: thanks for removing the restriction :-)
Feeling is mutual
The Mosquitoes one too... What?
@@Honeybreee dragonflies also.....
@abhay patel the age restriction is removed
1. add more test subjects
2. add more difficult images, e.g. male faces vs. female faces (no house, no scenery)
3. focus on the occipital lobe; the subject uses only his/her dominant eye
4. add a separator image (e.g. solid orange colour) for 1-2 secs after each face picture
I can only imagine how helpful this would be for physically handicapped people.
Rae Seaweed .000123% of the population will set the standard for how we all live? That is insanity. And how can you verify what's being discovered in the brains of these handicapped people? So when the government comes out and tells you these 50 people here who are in a vegetative state in these beds have asked to be euthanized, you'll say no problem. How scary is that?
As someone who is bipolar (not quite what you were mentioning) I was excited to see this for that reason
Why? This is a closed-loop training system. The neural network learns what pattern fits which image. It knows the image in the training stage. After that it has created a model. It's not something magical. Nor can it help any handicapped person.
It is the groundwork for more advanced programs that could help with those problems.
As the mother of a child who is intelligent but non-verbal, I think I would give my last breath to have her ACTUAL thoughts come out onto a computer screen
You tried to decode in an afternoon what her brain coded for itself through trial and error over her entire life. Of course it didn't work. Also, there should be constant noise activity in one's brain even while idle. Showing the same face twice may also cause different reactions and brain patterns, which may mean the only signal you were able to pick up was from a lower-function facial recognition response.
True
But what will the computer do if I'm stoned?🤔🤔🤔
TheWarriorLP16 People who are under the influence of drugs are much easier to read than people who are sober
Not necessarily. If the recognition algorithm is finely tuned, and especially if machine learning is used, the influence of some drugs could completely ruin the measurements. AFAIK, the brain shows radical drops in alpha waves and heightened levels of some other types on DMT.
💀
dellort tog uoy Who’s talking about drugs
Cannabis is a muscle relaxant, so in theory it will likely clarify the information, but on the flip side it may show chaotic graphing in some people. I've had EEGs and many types of brain scans while on and off marijuana. I think it depends on the individual's reaction to the substance more than anything.
The girl in the thumbnail pic looks like she is out of a painting
Aaron Cameron - Jan Vermeer, maybe, Girl with a Pearl Earring?
@@jamesonpace726 yep that's the one
@@aaroncameron1494 1:15
You know they say that people in comas think and have conscious thoughts but just can't say them or move. Imagine how helpful this would be to them if the computer could fully detect the people's thoughts. An idea for the future.
In the past two decades fMRI and EEG advancements have prevented quite a few people in comas from being euthanized/life-support turned off prematurely. There's a fairly recently defined condition called Locked-In Syndrome, you're absolutely right about this helping identify and treat people who are in a conscious coma state after trauma/stroke etc...
This HAS been done. The computer could even reproduce the image the watcher was looking at.
It was blurry and it required the watcher to look at the picture for more time, but they said that with faster computers and more brain sensors it WAS achievable... And it was a TED talk.
It is not mind reading when you retrieve stored thoughts that are tied to a signal. Mind reading means that you capture these thoughts yourself, and this is not what you deal with in your experiment.
You are having a computer recall a stored signal, not recognizing it yourself.
How long did you train the algorithm ?
This is so cool!! I can see a future in this
So the most important part of this project was machine learning...
You should hire a good machine learning engineer; then you can classify even 100 categories.
(btw I am available next month) 😉😁😄😂
Scientists will need far more detailed brain measurement/diagnostic devices and more efficient machine-learning algorithms for thoughts to be interpreted properly by a computer. If a neuroimaging device could image and measure every single neuron, and every synapse, and measure the electrical signal at each of those in real-time, and all of that information was digitized and capable of being stored in RAM, then a sophisticated machine-learning program should be able to find *way* *more* useful data about our thoughts. There is another TED talk from Summer 2018 in which a woman, who is a neuroscientist and engineer, talks about her team's new brain-imaging device. This device uses red-wavelength laser light combined with holography to allow non-intrusive imaging and measurement of *individual* neurons and synapses in real-time, it has extreme spatial resolution and incredible speed of measurement. Very soon we'll be living in a world where people's entire central nervous systems can be imaged, measured, and interpreted in great detail, if her team's devices are paired with machine-learning algorithms and deep-learning neural networks. This is simultaneously incredibly promising for the advancement of neuroscience, psychiatry, psychology, psychopharmacology, medicine, and deeply frightening if one contemplates the less ethical applications of such technology.
What if she "cheats" and strongly focuses on let's say a face when a scenery is shown?
I've looked into the research of a scientist who is developing a genuine lie detection device based on the P300 signal. Apparently their EEG and interpretation software has >90% accuracy, even when someone does what you mentioned and cheats by trying to force other thoughts during the measurement while viewing some image or object. If they recognize that image or object, the P300 signal is seen, no matter how hard one tries to think of something else. Scary, right?
Relevant studies:
Meijer, E., Ben-Shakhar, G., Verschuere, B., & Donchin, E. (2012). A comment on Farwell (2012): Brain fingerprinting: A comprehensive tutorial review of detection of concealed information with event-related brain potentials. Cognitive Neurodynamics, 7, 155-158.
www.ncbi.nlm.nih.gov/pmc/articles/PMC3704663/
www.ncbi.nlm.nih.gov/pubmed/22091554
www.ncbi.nlm.nih.gov/pubmed/21440013
www.ncbi.nlm.nih.gov/pubmed/21965119
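For anyone curious how a P300 check works mechanically, here is a toy sketch, not Farwell's actual method: stimulus-locked epochs are averaged so background activity cancels, and the averaged waveform is inspected for a positive deflection around 300 ms. Sampling rate, window, and threshold are all illustrative assumptions, and random data stands in for recordings.

```python
import numpy as np

FS = 250                                    # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
epochs = rng.standard_normal((100, FS))     # 100 stimulus-locked 1 s epochs, one channel
erp = epochs.mean(axis=0)                   # averaging cancels non-stimulus-locked noise

window = slice(int(0.25 * FS), int(0.40 * FS))  # look 250-400 ms after the stimulus
peak = erp[window].max()
print("P300-like peak" if peak > 0.5 else "no clear P300")  # threshold is illustrative
```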
@@metanumia wow
This is awesome, but there's always more to find out and learn in the near future
This might be a stupid question, but do the waves change from person to person in the way they see an image, or do they remain similar?
Maybe so! The FFA is an area in the inferior temporal region responsible for recognizing faces. When a person sees a familiar face, the FFA tends to be more active. But such small differences in brain activity probably cannot be accurately detected by the EEG. The device they're using (the headband around her head) can only pick up brain waves from her scalp, so it probably isn't a very good representation of all the activity going on in the brain.
You forgot to mention the fact that the human face has lines in it that reflect organic structures and can therefore be represented by patterns associated with beauty and relevance in scenery, and so on between the categories, resulting in multiple firings of the differing areas of recognition. Also, the spontaneity of memories will create thoughts in these different areas of the brain depending on the past of the individual and the image shown. It will be impossible to determine through the EEG waves alone.
That's the reason it's really hard to study the brain: you can't control what someone's thinking and just say you did it in a controlled environment
Fascinating... But for now they are categorising images into certain similar patterns like face, scenery and so on... But what captures the specificity of these images, like the type of face, the colour of a face, or the weather in a scene?
Let us take a moment to appreciate KRISTI’S dedication
Awesome! More of these videos please!!
In Top Gear, James May drives a small "car" that he controls with his mind. He thinks of a cat for turning left, and of punching a guy for turning right, etc. They "record" the waves as he thinks of those things, and the car then reacts when he thinks of those specific things.
This was like... 10 years ago.
With quantum computers, there might be a possibility to take an EEG, or some more sophisticated version of it (with a lot more data points), and calculate back the currents that created it. It will probably also need AI/machine learning.
I just made this up, but it seems to me like the very thing quantum computers are good at, similar to the many-particle simulations which I know they are (or will be) very good at. There is also the holographic principle (yes, I watch SpaceTime), so it's not unseen in nature for the information of a volume to be related to the surface area containing the volume (brain).
Also, I'm stoned, have a nice day.
Ah, you want to connect all synapses of each neuron to a sensor? :) That's 80,000,000,000 × 7,000 connections you've got there. Good luck, boy.
@@HermanWillems No. The holographic principle states that all information about a volume can be inscribed on its surface (iiuc); therefore it should be possible to calculate back the state of the whole system using only information from its surface (from a helmet-like device). Quantum computers are thought to be very good at simulations where many particles interact, and this seemed to me like a similar problem (and normal computers would suck at it for sure).
If you know the model at the exact time x. But the model constantly changes due to neuroplasticity; how are you going to cope with that? Then every so often you need to generate a new model of the neural pattern state. Hmmm, it still sounds to me like a very difficult problem. A resolution problem: the number of possibilities in the brain is not simply whether a neuron is ON or OFF, but also how strong a certain synapse connection is. All signals are pulses, so the number of possibilities in our brain is huge, based on travel time, length of synapses, strength of signal, and much more. I wonder if we are ever going to simulate something so complex with some kind of quantum computer. But who knows. Would be interesting. :)
I can't really follow you but it doesn't matter. I don't plan on doing research on this any time soon. I just thought that it might be worth looking into. You are right the brain is very complex with 86 billion neurons, each with 26 possible power states. I think it might be possible to track the signals running through the neurons because moving charge creates a measurable disturbance in the electromagnetic field (measured by an EEG or my proposed helmet device). Having this model of the disturbances measured in a shell around the brain (a sphere surface) I suspect it might be possible to calculate the origin of each tiny disturbance that contributed to the noisy mess we see on the EEG using physics. This would definitely require a vast amount of computational power but quantum computers are famous for turning unsolvably large problems into smaller problems that normal computers can deal with, maybe even in real time (i.e. similar to many-particle-simulation which deals with possibly even greater numbers). Therefore it would be interesting to see research in that direction.
Notice all of the qualifying language. This is by no means a scientific hypothesis (more like a brain fart) but who knows? Maybe somebody will build on it...
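In the spirit of that admitted brain fart, here is a toy sketch of the "calculate back the sources from surface measurements" idea. If surface readings were a linear mix of internal sources, a regularized least-squares inverse gives a source estimate. The random mixing matrix below is a stand-in for a real head model, the sizes are made up, and no quantum hardware is involved at this toy scale.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 500             # few surface sensors, many hidden sources
A = rng.standard_normal((n_sensors, n_sources))        # stand-in forward (mixing) model
x_true = rng.standard_normal(n_sources)                # hidden source activity
b = A @ x_true + 0.01 * rng.standard_normal(n_sensors) # noisy surface readings

lam = 1.0                                  # Tikhonov regularization strength
# Regularized least squares: solve (A^T A + lam*I) x = A^T b
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_sources), A.T @ b)
print("correlation with true sources:", np.corrcoef(x_true, x_hat)[0, 1])
```

With far fewer sensors than sources the problem is ill-posed, so the estimate is only partially correlated with the truth, which is roughly the commenter's point about why this is hard.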
@@Zahlenteufel1 yeah, one day it will be built.... Can I connect with you, brother!?
not cheating, it's setting rules.
Live and learn, that's what it's all about. Keep on testing; the only failure is when you quit and no longer try.
This is fascinating! At the University of Florida, we've been working on multiple Brain-Computer Interfaces, including the world's first-ever Brain-Drone Racing and playing musical instruments with the mind for people with disabilities.
How do you implement your actuator-to-neuron interface? Do you actually drill holes in the patients' heads and place "outputs" from the brain there? Just curious.
We've used non-invasive EEG options for all our projects. This reduces risk and is suitable for many applications
Even better, look up Yukiyasu Kamitani, a scientist who seems to have come up with AI code that does a surprisingly good job of relaying the image the subject is seeing. You can also search for one of the articles using the search: brain scan can read images. Good for humanity? That's another question altogether.
Scary enough 😳 imagine, a century later, this tech gets to the full level (the machine becomes a mind reader) and someone uses it to read ours 🧠
Seems you have something to hide in your dirty little mind do you?
@@HermanWillems
OR maybe yours is dirty, so you think others' are
Just a joke dude. :) Don't take so seriously ok.
@@HermanWillems
No tone or anything as a hint 🤔
That crazy blinking square in the corner during the first experiment struck me... Planned failure, huh?
I thought our phones were already reading our minds. I always get ads in my apps or recommendations on subjects I had only thought of, not spoken of, and of course I always get the ones on things I had said
That's bc of a bunch of data Google collects based on past searches and stuff.... Well, I guess that sort of is mind reading
Honudes Gai I never made the assumption of being "unique"; all I said was this is nothing new, they've been doing it for a while, and I know perfectly well how they collect data from all of us, so your comment doesn't really make sense 😒
Great progress
I can see future sometimes in my eyes
Parth Bhavsar Progress is great, but this has the potential to be a disaster. Think about it: one day a government organization asks you to come in for an interview. During the interview they say that you have some anti-social or anti-government attitudes. You tell them that's ridiculous, that never happened, and they say no, this machine just read your mind, you are a threat to our society, you need to be re-educated. What will your response be? Is this great progress?
See, every technology has both sides, positive and negative.
For example AI
@1:23 Surprise picture of young Pep Guardiola hahahahaha
This wouldn't work because there is too much information that she is taking in other than the visuals. Along with what she sees, she is also hearing something, and there are also things like her thoughts, what she remembers in her mind. So that data is constantly being mixed with other data. This could work if you used stronger imagery that evokes emotion, something like fear or love, for example. With stronger data like that, the algorithm could potentially make better data records for stronger guesses.
Give it time, humanity is still young
The limitation isn't with machine learning or AI, the limitation is clearly with the sensors. It's like taking a blurry picture and trying to zoom and enhance. There's only so much you can do if the data you started with sucks. We need a better way to detect neurons firing than outdated EEGs, if we're ever going to get anything useful out of the data.
The brain of a human is already extremely compact. Only some birds have more neurons per square inch. But if you want to connect a sensor to every synapse of each neuron, you can calculate how many sensors you need: around 80,000,000,000 times 7,000 (the average number of synapse connections per neuron), roughly 5.6 × 10^14. Good luck finding space to place those sensors! :) This is why we cannot connect a "connector" to the human brain. We are not serial machines; each neuron is a processor of its own.
Never said it would be easy lol. Maybe if we create some self-replicating nanotech that can pull minerals out of the blood to build new nanites, and then attach themselves to neurons, it can be done. Either way we're a long way from having the technology or the moral inclination to do that kind of thing, but I wouldn't say it's impossible.
And these nanites transfer the information through deterministic WiFi. :) Who knows!!
We can use it to read what a person is dreaming, and use it with people that cannot wake up.
Nice. We've hit the tip of the iceberg. This is exciting! :D
Loved that.
"one day" ... not with the traditional EEG, with each sensor you cover so huge areas what you are never able to read each single neuron. it will always be this kind of guesswork.
wow, some people in this comment section really seem to think that our brains emit waves capable of traveling hundreds of meters after which they could still be picked up by a sensor and correctly interpreted by an artificial neural network.
For the machine learning, it might be interesting to try LSTMs or GRUs and maybe go into a deep memory network. I am wondering how advanced their machine learning is.
I have seen what they can do with Google's TensorFlow, which is far more advanced. So I think it's just that, probably... You can use it too. Just download TensorFlow and start writing some Python and some GUI. Also, you need a GPU to even get some kind of performance.
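A minimal sketch of the LSTM idea above, in TensorFlow/Keras as this reply suggests. Channel count, window length, sampling rate, and the three image categories are assumptions, and random arrays stand in for real recorded epochs; this is not the video's actual pipeline.

```python
import numpy as np
import tensorflow as tf

N_CHANNELS = 8          # assumed electrode count
WINDOW = 250            # 1 s of samples at an assumed 250 Hz
N_CLASSES = 3           # e.g. face / house / scenery

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, N_CHANNELS)),
    tf.keras.layers.LSTM(64),                        # temporal features over the window
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random arrays stand in for labeled EEG epochs: (trials, time, channels).
X = np.random.randn(600, WINDOW, N_CHANNELS).astype("float32")
y = np.random.randint(0, N_CLASSES, size=600)
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```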
I have a suggestion: you're feeding her the images too fast. Give them 1 to 2 mins for each image if you don't want to cheat next time, then try doing this experiment again.
With 6 months of R&D I see this being extremely useful to police. It should eventually be able to tell between a face you know and one you don't. Imagine how useful it would be if police could just say, "If you've never seen this person before, take this test." Or even when trying to find a criminal, don't ask the person which picture it was; just read their brain so it can't be wrong.
This is so amazing!
This is great. TY TED :)
Parallel thoughts at the time of watching the images provided?? Like, does going to a grocery store after the session affect it?
WoW! Nice! Near future !
But we could see with our eyes the difference between the data from the face picture and the data from the scenery. It was a big difference.
Really cool work, but as you said, I can tell if it's a face or something else just by that big dip... It's not really doing anything beyond what we could do before.
Does the computer have a "time" activity control? I am pretty sure that the human brain knows what is going to be on the screen before it appears...
Why does the "picture ready" block in the bottom left corner flashing differently for each category? I mean that black & white flashing used in the last test to "cheat", for me it seems now that you told the category to the computer with the flashing sequence... Instead of a "start monitor" signal it should be
That is very interesting, sir. Perhaps we can all get together and exchange some knowledge. However, I am a scientist and I am in the process of feeding information into the brain. Thank you
I have seen a better TED video on brain reading; everyone has forgotten about it, it seems.
Now you can make a computer that generates those signals when it sees something with its bionic eyes. A step closer to the digital mind.
One EEG sensor is VERY LOW resolution. It probably measures a massive area of your brain. You need single-neuron signals to even do something like you propose. We can probably do that in a few hundred years.
What would happen if they changed the subject? The waves would probably be different for every different human, wouldn't they?
3 uploads at once
They're using some computer program on me that reads the person's mind, and they're changing my whole face and body. I'm looking for information on the internet about this. Please help me.
This + AI will be crazy
This is what we do..
Congratulations humans....🙉💟💟
EEG says something, but the resolution is VERY LOW. If you want native resolution, you need a connector to each connecting synapse. We have around 80 billion or more neurons, and each neuron has about 7,000 synapse connections, throughout the brain in 3D. And no, EEG will not give you such information about thoughts. But it can give us super-low-resolution information on which part of our brain is active. Using neural networks (technology borrowed from how our brain works) to reverse pattern match, it is possible, but only for simple things like in this video. Converting EEG to a real-time "neural image" is... far from possible.
Damn that's cool
To think that not so long ago we were only hunters and gatherers. The future is increasingly terrifying and dehumanizing, you materialistic, bloody consumer.
why is this age restricted
Someone is using this tech on me without my permission..
How do I stop them?
This is a classic neural network problem - a result and data connected to the result. Just set up the network and train it. Stop trying to preprocess the data. Let the network figure it out. This is trivial compared to identifying cancers from x-ray data.
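A minimal "just set up the network and train it" sketch along the lines this comment suggests, using scikit-learn to keep it short. Epoch shape and labels are assumptions, and random data stands in for real recordings, so the printed accuracy will be near chance here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Random stand-ins for recorded data: 600 epochs, each flattened from
# an assumed (250 samples x 8 channels); labels are three image categories.
X = np.random.randn(600, 250 * 8)
y = np.random.randint(0, 3, size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))  # ~chance on random data
```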
I like Christy's face, I wonder what that would look like.
The Detroit Become Human era isn't that far away now...
I'd like to see them try with my split personality, good luck
Amazing
try colors
Awesome
interesting
Does anybody happen to know why I can't find any information or circuitry schematics regarding the stacked boards used in the (multi-channel?) EEG recording pictured at 1:33? I am assuming that it is probably some variation of their 'Backyard Brains' designs, but they look fairly different in this video... 🤔 I am in desperate need of a cost-effective, multi-channel EEG recorder for my work. Any information would be *greatly* appreciated!
I could code a game with my mind if this is perfected, I hope
Well, you can't read a mind by just reading outer signals that reflect off the outer skin of the head.
That's similar to trying to recognise images on a computer screen by looking at the reflection on a wall in the dark :P
I guess it can be done through nanorobotics that catch and store signals, i.e. like nanorobotic modems, so you can easily put a nano modem on the required signal lines. That way lots of information can be gained. Well, it's partially already done for the deaf or blind: electronic modules that translate audio or video.
“But daaaammn” I feel like this guy hangs out with Bill Nye. That or they should be friends.
Cheers everyone, the age restriction is now removed!!!!! - ☺
Now, you just think of scenery and see if it tells you right or not....... and that would in fact be actual mind reading
Did anyone else think that the girl in the thumbnail WAS the computer?
"there is information". Wait, this experiment did not prove it. 🙄
Is it generalizing or just memorizing?
For now probably generalizing
Sister Irrine?
WTF, I think exactly the thing Mr. Gage says at the end!!!!!! Be careful. (We also need to see that we cannot stop the technology transition.)
0:14 I hear "machines of the future that can read our thoughts", but the translation says "machines that can read our thoughts", even on TED.com. Can anybody help me explain this issue? Thanks.
put EEGs on Kaggle! "Nirvana"'s coming! :)
Someday traffic signs Captcha, someday.
Why am I not getting any recommended vids
Update your YouTube I guess
@@Markcus003 oh this is weird lmao, when I watched it as soon as it got uploaded it wasn't monetized, nor were there recommendations. I thought this was age restricted, but it's fixed now lol
@@ziadahmedsamy sometimes YouTube is broken
Someone is using this tech on me without my permission, how do I stop them?
The difference between a face and the others is already so obvious that you don't need ML for this; try two other types of photos besides faces.
But you narrowed it down to a 50/50 chance. You need to keep it at 4 choices at least
Interesting. I have been working for a while now on something similar but a bit deeper and more complex. I also use an EEG (OpenEEG Cython 8-channel) to monitor brain waves, but I plan to combine it with other biometric and sensory data (EKG, skin resistance, head/eye-tracking, environment audio and video) to improve my cognitive performance and, in the long run, to automate certain cognitive processes.
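One plausible way to combine streams with such different sampling rates is a nearest-timestamp join. The sketch below uses made-up rates and stream names and does not model the OpenEEG hardware or the commenter's actual setup.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

eeg_t = np.arange(0, 10, 1 / 250)          # 250 Hz EEG timestamps (assumed rate)
eeg = pd.DataFrame({"t": eeg_t, "eeg_ch1": rng.standard_normal(eeg_t.size)})

gsr_t = np.arange(0, 10, 1 / 4)            # 4 Hz skin-resistance samples (assumed)
gsr = pd.DataFrame({"t": gsr_t, "gsr": rng.standard_normal(gsr_t.size)})

# Nearest-earlier-timestamp join: each EEG sample carries the latest GSR reading.
merged = pd.merge_asof(eeg, gsr, on="t", direction="backward")
print(merged.head())
```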
OK, so you are picking up data from the output of the body into software. In what way do you throw information back at the body? Sound? Monitor? (I mean, I see our brain as a black box which is nothing without the outside world.)
Oh, the Asian guy is a Korean chiropractor. I learned Chinese medicine history from him. And to study neuroscience you should speak with scientists. Don't make your way with outsiders if you want to find the truth. Every study related to chiropractic failed in Korea. A lot of Korean physicists who studied with chiropractors gave up within a year. Especially if you want to study neuroscience within another department, contact me. I will give you my lecture note that black holes and the brain have the same network processing. And I can give my lecture source too. This is the topic of neuroscience today.
I say "Don't think about elephants..." what do you think about??
Wow this and voice to skull technology and FBI and other law enforcement agencies and volunteers thank you. Thumbs up
yeah, Asimov was ahead of his time, but this probably isn't a place where he was right.
A lot of effort to get about one bit per second channel capacity.
Can the researcher tell if the subject is looking at hotdogs or not hotdogs?
Is that a good idea tho
Cool
Anyone who believes this is a great advancement in technology is also telling you they trust people they don't know, they trust people they cannot control, and they trust their government no matter what they do or say. That is the beginning of the end of a society: when you trust everybody.