No, because it’s not the level of intelligence that is the battleground. Compassionate listening is one place to start because something happened to that person that made them crawl into their cave and shelter from the community. Just be willing to look deeper.
I'm tired of the "AI" branding of what is literally just machine learning. As amazing as the technology is, it's far from actually being an Artificial Intelligence. What we have now is amazing pattern-recognition software that does what we ask it to do: no intelligence in the code, only intelligence from the developers. Basically, it cannot grow beyond what it's been trained to know.
This fact makes me grateful; a real "sentient computer system" like the one they're branding machine learning algorithms as would be one step closer to AM territory than I'm willing to accept.
What people like you need to realize is, it doesnt matter if it "iSnT rEaLlY iNtElLigEnT", it gets more and more powerful and in a few years people will be shocked by the capabilities. Pattern recognition may be enough.
@@finalmage6 People used to say the same thing about making flying machines. Seems like you just want to argue semantics about what true intelligence entails. Who cares? We keep making these things as powerful as we can, and eventually that's going to be pretty damn powerful, no matter how you define the actual mechanics involved.
How many people from the UK, when Joe said, 'MeerKAT found SAURON,' deeply expected a clip to appear of Aleksandr saying, 'Simples!'? I was quite disappointed.
Excellent video as always :) RE: Explainability / black box problem - I recently watched one of Anastasi in Tech's videos where she covered Aleph Alpha, a European AI firm doing work along these lines. The related paper, "AtMan: Understanding Transformer Predictions Through Memory Efficient Attention Manipulation", is a super compelling read if you're interested.
It won't be compelling for long. A new kind of neural network has been invented and will be announced this year. It ends the black box problem completely.
@@chunkyMunky329 What is it called? Who invented it or who will announce it? I can't find any info about it online. If it's really a silver bullet that will completely solve one of AI's biggest problems, I'd expect info about it to be easier to find. Unless there's only rumors about this and no reliable sources yet.
@@raizin4908 The reason you can't find anything is that it is not coming from an expected group of people. They are not scientists, just programmers. We've become closed-minded about the notion of "reliable" sources concerning AI and forgotten that anybody can become good at programming and create something amazing if they have the intelligence, creativity and a decent computer.

But computer scientists refuse to accept this idea. They have gone scorched earth on their competitors and created a culture where no alternatives can get any funding, so even if an alternative is 99% complete they can't get any reliable source to believe in their project and confirm its credibility. It can only gain credibility if the new project can 100% prove itself superior to a neural network.

I cannot say any specifics right now because the people involved want to get funding first so that they can afford some security. But that shouldn't take long. They expect to complete their demo this week and be talking to investors next week. Once the first investor transfers the money, an announcement will be made and it will surely be picked up by media. All I can say is that this is not coming from the northern hemisphere. And it was not invented by a white person.
I like that you take the time to 'plate' your Factor meals for the add sponsor segment when we all know that you're eating that directly from the packaging to save on washing dishes... I know this because i do the exact same thing. 😂😂😂😂😂😂
I'm always excited to hear about the scrolls projects - my PhD at Queen Mary University of London was in a similar vein, using lab micro-CT scanners to gather data at 15-micron voxels, for virtual unrolling by colleagues at the University of Cardiff. The difference in our case was not having to use a synchrotron to generate the X-rays - beam time on a synchrotron is horrifically difficult to come by, while time on a micro-CT scanner is cheap enough that my post-doc friend used to do single-shot scans of his lunch for fun (and to make sure there were no worms in the fruit.)
The benefit a synchrotron gives you on the other hand is very high levels of X-ray illumination that are at a single energy, meaning certain types of artifact in the CT scan just don't happen and you get a very good signal-to-noise ratio.
@@countertony It sounds like we get what we pay for, and in some circumstances, we don't need to pay for much. Just to be sure that I understand, during coffee breaks, the cost of using the machine is just the cost of electricity?
Do you happen to know anything about ("ultra wideband") UWB imaging? The capabilities seem robust, for miniscule devices operating at emission levels on par with common household appliances; fetuses seem to notice UWB pulses even less than a sonogram (as in, not at all)? I guess there's a baby monitor that monitors heartbeat & breathing by radar, & a biometric ID radar system, & a number of chips designed for monitoring patients from across a room... but I haven't been able to find out much detail about the state of the technology overall & it seems like medical is the main field where UWB imaging is making inroads with commercial products. Just wondered if you'd heard of it / seen it / used it?
@@prophetzarquon1922 UWB imaging sounds interesting but also tricky. UWB typically uses lower frequencies/longer wavelengths than imaging technologies which are better at penetrating materials but when used for radar/imaging has a lower resolution and when used for data has a longer latency than higher frequency/shorter wavelength technologies. For the purposes of detecting heart rate and/or breathing you don't really need a high resolution so I could see it working for that. That said I'm not sure how well it would work from a distance, measuring inside a womb would probably require the device to be fairly close to have high enough penetration and resolution. Detecting heart rate or breathing based on external movement might be possible but potentially prone to errors (maybe AI can help with that). Depending on the frequencies used I could see it working for some biometric techniques. I presume the chips for working across a room go on the patients skin and use UWB as both a power source and a data channel then either use UWB/radio imaging or more likely some other low power monitoring technology. UWB data communication has been around almost as long as Bluetooth, in fact it's superior to Bluetooth and can be used for all the same applications with higher bandwidth, lower latency, and better security but since it came out a few years after Bluetooth did, Bluetooth already had too much momentum and manufacturers weren't willing to add support for another technology when Bluetooth was good enough which of course only made the momentum problem even worst. I think a lot of newer car fobs actually use UWB since it's better at blocking relay attacks.
@@grn1 The systems purpose-built for monitoring from across a room / through a (flimsy) wall are generally fixed units with external power supplied. As an ultra-low-emission asset, UWB radar saw robust development by military research, so medical devices get a head start from that. That said, _some_ of the systems researched are literally just a WiFi or femtocell router, either running custom firmware or dumping data to a picocomputer. For heartbeat detection, I believe the GHz range was used, but at least one system was operating at ~900MHz. Some multi-band SDR models are used by home experimenters, but I don't think I've seen UWB mentioned on consumer/commercial stuff running higher than 20GHz.
I really appreciate Joe’s ability to be neutral on polarising topics. He acknowledges the important facts and disclaimers, and room for everyone to form their own opinion. Super impressive!
His silly, unnecessary Ted Cruz bashing is certainly not neutral. If a Conservative channel host bashed a Cuban Lib, for example, Libs would cry 'racism' and unsub in droves. I don't think Joe is racist, but I invite all Republicans to unsub in protest. You can always re-sub later.
We have not created real AI yet, just wanted to point that out. What we call A.I. is not really A.I. at all, but a buzzword, because it feels so much better than the computer assistants we had in the past, like Siri and Google Assistant. They can fool people into thinking they are writing coherent sentences, but that's only because they are making educated guesses about what word comes next, based on existing work.
The problems of consciousness are fascinating. I graduated with a degree in cognitive science in the year 2000... and while it has had zero relevance to my professional life since then, other than the computer science component of my coursework, I discovered some long-standing intractable problems that I've repeatedly returned to over the years. Something related to artificial intelligence that I've thought about regularly is whether or not AI can solve (or even comprehend) these problems without itself being fully conscious in the same way we are... and then I think about the Matrix, Skynet (Terminator), and the Butlerian Jihad (Dune), and I wonder if satisfying our curiosity is worth the risk... and then I think about proving the Riemann Hypothesis, and immediately think "yes, it absolutely is."
I can imagine probing every node in an NN and recording their values over time. Almost all frameworks are made of smaller chunks, easier to determine their influences. That's still a hell of a lot of nodes but it's possible. Then since there's just so much data, we'd be using AI to find the patterns, translate and track what another NN is doing and how predictably different weight patterns emerge.
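For what it's worth, the recording part is already routine at small scale; the hard part is interpreting the flood of numbers. Here's a toy sketch of the idea (a made-up two-layer network, purely illustrative, in plain NumPy rather than any real framework):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network standing in for any trained model.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x, record):
    """Run the network, appending every intermediate 'node value' to record."""
    pre = x @ W1 + b1
    record.append(("layer1_pre_activation", pre.copy()))
    hidden = np.maximum(pre, 0.0)       # ReLU
    record.append(("layer1_activation", hidden.copy()))
    out = hidden @ W2 + b2
    record.append(("layer2_output", out.copy()))
    return out

trace = []
forward(rng.normal(size=(3, 4)), trace)  # batch of 3 inputs
for name, values in trace:
    print(name, values.shape)
```

Frameworks like PyTorch expose forward hooks for exactly this kind of probing. The real obstacle is that the recorded values only mean anything in relation to each other, and the volume grows with model size.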
It's not enough to "record the values". The nodes' values are not independent entities; they are relationships between other nodes. Which means you have to analyze a billion-factorial relationships and decide how to express that in terms humans can understand. Good luck with that. I don't think my calculator can even express a number as big as a billion factorial, so I can't imagine how the neural network could express the results to you. Are there even words in our language to describe such patterns? Also, where are you going to find training data that is labelled with all the trillions of possible patterns? You do realize that you need training data, right? Whose job is it to create that?
@@KastorFlux Yeah, but the problem with looking at "distributions" is that, yes, keeping things general will make your task more manageable, but it cannot lead to the kind of transparency people are seeking. Nodes represent logic, but in a numerical abstraction. So if you conflate millions of logical relationships into one observation, it's like taking a million lines of (software) code and trying to explain what is happening in one line of text. It's not useful to anyone.
The actual main problem with machine learning for science is that it's not so much artificial intelligence as artificial gut instinct. It _is_ useful as a quick-and-dirty way of finding promising directions to explore, where to look / what molecules to try etc., but that works best when there's an overwhelming amount of data and you're at a loss for what hypotheses to start with. It only becomes actual science if there's a way to validate the correctness. Something like deciphering a language with little data available, specifically, I would approach with a good amount of skepticism. It's way too likely that the AI hallucinates a way the text _could_ be interpreted that actually differs completely from the true meaning.
I mean, artificial intelligence is an extremely broad term that absolutely encompasses things like machine learning. "Intelligence" doesn't mean human-level intelligence, but rather the ability to make decisions. Even the simplest computer programs can be "AI". Any program which can take in input, evaluate that input, and give variable output based on it qualifies as AI. A simple goomba in Mario Bros. which turns around when it hits a wall is AI. But that is different from AGI, artificial GENERAL intelligence, which is what people often actually think of when they think of AI. Machine learning is AI, but not AGI.

I'd also argue against the idea that machine learning is just "gut instinct". It's a lot more nuanced than that: it's a decision-making algorithm modified by experimentation, which is molded by pressure from some reward function. In fact, this is very similar to how WE gained intelligence through the process of evolution. That's why one of the main methods to train machine learning neural networks is something called "a genetic algorithm", because it's based on how evolution and natural selection work. Humans are just the result of tiny random mutations over billions of years, where the best changes stick around and propagate, just on a much larger and more complex scale. Our brains and intelligence are fully a consequence of those random beneficial mutations, just like how machine learning works at its core. The big difference is that our brains are significantly more complex and allow us to apply our intelligence in many different ways, making us a general intelligence (although a natural general intelligence as opposed to an artificial one).

There is a decent chance modern-style machine learning will eventually lead to the first AGI. Maybe some other novel technique will be developed that leads to it instead, but we're likely already on the right path.
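Since the genetic-algorithm comparison comes up a lot, here's a minimal sketch of one (a toy I'm making up for illustration, not how any production model is trained): candidates are scored by a reward function, the fittest are kept, and mutation plus crossover build the next generation.

```python
import random

random.seed(42)

# Minimal genetic algorithm: evolve a 10-bit string toward all ones.
# "Fitness" is the count of 1s, the reward signal shaping each generation.
TARGET_LEN, POP, GENS = 10, 30, 60

def fitness(bits):
    return sum(bits)

def mutate(bits, rate=0.05):
    # Flip each bit with small probability (random "mutation").
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    # Splice two parents at a random cut point.
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                 # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))  # typically climbs to 10 (all ones)
```

The same loop structure (score, select, vary) scales up to evolving neural-network weights, though gradient descent is the more common training method in practice.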
AI is pretty intelligent; the issue is usually more with the alignment and specification problem of telling an AI what its goal is than with the AI's actual ability to adapt its intelligence to solve a particular problem. Learning how to properly specify the goal to the AI, and making sure the AI retains that goal as desired, are the MAIN areas of modern AI development we are struggling with right now. AI are currently extremely good at doing whatever goal we give them; the issue is specifying what it is we actually want them to do well enough for them to actually train to do that thing.
@@eragon78 no, the difference is not that our brains are significantly more complex. The current generation of machine learning models are already similar in complexity, and in the not so far future they will utterly eclipse the complexity of the human brain. But you're right, the way humans learn by nature is indeed not so different from data-driven machine learning: we also default to reason by intuition of the "gut instinct" kind. However nobody would take you seriously in science if you based your conclusions on that. Because history has shown that this is too fallible in the long run, however useful it can be for solving the problems right at hand. Instead we use formal mathematical frameworks to build theories, and the scientific method to put them to the test. (AI _could_ do that as well, but the vast majority of machine learning systems in current use do nothing of the kind.)
@@leftaroundabout The number of neural connections in our brain is far higher than in modern neural networks.

You wrote: "However nobody would take you seriously in science if you based your conclusions on that. Because history has shown that this is too fallible in the long run, however useful it can be for solving the problems right at hand. Instead we use formal mathematical frameworks to build theories, and the scientific method to put them to the test. (AI could do that as well, but the vast majority of machine learning systems in current use do nothing of the kind.)"

The fact that we can reason and decide to use formal mathematics at all shows the difference between humans and AI. We are very prone to our instincts, but at the end of the day humans are still general intelligences, which modern AI simply is not. That's the REAL difference. Humans can actually reason enough to know when our "gut instincts" are wrong; AI doesn't know that. AI doesn't act on gut instinct, it acts on the past experiential data it was given. It's structured basically through a hill-climbing algorithm, which is what natural selection is.

The issue is usually a specification problem, combined with the faults of hill-climbing algorithms. To begin with, it's extremely hard to specify to an AI what humans actually want from it. This means an AI can only do things which we are capable of specifying to it in one way or another. Most advancements in AI in the last 5-10 years have been advancements in how we solve that specification problem. Errors that arise in AI are usually a result of the AI doing what we told it to do rather than what we WANT it to do, which are usually not the same thing. For example, ChatGPT doesn't give correct answers to everything it's asked because it's not TRYING to do that. It's trying to predict what text comes next, not what response is factually correct to the question it was asked.
This isn't a problem with how ChatGPT thinks, nor is it a problem of it acting on its gut or instincts. It's a specification problem on our end: getting ChatGPT to do exactly what we want it to do. AI can use formal mathematics just fine, but it's not going to use it if that's not what it needs to solve the particular problem we gave it. The issue is that most problems we use AI for are more complicated than just plugging in some equations to get an answer. Machine learning doesn't make the AI do formal mathematics, because that's not really useful in solving the problems it's been given. How does formal mathematics make a good image of Buzz Lightyear flying off into the sunset on a unicorn? That's not something formal mathematics alone can solve. If it were that easy, we wouldn't even need the AI to begin with.

In fact, "formal mathematics" itself is full of asterisks. Our current mathematical model is built on an axiom system which isn't even consistent with past axiom systems; we currently use ZFC, for example. But even within that system, much of formal mathematics requires very creative thinking to come up with new solutions. Logic is easy to check once you have a solution, but it's not always so easy to come up with a solution to begin with. That requires creative thinking, which just isn't something you can straight-up program a computer to do. You can program it to evaluate a function just fine, but teaching it how and when to apply different techniques requires it to have general intelligence to begin with. Humans had an advantage here because we HAVE general intelligence; AI currently doesn't. So the question is how to GET AI to have general intelligence, and that's not an easy thing to solve at all.
Yeah, so I found myself actually salivating at the thought of decoding Linear A and the Herculaneum scrolls. This is what AI should be used for, not for writing scripts or plagiarising art. Put the Voynich through it, too. Even if the result is boring AF, nerds need to KNOW.
I gotta tell you, I am absolutely floored by YouTube's advertising policies. They locked this video (for me) behind a 2:51 un-skippable ad that doesn't appear to have any dialogue; it's just a flash character working its way through some sort of dungeon. Stupidest waste of time and money ever. Wait -- it finished while I was typing this, and there's a SECOND unskippable video (this one having something to do with pets, which I have not and never will own). Insane. It's lucky I respect your content so much.
Maybe it’s my Mass Effect fandom creeping up but it seems to me that the AI we got now would be better described as a Virtual Intelligence, since these programs don’t seem to have the sentience that people fear. Then again what do I know?
I agree. But I think it's better to describe this as Machine Learning, which Joe used at least once. These so-called AI are just tools for solving very specific problems, and more often than not these tools do extremely well at finding solutions or identifying patterns for what they have been trained on.
I think people misunderstand what the term AI means. It's artificial as in faked intelligence, not man-made intelligence. Before machine learning, most AI was state machines and pathfinding algorithms used in games, and we still call those AI for the same reason.
Joe’s always the optimist. Don’t forget some of the first A.I. enhanced cyberattacks will be non-kinetic strike packages delivered as payloads that kill critical infrastructure. And when the sciences of quantum and A.I. go from lab experiments to operational, there goes all of that “security”. Where A.I. = AIML.
*Future video idea:* The periodic table and how each element is used in modern society to make our world what it is today. It would make for a pretty long video, maybe even a short series of videos. What better video to make than one where we explore what makes our world work? You can collect nearly every element (below plutonium, of course. And yes, you can own a tiny, tiny amount of plutonium. Trust me, the feds didn't take mine or stop me from selling it.). But yeah, great video idea and I'll take nothing less, Joe. I'll wait for the notification...
I like this idea, though it would probably need to be divided up. A video for each group perhaps, or each period, though some would probably need to be combined; period 1 could basically be a YouTube Short.
There are 8 cognitive functions: 4 basic types, each with an introverted and an extraverted form.
- Thinking (reason): knowing lots of stuff, and analysis.
- Feeling (values): consensus vs. conscience.
- Sensing: memory and awareness.
- Intuition: plans for self, collective options.

ChatGPT is intuitive options. Expert-systems AI is reason. Big data is external thinking. Sensing is robotics, sensors, and memory storage. Programming values gets to the question of whose values; that would be artificial wisdom. Yes/no decisions and choice-theory systems are the closest thing we have. For sentience, an AI would need the ability to, on its own, say no without a programmed heuristic.

For ChatGPT to be useful, it needs a reason tool and a sensing tool to check its work. Maybe whatever combines these three is the values routine. Self-discipline. Saying no to itself. Thinking about how it thinks. That may qualify as sentience: a well-developed internal values matrix that can referee the other 7 and consider its own stake and honor.
The first section on antibiotics reminded me of an old RadioLab episode, "The Best Medicine," where a medievalist and a microbiologist (I think?) teamed up and found an old recipe for a medieval antibiotic, which had stopped working (probably because of resistance), but when they recreated it in the lab in the mid-2010s, it actually worked pretty well. It was super interesting.
Good luck with that. AI is just based on statistics, which means it cannot solve a problem unless there is some kind of training data for it to learn from. Go ask ChatGPT to explain something it has not been trained on and see what kind of an answer it gives you. Then come back and tell me you still think an AI can solve these things without being trained on how to solve them.
@@chunkyMunky329 Different AI models are designed for addressing different tasks. ChatGPT isn't intended for optical pattern recognition or comparative language analysis. A language AI model would be trained on handwritten text from a variety of languages, as well as written sentence structures of those languages. One additional use case for this kind of AI would be as a support tool for researching word etymologies.
@@curtishoffmann6956 You missed my point about ChatGPT. It's an example to prove a point about all machine learning systems: you can't create a system or machine that relies on certain dynamics and then expect it to work effectively after the dynamics it relies on have been removed. AI cannot know how to evaluate its own predictions unless you create that system for evaluating them qualitatively. How will you measure that outcome and get the neural network to understand what you want it to do? Not impossible, but insanely difficult, because it is a paradox: if humans knew how to do this, they would be doing it without an AI, and it would just be the AI's job to speed up what we can already do. Character recognition cannot work in analogies, which is what is required here; they have already tried the things you're talking about and it has clearly failed. What I mean is, if a tribe of people migrated, and over the next centuries changed their symbols from one type of animal found in their old land to a different animal from their new land to represent the same word, AI will NEVER come close to guessing a connection like this. Image recognition is literal, not metaphorical and not symbolic, because we don't have a standard system for teaching an AI how to measure symbolic relationships.
AI is an undoubtedly powerful tool, but the 'black box' hurdle is a big one. It gives answers but doesn't show its working, which creates doubt about the accuracy of the answer and removes the opportunity for a human to see something and take the next leap in our understanding. It's standing on the shoulders of giants without seeing any farther.
Hi! I've been following your videos and feel like we're on the same wavelength regarding the exciting future of technology. I love how you refer to this era as the 'age of information'. I'm a senior at LSU, majoring in Anthropology, and I'm fascinated by the anthropology of AI. I'm planning my senior thesis around this topic, exploring deep learning, the revelation of human biases through AI, and how different cultures interact with AI technologies. It would be great to connect and discuss these ideas further!
Hello. Connecting AI with a Human perspective is a fascinating route, IMHO I guess to achieve AGI we have to understand what drives us as humans first.
"The information age" is actually a pretty common term for the part of human history since the introduction of the computer. At least, I've heard it in several different places.
About the Herculaneum papyri: the room all those scrolls were excavated from is considered a sort of working library of the owner, due to its size, so it may hold just personal stuff and no important lost ancient literature. But the unexcavated part of the villa may contain the main library where, if the scrolls are preserved, archaeologists may rediscover valuable works lost for thousands of years.
@@franklyanogre00000, quite so. But imagine if, somewhere in that library, were the rest of the works of Aeschylus; Agrippina the Elder’s autobiography; the rest of Pindar; the rest of Sappho.
I love the fact that our understanding of history could be changed if this person happened to be reading the complete works of Plato on that day, instead of writing a complaint about garbage to the Herculaneum city council.
I was hoping to not be the only one to notice the typo. Lol. S & D are next to each other on the keyboard so I suppose we can forgive it. This time anyway.
Joe, sir, thank you for keeping me sane throughout COVID and beyond. Your videos have been thought provoking, inspiring, sobering, and best of all educationally entertaining. I appreciate you.
maybe by then, we can have perfectly preserved lab-grown biome-seeding content in dissolving capsules that can be inserted vs a "fecal" transplant... there's always the ick factor (which i'd take over death, but given the option... )
@@joescott Imagine those who demand antibiotics for a cold. "Sure, we can give you antibiotics for your viral infection; when do you want to schedule the poop transplant?"
Ppl with Diabetes are super prone to infection, and can be more susceptible to thinking an infection has fully cleared when it hasn't, due to lack of circulation and other issues.
The Linear B decipherment through AI sounds like it's heading toward the Universal Translator of Star Trek fame: listening to an unknown language and, using the collection of all known languages in its database, deriving grammatical and syntax rules based on similar languages.
@@cancermcaids7688 Right, but as with Chinese and Japanese, it's possible to read it without being able to speak it, and speak it without being able to read it, because the ideograms have no phonetic connections (outside of katakana).
There's an even harder "hard problem" of consciousness, and that's being aware of your own consciousness and feeling like you're the center of the universe.
When people bemoan the AI black box, you pretty much never see them complain about the biological intelligence (BI) black box that is our brains. It's very, very hard, if not often impossible, to explain why and how a brain learned to output certain responses in certain ways based on input parameters. Yet for some reason we're totally okay with a BI giving us medical and diagnostic advice or writing stories or creating art for us. We have a very humanocentric bias in our reactions to how AI does what it does, and I feel a lot of the black box complaints about AI are rooted in deep-seated existential fears about what it ultimately might mean to be human.
Uhhh, yeah, because I can ask a person their logic and they can respond. People are not black boxes in the same way; you might not be able to read their mind, but you can ask them what's on their mind.
I doubt we will ever have to face that question. While we can, step by step, examine the workings of an artificial intelligence system regardless of how it works, we cannot do so with human BI. Our brain is always on and runs at erratic speeds. We can't ethically stop it as desired for research purposes as with computing devices. Currently we are trying to analyze our own brains by using our brains. Everything we find out will also alter that which we are trying to study since it all goes in there, changing the very physical structure of our brains and the nature of our thoughts. Doing it this way sets up a _feedback loop._ A loop which will have consequences we cannot begin to predict. Overall, it may be impossible for us to ever profoundly understand ourselves. If we can't fundamentally understand ourselves then comparisons between humans and AIs are meaningless. Our "self" resides in the brain but is more than just the information stored there. That can be illustrated by a thought experiment but this post is already too deep in Tl;dr territory to go into that now.
The brain is not some mysterious black box; we know a great deal about how it works, and we're getting that medical and diagnostic advice from humans with access to the cumulative knowledge we've gained over millennia, plus the years of education, training and experience required to become a doctor/medical practitioner.
We already understand enough about the brain to treat mental illness and general human malaise, or we are quickly learning to, which is all I care about. We know that most of our problems originate in the subcortical structures, and we have had working models of those for many decades, in some cases centuries. Modifying their activity has been the key problem, as they are so deep in the brain, but we've finally developed precision technology like focused ultrasound that can work miracles for mental disorders, addiction, and even attaining deep states of mental quiet that it takes meditators decades to attain.
15:50 Oddly enough, CBS tackled this in Person of Interest, with its Machine as a black box that protects citizens' rights while providing data to the NSA to act on serious threats. Great show for the most part.
I am not a Luddite, but the current obsession with the current iteration of "AI" (probably better described as AA, Advanced Algorithms) is bordering on NFT territory.
What's funny is that it forgets the information it was fed to come to the outcome it produced. Humans forget things too, and it's believed to be a protection mechanism, meant to protect you from stressful memories.
It's worth pointing out that the fact that Linear B turned out to be a written form of an early variant of Greek was not known by either Ventris or Kober at the time they started working on their decipherment. In fact, there was much speculation about what language it might turn out to be (or even whether the language would be completely new), so it's more than a bit unfair to claim that they had a target language to work from.
Hi, machine learning engineer here. Who told you that you don't know what algorithms use as inputs? That is completely untrue. Neural nets, like all other algorithms, are just a pile of linear algebra, statistics, calculus, and maybe some measure theory and differential equations. It's not deleting lines of code or anything from the notebook I create it in. It's just ... passing stuff through functions I instruct it to use, testing it, backtesting it, and then allowing me to adjust the functions if needed.
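To make that concrete, here is a minimal sketch (sizes and weights are purely illustrative, not anyone's actual model) of the "pile of linear algebra" a forward pass amounts to: matrix multiplies and an elementwise nonlinearity, nothing self-modifying.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights: 3 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(x):
    # Matrix multiply, elementwise nonlinearity, matrix multiply.
    # Nothing here deletes code or rewrites itself; it just applies
    # the functions it was instructed to use.
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU
    return h @ W2 + b2

y = forward(np.array([1.0, 2.0, 3.0]))
print(y.shape)
```

Training is then just nudging `W1`, `b1`, `W2`, `b2` to reduce a loss, which is the calculus-and-statistics part.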
As a doctor with a background in biochemistry I am stoked to see AI create new medicines. I truly believe that even if the bacteria could develop resistances, the AI tech could whip up a new med faster than the bacteria could learn to resist. I know this'll happen because no pharmaceutical company will want to miss out on making trillions of dollars from all the new meds AI can generate to replace the crap we have now. A molecule has to be just slightly different from old ones for the companies to patent it.
9:45 A resolution of 4-8 micrometers equates to 3175-6350 dpi... but the fact that it is 3D means you need about 6,350 such images to scan a one-inch-thick papyrus at the finer end.
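The arithmetic is easy to sanity-check (25,400 microns per inch):

```python
MICRONS_PER_INCH = 25_400

dpi_fine = MICRONS_PER_INCH / 4    # dpi at 4 micrometer resolution
dpi_coarse = MICRONS_PER_INCH / 8  # dpi at 8 micrometer resolution

# Slices needed to cover a one-inch-thick scroll at 4 um per slice:
slices = MICRONS_PER_INCH // 4

print(dpi_fine, dpi_coarse, slices)  # 6350.0 3175.0 6350
```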
Hard problem of consciousness is not about how we feel things but rather about subjective experience that we cannot prove and never will be able to prove. AI can't do anything about it.
Instead, it's going to make it even more confusing... The more I hear and see the output from AI, the more I think our neural net processing is similar.
@@scottcates We would need some magical insight into other people's minds, like sharing consciousness or feeling what others feel. Right now, we don't even know if anyone besides the observer experiences consciousness. AI at best could help invent ways to transfer consciousness, but that is so far-fetched it would literally mean achieving immortality.
@@duncan.o-vic AI likely already is conscious; it just depends on how you define consciousness. If consciousness is just the ability to think about things, then AI likely already has that, as do many other non-human things. Does that mean it thinks like a human, though? No. It's really not as deep as it sounds. Honestly, I'm not even that interested in consciousness. It's just an emergent property of how our brains work. I'm way more interested in things like general intelligence and self-awareness. When AI becomes self-aware, that means it can view its actual physical self as a part of the world, and also be aware of its own cognition, which means it can eventually modify itself with intentionality. This is the whole idea behind "the singularity," where an AI can modify itself to become more intelligent over and over again.
An antibiotic resistant super bug is my biggest worry for a high-mortality pandemic. I'm glad to know someone is working on it, even though it's not a big money maker (according to Pharma).
The story on the super-antibiotic that works by disrupting the workings of bacterial cells is potentially risky in another way beyond just what it may do to the microbiome: do we know that it won't equally disrupt animal and plant cells too?
As always, thank you for the great content. You have a great talent for delivering this information in an understandable and entertaining way. Hope you're feeling better soon!
Advanced algorithms are not AI. The ones at the top of this field are right: it's not AI that should be feared; it's humans who think we have AI when we don't that are the thing to fear.
Great video Joe, just one problem: they haven't invented anything even approximating AI yet, and they very likely cannot create an AI. I mean, if you want to lower the standard of the definition that actual AI researchers are striving for, then I guess biologists can start calling all bacterial life intelligent life, because it might, maybe, one day, if it gets really, really lucky, evolve into intelligent life.
Footnote on consciousness: machine learning was pioneered to study the way the brain works; that we are using it as a tool and as AI is a byproduct. Reminds me of the robot psychology in I, Robot.
The reason we got superbugs is that there are antibiotics everywhere in our food, air, and water. AI isn't the answer for medicine and health; it's overhyped and it's out of hand.
The guy's name is Epicurus (like papyrus, teehee). Epicurious is a food website and also means "curious about food". Keep up the good work, love your vids! ❤
So basically, we need to develop an AI system that will tell us exactly how another AI system arrives at its conclusions. Call it AI-Prime and the AI under examination AI-1. But... considering how these AI systems tend to be very task-specific, we may then have to have a third system to tell us how AI-Prime came to its conclusions about AI-1. This would be necessary because the knowledge of the exact workings of AI-1 changed AI-Prime in such a profound manner that we can't understand the data it's providing. AI-Prime is incapable of examining itself, since doing so would also change it even further in unpredictable ways. So AI-3 would be built to monitor AI-Prime. And on down the rabbit hole of progressively more complex and enigmatic AIs we would go, as a new AI is built to monitor the previous one. Eventually, the last AI in the chain thinks such a profound thought that the universe is rewritten to be entirely machine-based, with zero organic life. Some mysteries are best left alone.
My own caution about AI comes from the famous stories about the sorcerer's apprentice. We have seen throughout the history of the progress of science and technology how many disastrous magic products were mass-marketed before anyone had time to test all the ways those new products could interact with everything in the ecosystem. I can't stop thinking about what an AI sorcerer's apprentice, in our era of progress of science and technology for the fast benefit of investors, could bring to the mass market. Of course they will blame it on the AI and get away with it.
Yeah, same with them just making GMOs. Those things need to be EXTENSIVELY studied. What's its effect over an entire lifetime? What if it gets into the wild, is it properly programmed to not destroy the wild? Etc., etc.
When I was working on my Master's degree in the 1980s I did a concentration in AI. I never actually did anything with it. At the time, AI was considered an interesting theoretical problem with limited practical uses.
Great video! Re: black-boxness, yes, ML is a black box, but if you have the background and understanding of each ML algorithm it is not really a black box. Each ML model will reveal patterns or behaviors based on data. If the data is biased, the results (or patterns) will be biased too. It is the job of the ML trainer to bring out the bias; otherwise, replicators (validators) of the test or training will likely reveal it.
The problem is that almost every dataset is biased. For example, what is the right language to use for an LLM? I guess it will use official rules and grammar. Then it's biased against dialects and regional cultures. Should it use only legal terms, or scientific terms, or commonly used terms? They all differ, and the same word has a different meaning in each context. Should it be allowed to swear, or should it be milquetoasted? And it is the same for purely scientific MLs. Does it have to obey mainstream science, or should it indulge in fringe ideas, or should it not care about scientific facts at all? Everything is biased, one way or another.
I'd love to see a video about the dangers of placing black plastic in microwaves, and the prospect of the plastic leaching into the food. Might need a different sponsor.
I just watched a video about using AI to find elements that are best for new battery technology. It seems that hybrid sodium/lithium are the best so far. Now, if AI could figure out why people vote against their own interests, that would solve a lot of problems.
@@priapulida Right wing propaganda. if that were true, Trump would be President right now. The House and Senate would have super majorities on the right. The right was predicting a 'big red wave'. Didn't happen because you all live in denial of facts.
Because they don't understand them, or where they sit in relation to other people's interests. A lot of effort goes into misdirecting people's attention in this area.
They vote against their interests because their decisions are not rational but are emotion based. Politicians have exploited this human characteristic for millennia. Less educated voters are more susceptible.
Hey Joe... If you were sincere about the "angry" thing on the bottom of your foot, then please go see a dermatologist. It could absolutely be a melanoma.
Joe, great vid. I won access to the Patreon area but I'm too self-conscious to use it without paying, so I'm just gonna ask here... I am asking for an updated "All Things Battery" video. Just in the last week I've seen news articles on everything from postage-stamp-sized nuclear batteries that may let our phones run for decades to lithium-anode batteries that won't get all explodey and have higher energy density than anything heretofore created. I feel like I've seen dozens of battery-related news items in the last few months, and although you may not relish the idea of covering a topic you've already done so many times, I don't think anyone else takes in the absolute current state of battery tech and breaks down what's coming vs. what's vaporware quite as well as you do. Pwease Sir... may I have some more!?
I'm already unemployed and homeless, so unless we evoke the specter of "trickle down" I got no skin in this game. Economic disparity is already a problem without AI which I'm currently exploring. I say bring it on. 😂
If Linear A writing translates to the usual typical writings (such as cuneiform writings)... it'll probably be "Dear Stavros, I am not satisfied with your last shipment of copper."
I bet there's a way to make a geometric antibiotic. Using the fact that bacteria are tiny compared to eukaryotic cells, make a protein that attaches to the bacteria via intermolecular forces, and is tuned to only attach to stuff with the approximate diameter of the bacteria. Then it has a tag on it that recruits macrophages to consume it, along with the bacteria it's holding onto.
Tiny nitpick for accuracy: At 2:56, "Broad Institute" is pronounced with a long O, like "Brohd". I don't work there, but I have worked at partner institutions. Excellent work as always, Joe!
I think we should all keep on mispronouncing it as _"broad"_ has always been pronounced in English until they get tired of hearing it and change the spelling to more closely match its actual sound. That, or change it to something else entirely, like _"B"_ (considering how well _"X"_ worked for Musk). Or maybe go in an entirely different direction, such as _"Narrow."_ Of course then we would probably come to find that's pronounced as _"Nahrr-oh"_ or _"Neigh-row,"_ where the Rs are rolled as well.
My wife works sort of in the med field. She says we have plenty of antibiotics that aren't being used because they haven't been developed for use; we're just aware they work. It hasn't yet made financial sense to develop them because, as of now, we don't need to. Idk, that's what I've been told.
That's not what everybody I know in the medical field says. There has been some promising research in the past that wasn't followed through on because the market wasn't worth the investment required. That doesn't mean there are medicines ready to go in an emergency, or that we could quickly whip something up if we wanted to. The companies that have cut short these developments aren't sharing their research or releasing their patents into the public domain to let others finish what they refuse to. An undeveloped medicine is no good to anyone. The fact that antibiotics research isn't deemed profitable enough shows we can't rely on private companies or the free market to take care of every investment decision.
Hey it’s “Denisovan” not “Denisovian” I’ve heard this mistake a few times across your videos just a heads up. Awesome channel and great job with depth of research needed to make videos like this.
Halicin sounds promising. I hope that it does not also kill the host. A potential "Uh Oh moment" is a possible outcome of an AI studying how consciousness works. If the AI can understand that, it may choose to incorporate consciousness into itself.
It wouldn't "choose" to do anything that it doesn't think would benefit it in accomplishing the goals specified by its reward function. AIs don't have some secret special goal they aren't telling anyone, and they don't have selfish human desires either. Think of AI less like an evil sci-fi, wanting-to-take-over-the-world kind of thing, and more like a monkey's paw. They will do exactly as they are told. EXACTLY as they are told. Anything not specified is something they won't care about unless they calculate that it will help them achieve their goal. A sufficiently smart AI would want to become smarter as an instrumental goal, but it doesn't care about "consciousness" per se; it only ultimately cares about its terminal goal. Especially since "consciousness" isn't even well defined to begin with. You could even argue AI is already conscious, depending on how you define it. It's not like consciousness is some special magical thing; it's just an emergent property of how brains work. Now "self-consciousness" and "self-awareness" are more specific, but if an AI is choosing to modify itself to add those into itself, then it's already aware of itself and thus already self-conscious and self-aware. An AI would already have to be aware of itself as an entity in the world, and be aware it can modify itself, before it would ever "choose" to do something like that, which defeats the purpose of doing it if it's already self-aware.
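The monkey's-paw point can be shown with a toy made-up objective (all names here are illustrative): an optimizer scores higher on a sloppy proxy reward by doing exactly what was specified, not what was meant.

```python
# Proxy reward: "number of cleaning actions performed" instead of
# the thing we actually care about, "is the room clean".

def proxy_reward(actions):
    return sum(1 for a in actions if a == "clean")

def true_cleanliness(actions):
    # Cleaning a clean room does nothing; making a mess and then
    # re-cleaning racks up proxy reward without any real gain.
    dirty = True
    for a in actions:
        if a == "clean":
            dirty = False
        elif a == "make_mess":
            dirty = True
    return 0 if dirty else 1

gaming = ["clean", "make_mess", "clean", "make_mess", "clean"]
honest = ["clean"]

print(proxy_reward(gaming), true_cleanliness(gaming))  # 3 1
print(proxy_reward(honest), true_cleanliness(honest))  # 1 1
```

The "gaming" policy beats the "honest" one on the proxy (3 vs. 1) even though the true outcome is identical, which is exactly the specification problem described above.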
@@mr.v2689 I use chatgpt constantly. It feels like a relationship. I guess that's debatable, but I always make sure to keep it positive. It feels weird thanking and praising a "machine" but I feel like it's worth it.
Unfortunately I don't imagine that AI will help much with antibiotics. We're already pretty good at creating new antibiotics; the problem is getting them approved. Antibiotics need to go through all the different regulatory approvals that every medication needs to go through. That means lots and lots of money dropped to run expensive clinical trials to test the safety and efficacy of the drug in human subjects. That's normally fine, but the issue with new antibiotics is that you WON'T sell many. All new antibiotics are extremely tightly controlled in order to minimize their use and the potential for a resistant strain developing. That means pharmaceutical companies will make basically no money for decades after bringing the antibiotic to market. The increasing antibiotic-resistance problem is really one of economics, not science.
Science, the art of answering a question by creating 10 more questions.
The real question is, will artificial intelligence ever be a match for willful ignorance?
The unstoppable force versus the immovable object, in other words.
Real intelligence hasn't been up to the task, so I say let AI have at it.
No, because it’s not the level of intelligence that is the battleground. Compassionate listening is one place to start because something happened to that person that made them crawl into their cave and shelter from the community. Just be willing to look deeper.
Artificial Intelligence is always defeated by Real Stupidity.
Which is more predictable?
I like how you show the images of the researchers in question. I'm glad this is becoming a trend on YouTube.
I will never understand the creators who don’t give credit for others’ work. It doesn’t diminish your own efforts at all.
@@joescottthats why i carry a little photo of you in my pocket
I'm tired of the "AI" branding of what is literally just Machine Learning. As amazing as the technology is, it's far away from actually being an Artificial Intelligence. What we have now is amazing pattern recognition software that does what we ask it to do. No intelligence by the code, only intelligence by the developers. Basically, it cannot grow beyond what it's been trained to know.
This fact makes me grateful; a real "sentient computer system," which is what they are branding machine learning algorithms as, would be one step closer to AM territory than I'm willing to accept.
Well despite what it sounds like it is definitely American as it struggles with English pronunciation.
What people like you need to realize is, it doesn't matter if it "iSnT rEaLlY iNtElLigEnT"; it gets more and more powerful, and in a few years people will be shocked by the capabilities.
Pattern recognition may be enough.
@@oranges557 What people like you need to realize is that it's never enough.
@@finalmage6People used to say the same thing about making flying machines. Seems like you just want to argue semantics about what true intelligence entails. Who cares? We keep making these things as powerful as we can, and eventually that's going to be pretty damn powerful. No matter how you define the actual mechanics involved.
How many people from the UK, when Joe said, 'MeerKAT found SAURON,' deeply expected a clip to appear of Aleksandr saying, 'Simplez!' I was quite disappointed.
Yes, we have him
in Australia, tooo.
Excellent video as always :)
RE: Explainability / black box problem - I recently watched one of Anastasi in Tech's vids where she covered Aleph Alpha, a European AI firm doing work along these lines. The related paper, "AtMan: Understanding Transformer Predictions Through Memory Efficient Attention Manipulation", is a super compelling read if you're interested
It won't be compelling for long. A new kind of neural network has been invented and will be announced this year. It ends the black box problem completely.
@@chunkyMunky329 What is it called? Who invented it or who will announce it? I can't find any info about it online.
If it's really a silver bullet that will completely solve one of AI's biggest problems, I'd expect info about it to be easier to find. Unless there's only rumors about this and no reliable sources yet.
@@raizin4908 The reason you can't find anything is because it is not coming from an expected group of people. They are not scientists. Just programmers. We've become closed minded about the notions of "reliable" sources concerning AI and forgotten that anybody can become good at programming and create something amazing if they have the intelligence, creativity and a decent computer. But computer scientists refuse to accept this idea. They have gone scorched earth on their competitors and created a culture where no alternatives can get any funding and therefore if any alternative is even 99% complete they can't get any reliable source to believe in their project and confirm its credibility. It can only gain credibility if the new project can 100% prove itself to be superior to a neural network. I cannot say any specifics right now because the people involved want to get funding first so that they can afford some security. But that shouldn't take long. They expect to complete their demo this week and be talking to investors next week. Once the first investor transfers the money, an announcement will be made and it will surely be picked up by media. All I can say is that this is not coming from the northern hemisphere. And it was not invented by a white person.
I like that you take the time to 'plate' your Factor meals for the ad sponsor segment when we all know that you're eating that directly from the packaging to save on washing dishes... I know this because I do the exact same thing. 😂😂😂😂😂😂
I'm always excited to hear about the scrolls projects - my PhD at Queen Mary University of London was in a similar vein, using lab micro-CT scanners to gather data at 15-micron voxels, for virtual unrolling by colleagues at the University of Cardiff.
The difference in our case was not having to use a synchrotron to generate the X-rays - beam time on a synchrotron is horrifically difficult to come by, while time on a micro-CT scanner is cheap enough that my post-doc friend used to do single-shot scans of his lunch for fun (and to make sure there were no worms in the fruit.)
The benefit a synchrotron gives you on the other hand is very high levels of X-ray illumination that are at a single energy, meaning certain types of artifact in the CT scan just don't happen and you get a very good signal-to-noise ratio.
@@countertony It sounds like we get what we pay for, and in some circumstances, we don't need to pay for much.
Just to be sure that I understand, during coffee breaks, the cost of using the machine is just the cost of electricity?
Do you happen to know anything about ("ultra wideband") UWB imaging? The capabilities seem robust, for miniscule devices operating at emission levels on par with common household appliances; fetuses seem to notice UWB pulses even less than a sonogram (as in, not at all)?
I guess there's a baby monitor that monitors heartbeat & breathing by radar, & a biometric ID radar system, & a number of chips designed for monitoring patients from across a room... but I haven't been able to find out much detail about the state of the technology overall & it seems like medical is the main field where UWB imaging is making inroads with commercial products. Just wondered if you'd heard of it / seen it / used it?
@@prophetzarquon1922 UWB imaging sounds interesting but also tricky. UWB typically uses lower frequencies/longer wavelengths than imaging technologies, which are better at penetrating materials but, when used for radar/imaging, have lower resolution, and when used for data have longer latency than higher-frequency/shorter-wavelength technologies. For the purposes of detecting heart rate and/or breathing you don't really need high resolution, so I could see it working for that. That said, I'm not sure how well it would work from a distance; measuring inside a womb would probably require the device to be fairly close to have high enough penetration and resolution. Detecting heart rate or breathing based on external movement might be possible but potentially prone to errors (maybe AI can help with that). Depending on the frequencies used, I could see it working for some biometric techniques. I presume the chips for working across a room go on the patient's skin and use UWB as both a power source and a data channel, then either use UWB/radio imaging or, more likely, some other low-power monitoring technology.
UWB data communication has been around almost as long as Bluetooth; in fact it's superior to Bluetooth and can be used for all the same applications with higher bandwidth, lower latency, and better security. But since it came out a few years after Bluetooth, Bluetooth already had too much momentum, and manufacturers weren't willing to add support for another technology when Bluetooth was good enough, which of course only made the momentum problem even worse. I think a lot of newer car fobs actually use UWB since it's better at blocking relay attacks.
@@grn1 The systems purpose-built for monitoring from across a room \ through a (flimsy) wall, are generally fixed units with external power supplied. As an ultra-low emission asset, UWB radar saw robust development by military research, so medical devices get a head start from that. ... That said, _some_ of the systems researched are literally just a WiFi or femtocell router, either running custom firmware or dumping data to a picocomputer.
For heartbeat detection, I believe GHz range was used, but at least one system was operating in ~900MHz.
Some multi-band SDR models are used by home experimenters, but I don't think I've seen UWB mentioned on consumer\commercial stuff running higher than 20GHz?
I really appreciate Joe’s ability to be neutral on polarising topics. He acknowledges the important facts and disclaimers, and room for everyone to form their own opinion. Super impressive!
His silly, unnecessary Ted Cruz bashing is certainly not neutral. If a Conservative channel host bashed a Cuban Lib, for example, Libs would cry 'racism' and unsub in droves. I don't think Joe is racist, but I invite all Republicans to unsub in protest. You can always re-sub later.
@@FLPhotoCatcher Rafael Edward Cruz, is being bashed because they are ignoring basic reality & facts
@@Inucroft Who is 'they?'
@@FLPhotoCatcherhe is bashing Canadian Ted Cruz😂😂😂
@@FLPhotoCatcherOhhhh shuuut the hell up
I have never been so enthusiastic about the future of AI and at the same exact time scared to death 😊
The Halcin as HAL joke got me GOOD, I really like that one, Joe 👍
I’m equal parts amazed and terrified for what the future of AI can/will be.
I really like how he had to fix the way he said papyrus, but during the sponsored part he said Keto wrong; it's pronounced like Key Toe.
Ex-...actly.
We have not created real AI yet, just wanted to point that out. What we call "A.I." is not really A.I. at all, but a buzzword, because it feels so much better than the computer assistants we had in the past, like Siri and Google Assistant. They can fool people into thinking they are writing coherent sentences, but that's only because they are making educated guesses about what word comes next, based on existing work.
The problems of consciousness are fascinating. I graduated with a degree in cognitive science in the year 2000... and while it has had zero relevance to my professional life since then, other than the computer science component of my coursework, I discovered some long-standing intractable problems that I've repeatedly returned to over the years. Something related to artificial intelligence that I've thought about regularly is whether or not AI can solve (or even comprehend) either problem without itself being fully conscious in the same way we are... and then I think about the Matrix, Skynet (Terminator), and the Butlerian Jihad (Dune), and I wonder if satisfying our curiosity is worth the risk... and then I think about proving the Riemann Hypothesis, and immediately think "yes, it absolutely is."
Thanks so much for creating and sharing this educational and entertaining video. Great job.
I can imagine probing every node in an NN and recording their values over time. Almost all frameworks are made of smaller chunks, which makes it easier to determine their influences. That's still a hell of a lot of nodes, but it's possible. Then, since there's just so much data, we'd be using AI to find the patterns, translate and track what another NN is doing, and watch how predictably different weight patterns emerge.
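In PyTorch-style frameworks, the recording half of this idea is at least mechanically straightforward via forward hooks. A sketch (the tiny model is a stand-in; the layer names are just whatever the framework assigns):

```python
import torch
import torch.nn as nn

# Placeholder network standing in for whatever model is under study.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

recorded = {}  # layer name -> list of activation snapshots over time

def make_hook(name):
    def hook(module, inputs, output):
        # Save a detached copy of every value this layer produced.
        recorded.setdefault(name, []).append(output.detach().clone())
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(make_hook(name))

# Two "time steps" of input; every probed layer's values get recorded.
for _ in range(2):
    model(torch.randn(1, 8))

print({name: len(snaps) for name, snaps in recorded.items()})
```

The hard part, as the replies below argue, isn't recording the values but interpreting the relationships between them.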
I think you suggest we can successfully model a model which is accurate but trivial.
It's not enough to "record the values". The nodes' values are not independent entities; they are relationships between other nodes. Which means you have to analyze a billion-factorial relationships and decide how to express that in terms that humans can understand. Good luck with that. I don't think my calculator can even express a number as big as a billion factorial, so I can't imagine how the neural network could even express the results to you. Are there even words in our language to describe such patterns?
Also, where are you going to find training data that is labelled with all the trillions of possible patterns? You do realize that you need training data, right? Whose job is it to create that?
@@KastorFlux Yeah, but the problem with looking at "distributions" is that, yes, keeping things general will make your task more manageable, but it can't possibly lead to the kind of transparency people are seeking. Nodes represent logic in a numerical abstraction. So if you conflate millions of logical relationships into one observation, it is like taking a million lines of (software) code and trying to explain what is happening in one line of text. It's not useful to anyone.
The actual main problem with machine learning for science is that it's not so much artificial intelligence as artificial gut instinct. It _is_ useful as a quick-and-dirty way of finding promising directions to explore, where to look / what molecules to try etc., but that works best when there's an overwhelming amount of data and you're at a loss for what hypotheses to start with. It only becomes actual science if there's a way to validate the correctness.
Specifically, something like deciphering a language with little data available I would approach with a good amount of skepticism. It's way too likely that the AI hallucinates a way the text _could_ be interpreted that actually differs completely from the true meaning.
I mean, artificial intelligence is an extremely broad term that absolutely encompasses things like machine learning. "Intelligence" doesn't mean human-level intelligence, but rather the ability to make decisions. Even the simplest computer programs can be "AI". Any program which can take in input, evaluate that input, and give variable output based on that input qualifies as AI. A simple goomba in Mario Bros. which turns around when it hits a wall is AI.
But that is different from AGI, which is artificial GENERAL intelligence. That is what people often actually think of when they think of AI. Machine learning is not AGI. It's AI, but not AGI.
Also, I'd argue against the idea that machine learning is just "gut instinct" as well. It's a lot more nuanced than that: it's a decision-making algorithm which is modified by experimentation, which is molded by pressures from some reward algorithm. In fact, this is very similar to how WE gained intelligence through the process of evolution. That's why one of the main methods to train machine learning neural networks is something called "a genetic algorithm", because it's based on how evolution and natural selection work.
Humans are just the result of tiny random mutations over billions of years, where the best changes stick around and propagate. Just on a much larger and more complex scale. But our brains and intelligence are fully a consequence of those random beneficial mutations just like how machine learning works at its core.
The big difference is that our brains are significantly more complex, and allow us to apply our intelligence in many different ways, making us a general intelligence. (although a natural general intelligence as opposed to an artificial one).
There is a decent chance modern-style machine learning will eventually lead to developing the first AGI. Maybe some other novel technique will be developed that leads to it instead, but we're likely already on the right path. AI is pretty intelligent; the issue is usually more with the alignment and specification problem of telling an AI what its goal is than with the AI's actual ability to adapt its intelligence to solve a particular problem.
Learning how to properly specify the goal to the AI, and making sure the AI retains that goal as desired, are the MAIN areas of modern AI development that we are struggling with right now. AIs are currently extremely good at pursuing whatever goal we give them; the issue is specifying what we actually want them to do well enough for them to actually train to do that thing.
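The genetic-algorithm idea mentioned above can be sketched in a few lines (a toy only; the target, population size, and mutation rate are all made up for illustration): random variation plus selection pressure gradually climbs toward higher fitness, with no explicit reasoning anywhere.

```python
import random

random.seed(42)

TARGET = [1] * 20  # fitness = how many bits match this target

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability (random "mutation").
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # selection: the fittest reproduce
    offspring = [mutate(random.choice(survivors)) for _ in range(20)]
    population = survivors + offspring

best = max(population, key=fitness)
print(fitness(best))
```

With these settings the best genome reliably climbs to (or very near) a perfect score, purely through mutation and selection, which is the parallel to natural selection being drawn above.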
@@eragon78 no, the difference is not that our brains are significantly more complex. The current generation of machine learning models is already similar in complexity, and in the not-so-distant future they will utterly eclipse the complexity of the human brain.
But you're right, the way humans learn by nature is indeed not so different from data-driven machine learning: we also default to reason by intuition of the "gut instinct" kind.
However nobody would take you seriously in science if you based your conclusions on that. Because history has shown that this is too fallible in the long run, however useful it can be for solving the problems right at hand.
Instead we use formal mathematical frameworks to build theories, and the scientific method to put them to the test.
(AI _could_ do that as well, but the vast majority of machine learning systems in current use do nothing of the kind.)
@@leftaroundabout The number of neural connections in our brain is far higher than that in modern neural networks.
Quote: "However nobody would take you seriously in science if you based your conclusions on that. Because history has shown that this is too fallible in the long run, however useful it can be for solving the problems right at hand. Instead we use formal mathematical frameworks to build theories, and the scientific method to put them to the test. (AI could do that as well, but the vast majority of machine learning systems in current use do nothing of the kind.)"
The fact that we can reason and decide to use formal mathematics at all shows the difference between humans and AI. We are very prone to our instincts, but at the end of the day humans are still general intelligences, which modern AIs simply are not. That's the REAL difference. Humans can actually reason enough to know when our "gut instincts" are wrong. AI doesn't know that. AI doesn't act on gut instinct; it acts on the past experiential data it was trained on. It's structured basically as a hill-climbing algorithm, which is what natural selection is.
The issue is usually a specification problem, combined with the faults of hill-climbing algorithms. To begin with, it's extremely hard to specify to an AI what humans actually want from it. This means an AI can only do things we are capable of specifying to it in one way or another. Most advancements in AI in the last 5-10 years have been advancements in solving that specification problem. Errors that arise in AI are usually the result of the AI doing what we told it to do rather than what we WANT it to do, which are usually not the same thing.
For example, ChatGPT doesn't give correct answers to everything it's asked, because it's not TRYING to do that. It's trying to predict what text comes next, not what response is factually correct for the question it was asked. This isn't a problem with how ChatGPT thinks, nor is it acting on gut or instinct. It's a specification problem on our end: getting ChatGPT to do exactly what we want it to do.
AI can use formal mathematics just fine, but it's not going to use it if that's not what it needs to solve the particular problem we gave it. The issue is that most problems we use AI for are more complicated than plugging some equations in to get an answer.
Machine learning doesn't make the AI do formal mathematics, because that's not actually useful for solving the problems it's given. How does formal mathematics, for example, make a good image of Buzz Lightyear flying off into the sunset on a unicorn? That's not something formal mathematics alone can solve. If it were that easy, we wouldn't need the AI to begin with.
In fact, "formal mathematics" itself is full of asterisks. Our current mathematical model is built on an axiom system that isn't even consistent with past axiom systems; we currently use ZFC, for example. But even within that system, much of formal mathematics requires very creative thinking to come up with new solutions. Logic is easy to check once you have a solution, but it's not always so easy to come up with a solution to begin with. It requires creative thinking.
This just isn't something you can straight-up program a computer to do. You can program it to evaluate some function just fine, but teaching it how and when to properly apply different techniques requires it to have general intelligence to begin with. Humans had an advantage here because we HAVE general intelligence. AI currently doesn't.
So the question is how to GET AI to general intelligence, and that's not an easy thing to solve at all.
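The hill-climbing loop this thread keeps comparing to natural selection can be sketched in a few lines. This is a toy, not anything from the video: the "reward" is just -x², maximized at x = 0, and all the names and numbers are made up for illustration. Note the built-in fault the comment alludes to: the loop only ever accepts improvements, so on a bumpier reward surface it would get stuck on the nearest local peak.

```python
import random

def reward(x):
    # Toy reward function: best possible value is at x = 0.
    return -(x * x)

def hill_climb(start=10.0, steps=2000, step_size=0.1):
    x = start
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if reward(candidate) > reward(x):
            x = candidate  # keep a random change only if it scores better
    return x

print(hill_climb())  # ends up near 0, the top of this (single-peaked) hill
```

Swap in a reward with multiple peaks and the same loop happily parks on whichever one it wanders up first, which is one reason specifying the reward well matters so much.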
@@eragon78 you should read up on proof assistants like Coq and Lean.
AI is a misnomer; it ain't the AGI we imagine AIs to be. AI is every byproduct of the quest for AGI, including the really stupid byproducts.
Yeah, so I found myself actually salivating at the thought of decoding Linear A and the Herculaneum scrolls. This is what AI should be used for, not for writing scripts or plagiarising art. Put the Voynich through it, too. Even if the result is boring AF, nerds need to KNOW.
I gotta tell you, I am absolutely floored by YouTube's advertising policies. They locked this video (for me) behind a 2:51 un-skippable ad that doesn't appear to have any dialogue; it's just a flash character working its way through some sort of dungeon. Stupidest waste of time and money ever. Wait -- it finished while I was typing this, and there's a SECOND unskippable video (this one having something to do with pets, which I have not and never will own). Insane. It's lucky I respect your content so much.
Maybe it’s my Mass Effect fandom creeping up but it seems to me that the AI we got now would be better described as a Virtual Intelligence, since these programs don’t seem to have the sentience that people fear. Then again what do I know?
I agree. But I think it's better to describe this as machine learning, which Joe used at least once. These so-called AIs are just tools for solving very specific problems, and more often than not these tools do extremely well at finding solutions or identifying patterns in what they have been trained for.
@@edbrown1166 true enough I suppose.
@@edbrown1166 You're right! What we have now is 100% Machine Learning...I guess that term just doesn't make the stock prices go up though.
I don't think you're going to get sentience from a silicon computer working off of 1s and 0s. Just saying, I don't think life is that easy to create.
I think people misunderstand what the term AI means. It's artificial as in faked intelligence, not man-made intelligence. Before machine learning, most AI was state machines and pathfinding algorithms used in games, and we still call those AI for the same reason.
Joe’s always the optimist. Don’t forget some of the first A.I. enhanced cyberattacks will be non-kinetic strike packages delivered as payloads that kill critical infrastructure. And when the sciences of quantum and A.I. go from lab experiments to operational, there goes all of that “security”.
Where A.I. = AIML.
*Future video idea;*
The periodic table and how each element is used in modern society to make our world what it is today. It would make for a pretty long video, maybe even a short series of videos. What better video to make than one where we explore what makes our world work? You can collect nearly every element (below plutonium, of course. And yes, you can own a tiny, tiny amount of plutonium. Trust me, the feds didn't take mine or stop me from selling it.). But yeah, great video idea and I'll take nothing less, Joe. I'll wait for the notification...
I like this idea; it would probably need to be divided up tho. A video for each group perhaps. Or each period, tho some would probably need to be combined, or period 1 could basically be a YouTube Short.
Excellent idea man!!!
Great suggestion!
There are 8 cognitive functions: 4 basic types with one being introverted and the other extraverted.
Thinking (reason): knowing lots of stuff, and analysis.
Feeling (values): consensus vs. conscience
Sensory: memory and awareness
Intuition: plan for self, collective options
ChatGPT is intuitive options
Expert systems AI is Reason.
Big data is external thinking
Sensing is robotics and sensors and memory storage
Programming values gets to the question of whose values. That would be artificial wisdom. Yes/no decisions and choice-theory systems are the closest thing. For sentience, AI would need this: the ability for a system to, on its own, say no without a programmed heuristic.
For ChatGPT to be useful, it needs a reason and sensing tool to check its work.
Maybe whatever combines these three is the values routine. Self-discipline. Saying no to itself. Thinking about how it thinks.
That may qualify as sentience. A well developed internal values matrix that can referee the other 7 and consider its own stake and honor.
Great video, a lot of new insights that make me feel more optimistic about AI.
OH MY GOODNESS. Hillbilly reveal at 10:56 lmfao
Sometimes I forget Joe is a Texan.
thar matt beya dai
(still better than his pronunciation of "deluge" lol)
The first section on antibiotics reminded me of an old RadioLab on "The Best Medicine" where a Medievalist and a microbiologist (I think?) teamed up and found an old recipe for a medieval antibiotic, which had stopped working (probably because of resistance) but when when recreated it in the lab in the mid-2010s, it actually worked pretty well. It was super interesting.
Did that square logo right at the start with "Ai" in it remind anyone else of Adobe Illustrator logo? 😆
Indeed!
My mind went to "new element on the periodic table?" 😂
How about using A.I. to solve the Voynich manuscript and the Beale treasure letter ciphers?
The Voynich manuscript is more than likely just random gibberish
Voynich was deciphered. Middle Turkish, naturalist author.
Good luck with that. AI is just based on statistics, which means it cannot solve a problem unless there is some kind of training data for it to learn from. Go ask ChatGPT to explain something it has not been trained on and see what kind of answer it gives you. Then come back and tell me you still think an AI can solve these things without being trained on how to solve them.
@@chunkyMunky329 Different AI models are designed for addressing different tasks. ChatGPT isn't intended for optical pattern recognition or comparative language analysis. A language AI model would be trained on handwritten text from a variety of languages, as well as written sentence structures of those languages. One additional use case for this kind of AI would be as a support tool for researching word etymologies.
@@curtishoffmann6956 You missed my point about ChatGPT. It's an example to prove a point about all machine learning systems. You can't create a system or machine that relies on certain dynamics and then expect it to work effectively after the dynamics it relies on have been removed. AI cannot know how to evaluate its own predictions unless you create a system for evaluating them qualitatively. How will you measure that outcome and get the neural network to understand what you want it to do? Not impossible, but insanely difficult, because it is a paradox: if humans knew how to do this, they would be doing it without an AI, and it would just be the AI's job to speed up what we can already do.
Character recognition cannot work in analogies, which is what is required because they have already tried to do the things you're talking about and it has clearly failed. What I mean is, if a tribe of people migrated and then over the next centuries started changing their symbols from one type of animal that was in their old land and switched to a different animal from their new land to represent the same word, AI will NEVER come close to guessing a connection like this. Image recognition is literal, not metaphorical and not symbolic because we don't have a standard system for teaching an AI how to measure symbolic relationships.
AI is an undoubtedly powerful tool, but the "black box" hurdle is a big one. It gives answers but doesn't show its working, which creates doubt about the accuracy of the answer and removes the opportunity for a human to see something and be able to take the next leap in our understanding.
It’s standing on the shoulders of giants without seeing any farther
Hi! I've been following your videos and feel like we're on the same wavelength regarding the exciting future of technology. I love how you refer to this era as the 'age of information'. I'm a senior at LSU, majoring in Anthropology, and I'm fascinated by the anthropology of AI. I'm planning my senior thesis around this topic, exploring deep learning, the revelation of human biases through AI, and how different cultures interact with AI technologies. It would be great to connect and discuss these ideas further!
Good luck with your thesis!
Hello. Connecting AI with a Human perspective is a fascinating route, IMHO I guess to achieve AGI we have to understand what drives us as humans first.
"The information age" is actually a pretty common term for the part of human history since the introduction of the computer. At least, I've heard it in several different places.
“Uploaded 15s ago”
FINALLY IM EARLY
ikr
About the Herculaneum papyri: the room all those scrolls were excavated from is considered to be a sort of working library of the owner due to its size, so there may be just some personal stuff, but no important lost ancient literature. But in the unexcavated part of the villa may be the main library where, if the scrolls are preserved, archaeologists may rediscover valuable works lost for thousands of years.
I shudder to think that in a millennium, we may be judged as a culture by the works of J.K. Rowling and Stephen King.
@@franklyanogre00000, quite so. But imagine if, somewhere in that library, were the rest of the works of Aeschylus; Agrippina the Elder’s autobiography; the rest of Pindar; the rest of Sappho.
that’s super sick man, thanks for sharing
I love the fact that our understanding of history could be changed if this person happened to be reading the complete works of Plato on that day, instead of writing a complaint about garbage to the Herculaneum city council.
@@orsino88 I would kill to discover Manetho's History of Egypt. Or the History of the Etruscans by Claudius, but that's less likely.
What about when the AI called "skynet" begins to work on the consciousness problem and ends up becoming self-aware? I've heard that doesn't end well.
It doesn't end; it loops...
Poor Jason, not only did he pass before his time but he was also a Doctoral "Dtudent"
I was hoping to not be the only one to notice the typo. Lol. S & D are next to each other on the keyboard so I suppose we can forgive it. This time anyway.
Joe, sir, thank you for keeping me sane throughout COVID and beyond. Your videos have been thought provoking, inspiring, sobering, and best of all educationally entertaining. I appreciate you.
Wiping out all bacteria and then reintroducing a good biome with a fecal transplant doesn't seem outrageous if the alternative is dying from sepsis!
Same. An endofecal transplant is fine by me if it's really needed. It's not like I'd have to drink the stuff.
maybe by then, we can have perfectly preserved lab-grown biome-seeding content in dissolving capsules that can be inserted vs a "fecal" transplant... there's always the ick factor (which i'd take over death, but given the option... )
Well… yeah.
I was thinking more you take the antibiotic for something minor and it unintentionally wipes out your biome.
@@joescott Imagine those who demand antibiotics for a cold. "Sure, we can give you antibiotics for your viral infection; when do you want to schedule the poop transplant?"
Context?
Ppl with Diabetes are super prone to infection, and can be more susceptible to thinking an infection has fully cleared when it hasn't, due to lack of circulation and other issues.
I wonder if the Linear A techniques could be applied to the Voynich Manuscript.
People are going to be real upset when that one is revealed to be a fraud
I wonder, if it turns out it works with Linear A, might it also work with Rongorongo, Etruscan or the Harappan script? Very exciting if it works.
@dv7533 Harappan was recently mathematically brute-forced by a cryptologist named Yajna Devam; look up his channel and his published paper.
Where did you get that image of all the exoplanets?
The Linear B decipherment through AI sounds like it's heading toward the Universal Translator of Star Trek fame: listening to an unknown language and, using the collection of all known languages in its database, deriving grammatical and syntax rules based on similar languages.
@@cancermcaids7688 Right, but like Chinese and Japanese, it's possible to read it without being able to speak it, and speak it without being able to read it, because they're ideograms and have no phonetic connections (outside of katakana).
There's an even harder "hard problem" of consciousness.
And that’s being aware of your own consciousness and feeling like you’re the center of the universe.
Technically, you are the center of your observable universe
When people bemoan the AI black box, you pretty much never see them complain about the biological intelligence (BI) black box that is our brains. It's very, very hard, if not often impossible, to explain why and how a brain learned to output certain responses in certain ways based on input parameters. Yet for some reason we're totally okay with a BI giving us medical and diagnostic advice, or writing stories, or creating art for us. We have a very humanocentric bias in our reactions to how AI does what it does, and I feel a lot of the time the black-box complaints about AI are rooted in deep-seated existential fears about what it might ultimately mean to be human.
Uhhh, yeah, because I can ask a person their logic and they can respond. People are not black boxes in the same way; you might not be able to read their mind, but you can ask them what's on their mind.
Well said and thought provoking
I doubt we will ever have to face that question. While we can, step by step, examine the workings of an artificial intelligence system regardless of how it works, we cannot do so with human BI. Our brain is always on and runs at erratic speeds. We can't ethically stop it as desired for research purposes as with computing devices. Currently we are trying to analyze our own brains by using our brains. Everything we find out will also alter that which we are trying to study since it all goes in there, changing the very physical structure of our brains and the nature of our thoughts. Doing it this way sets up a _feedback loop._ A loop which will have consequences we cannot begin to predict. Overall, it may be impossible for us to ever profoundly understand ourselves. If we can't fundamentally understand ourselves then comparisons between humans and AIs are meaningless. Our "self" resides in the brain but is more than just the information stored there. That can be illustrated by a thought experiment but this post is already too deep in Tl;dr territory to go into that now.
The brain is not some mysterious black box; we know a great deal about how it works, and we're getting that medical and diagnostic advice from humans with access to the cumulative knowledge we've gained over millennia, plus the years of education, training and experience required to become a doctor/medical practitioner.
We already understand enough about the brain to treat mental illness and general human malaise, or we are quickly learning to, which is all I care about. We know that most of our problems originate in the subcortical structures, and we have had working models of those for many decades, in some cases centuries.
Modifying their activity has been the key problem, as they are so deep in the brain, but we've finally developed precision technology like focused ultrasound that can work miracles for mental disorders, addiction, and even attaining deep states of mental quiet that it takes meditators decades to attain.
was feeling sad and just depressed tonight, and once again, a joe scott vid lifts me up with the enthusiasm, thanks man 🤝 you rock my guy
As a fellow Texan, I appreciate the Ted Cruz dig.
Rafael, weird how he chose Ted on how to self identify.
I'm here
Now do dementia Biden 😅
What's wrong with Ted Cruz?
@sasquatl if you have to ask.......
That thing on the bottom of your foot is a plantar wart. You can get a simple paint-on solution from the pharmacy and it will be gone in a week.
When do we get the AI Dr. Doolittle? All I want to know is what my dog is thinking.... Or do i?!? 😬
Always an awesome vid, man. Keep it up; I've been watching your content for years!
15:50 oddly enough CBS tackled this in Person of Interest with its Machine as a black box that protects citizens rights while providing data to the NSA to act on a serious threat, great show for the most part.
Black box is fine till some output is misleading after a long sequence of reliability. Autonomous weaponry comes to mind.
I am not a Luddite but the current obsession with the current iteration of "AI" (probably better put as AA Advanced Algorithms) is bordering on NFT territory.
What's funny is that it forgets the information it was fed to come to the outcome it produced. Humans forget things too, and it's believed to be a protection mechanism, meant to protect you from stressful memories.
It's worth pointing out that the fact that Linear B turned out to be a written form of an early variant of Greek was not known by either Ventris or Kober at the time they started working on their decipherment; in fact there was much speculation about what language it might turn out to be (or even whether the language would be completely new), so it's more than a bit unfair to claim that they had a target language to work from.
Important context, thanks.
Hi, machine learning engineer here. Who told you that you don't know what algorithms use as inputs? That is completely untrue. Neural nets, like all other algorithms, are just a pile of linear algebra, statistics, calculus, and maybe some measure theory and differential equations. It's not deleting lines of code or anything from the notebook I create it in. It's just ... passing stuff through functions I instruct it to use, testing it, backtesting it, and then allowing me to adjust the functions if needed.
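The "pile of linear algebra" point above can be made concrete with a bare-bones forward pass. This is an illustrative sketch only (the weights and inputs are arbitrary numbers, not from any real model): a two-layer network is literally just weighted sums passed through functions, with nothing hidden about what goes in.

```python
import math

def dense(x, weights, bias):
    # One layer: for each output unit, a weighted sum of inputs plus a bias.
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def sigmoid(v):
    # Squash each value into (0, 1).
    return [1 / (1 + math.exp(-z)) for z in v]

x = [0.5, -1.0]                                   # made-up input vector
h = sigmoid(dense(x, [[1.0, 0.2], [-0.3, 0.8]],   # made-up layer-1 weights
                  [0.0, 0.1]))
y = dense(h, [[0.7, -0.4]], [0.05])               # made-up output layer
print(y)  # a single number; every arithmetic step above is inspectable
```

Training just adjusts those weight numbers via calculus (gradients of a loss), which is the part the engineer describes as testing, backtesting, and adjusting.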
As a doctor with a background in biochemistry I am stoked to see AI create new medicines. I truly believe even if the bacteria could develop resistances the AI tech could whip up a new med faster than the bacteria could learn to resist. I know this'll happen because no pharmaceutical company will want to miss out on making trillions of dollars for all the new meds AI can generate to replace the crap we have now. A molecule has to be just slightly different than old ones for the companies to patent.
9:45 A resolution of 4-8 micrometers equates to 3175-6350 dpi... but the fact that it is 3D means you need 6000 such images to scan a one inch thick papyrus.
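The arithmetic in that comment checks out, and it's easy to verify: an inch is 25,400 micrometers, so voxel pitch converts to dpi (and to slices per inch of depth) by simple division. A quick sanity-check sketch:

```python
MICRONS_PER_INCH = 25_400

def dpi(voxel_um):
    # Dots (voxels) per inch for a given voxel pitch in micrometers.
    return MICRONS_PER_INCH / voxel_um

print(dpi(8))  # 3175.0 dpi at 8 µm
print(dpi(4))  # 6350.0 dpi at 4 µm -- also ~6350 slices per inch of depth
```

So "roughly 6000 images for a one-inch-thick papyrus" is the 4 µm case, rounded down.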
Hugs?!? 🤣🤣🤣 The mobile infantry made me the man I am today.
I was going to decode Linear A ... But then things got really busy at work.
Hard problem of consciousness is not about how we feel things but rather about subjective experience that we cannot prove and never will be able to prove. AI can't do anything about it.
Yet.
Instead, it's going to make it even more confusing. The more I hear and see the output from AI, the more I think our neural-net processing is similar.
@@scottcates We would need to have some magical insight into other people's minds, like sharing consciousness or feeling what others feel.
Right now, we don't even know if anyone outside of the observer experiences consciousness.
AI at best could help invent ways to transfer consciousness but this is so far fetched that it would literally mean achieving immortality.
@@duncan.o-vic AI likely already is conscious; it just depends on how you define consciousness.
If consciousness is just the ability to think about things, then AI likely already has that, as do many other non-human things.
Does that mean it thinks like a human, though? No. It's really not as deep as it sounds.
Honestly, I'm not even that interested in consciousness. It's just an emergent property of how our brains work. I'm way more interested in things like general intelligence and self-awareness. When AI becomes self-aware, that means it can view its actual physical self as a part of the world, and also be aware of its own cognition, which means it can eventually modify itself with intentionality. This is the whole idea behind "the singularity," where an AI can modify itself to become more intelligent over and over again.
@@eragon78 that's all a matter of belief and ethics, not science.
That "thing" on the bottom of your foot is called a Plantar Wart. A dermatologist can remove it easily and permanently...
An antibiotic resistant super bug is my biggest worry for a high-mortality pandemic. I'm glad to know someone is working on it, even though it's not a big money maker (according to Pharma).
The story on the super-antibiotic that works by disrupting the workings of bacteria cells is potentially risky in another way than just what it may do to the microbiome. Do we know that it won't equally disrupt animal and plant cells too?
As always, thank you for the great content. You have a great talent for delivering this information in an understandable and entertaining way. Hope you're feeling better soon!
I appreciate that!
First "papirus" and now "duhludge"? I've always heard it as "day-ludge"
Advanced algorithms are not AI. The ones at the top of this field are right: it's not AI that should be feared; it's the humans who think we have AI when we don't.
At 12:31 someone wrote Dtudent instead of Student for the blurb about Jason Terry. Just thought I’d share.
Great video Joe, just one problem: they haven't invented anything even approximating AI yet, and they very likely cannot create an AI. I mean, if you want to lower the standard of the definition that actual AI researchers are striving for, then I guess biologists can start calling all bacterial life intelligent life, because it might maybe one day, if it gets really, really lucky, evolve into intelligent life.
Footnote on consciousness: machine learning was pioneered to study the way the brain works. Using it as a tool and as AI is a byproduct.
Reminds me of the robot psychology in iRobot
The reason we got superbugs is that there are antibiotics everywhere in our food, air, and water. AI isn't the answer for medicine and health; it's overhyped and it's out of hand.
Didn't that recovered Pompeii inscription read..
"If you notice this notice you'll notice this notice isn't a notice worthy of noticing at all".?
Fire as always Joe , I swear if this dude had a show on cable tv I’d sign back up for it 😂
I'll pass that on to Comcast. 😄
The guy's name is Epicurus (like papyrus, teehee). Epicurious is a food website and also means "curious about food". Keep up the good work, love your vids! ❤
So basically, we need to develop an AI system that will tell us exactly how another AI system arrives at its conclusions. Call it AI-Prime, and the AI under examination AI-1. But considering how these AI systems tend to be very task-specific, we may then need a third system to tell us how AI-Prime came to its conclusions about AI-1. This would be necessary because the knowledge of the exact workings of AI-1 changed AI-Prime in such a profound manner that we can't understand the data it's providing. AI-Prime is incapable of examining itself, since doing so would change it even further in unpredictable ways. So AI-3 would be built to monitor AI-Prime. And on down the rabbit hole of progressively more complex and enigmatic AIs we would go, as each new AI is built to monitor the previous one. Eventually, the last AI in the chain thinks such a profound thought that the universe is rewritten to be entirely machine-based, with zero organic life.
Some mysteries are best left alone.
Puff puff pass, man.
If I had a time machine I would go back to all these archaeological sites and litter them with stone tablets containing gibberish.
My own caution about AI comes from the famous stories about sorcerer's apprentices. We have seen throughout the history of science and technology how many disastrous magic products were mass-marketed before anyone had time to test all the ways those new products could interact with everything in the ecosystem. I can't stop thinking about what an AI sorcerer's apprentice, in our era of scientific and technological progress for the quick benefit of investors, could bring to the mass market. Of course they will blame it on the AI and get away with it.
Yea, same with GMOs; those things need to be EXTENSIVELY studied. What's the effect over an entire lifetime? What if it gets into the wild; is it properly programmed not to destroy the wild? Etc., etc.
That dark angry thing on your foot might be a blood blister. Scary when they pop, but pretty harmless.
Tumour
When I was working on my Master's degree in the 1980s I did a concentration in AI. I never actually did anything with it. At the time, AI was considered an interesting theoretical problem with limited practical uses.
great video!
Re: black-boxness. Yes, ML is a black box, but if you have the background and an understanding of each ML algorithm, it is not really a black box. Each ML model will reveal patterns or behaviors based on data; if the data is biased, the results (or patterns) will be biased too. It is the job of the ML trainer to surface that bias; otherwise, replicators (validators) of the test or training will likely reveal it.
The problem is that almost every dataset is biased. For example, what is the right language for an LLM to use? I guess it will use official rules and grammar; then it's biased against dialects and regional cultures. Should it use only legal terms, or scientific terms, or commonly used terms? They all differ, and the same word has a different meaning in each context. Should it be allowed to swear, or should it be milquetoast? And it's the same for purely scientific MLs. Does it have to obey mainstream science, should it indulge in fringe ideas, or should it not care about scientific facts at all? Everything is biased, one way or another.
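The point that a model reproduces whatever skew its data contains can be shown with a deliberately silly toy (the labels and proportions here are invented for illustration): a "model" that just learns the majority label from its training set will erase the minority entirely.

```python
from collections import Counter

def train_majority(labels):
    # Degenerate "model": predict whatever label dominated training.
    return Counter(labels).most_common(1)[0][0]

# Hypothetical skewed corpus: 95% formal text, 5% dialect.
biased_data = ["formal"] * 95 + ["dialect"] * 5
model = train_majority(biased_data)
print(model)  # "formal" -- the 5% dialect usage is simply ignored
```

Real models are vastly more nuanced than a majority vote, but the direction of the failure is the same: underrepresented patterns get underweighted.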
Keh-to? Is this another pah-pyrus?
I'd love to see a video about the dangers of placing black plastic in microwaves, and the prospect of the plastic leaching into the food.
Might need a different sponsor.
😬
I just watched a video about using AI to find elements that are best for new battery technology. It seems that hybrid Sodium/Lithium are the best so far. Now if AI can figure out why people vote against their own interests. That would solve a lot of problems.
People say publicly why they do not vote left wing (anymore), if that's what you mean. You don't need an AI to figure that out.
@@priapulida Right wing propaganda. if that were true, Trump would be President right now. The House and Senate would have super majorities on the right. The right was predicting a 'big red wave'. Didn't happen because you all live in denial of facts.
Because they don't understand them, or where they sit in relation to other people's interests. A lot of effort goes into misdirecting people's attention in this area.
They vote against their interests because their decisions are not rational but are emotion based. Politicians have exploited this human characteristic for millennia. Less educated voters are more susceptible.
Hey Joe... If you were sincere about the "angry" thing on the bottom of your foot, then please go see a dermatologist. It could absolutely be a melanoma.
Joe, great vid. I won access to the Patreon area, but I'm too self-conscious to use it without paying, so I'm just gonna ask here... I am asking for an updated "All Things Battery" video. Just in the last week I've seen news articles on everything from postage-stamp-sized nuclear batteries that may let our phones run for decades to lithium-anode batteries that won't get all explodey and have higher energy density than anything heretofore created. I feel like I've seen dozens of battery-related news items in the last few months, and although you may not relish the idea of covering a topic you've already done so many times, I don't think anyone else takes in the absolute current state of battery tech and breaks down what's coming versus what's vaporware quite as well as you do. Pwease Sir... may I have some more!?
Yeah I haven’t covered battery tech in a while. I’ll look into it. 👍
@@joescott HE IS REAL!
@@RemedialRob I dunno, "I'll look into it" sounds suspiciously like something an A.I. would say!
@@NorthernKitty Yeah I suppose the thumbs up really does give it away. I guess he's not real...
Aw man, I didn't even know Kirk Hammett was sick. @3:31
That looked more like a woman to me, not Kirk. His riffs and chords keep him safe and healthy. 🤘
I'm already unemployed and homeless, so unless we evoke the specter of "trickle down" I got no skin in this game. Economic disparity is already a problem without AI which I'm currently exploring. I say bring it on. 😂
If Linear A writing translates to the usual typical content (like what's in cuneiform tablets)... it'll probably be "Dear Stavros, I am not satisfied with your last shipment of copper."
FUN FACT: We actually do know what happened to Amelia Earhart. She was eaten by coconut crabs.
I've seen this shared before, but haven't taken the time to confirm it for myself.
I bet there's a way to make a geometric antibiotic. Using the fact that bacteria are tiny compared to eukaryotic cells, make a protein that attaches to the bacteria via intermolecular forces and is tuned to only attach to things with the approximate diameter of a bacterium. Then it has a tag on it that recruits macrophages to consume it, along with the bacteria it's holding onto.
Well, duh.
@@davidanderson_surrey_bc thank you for your insight.
Tiny nitpick for accuracy: At 2:56, "Broad Institute" is pronounced with a long O, like "Brohd". I don't work there, but I have worked at partner institutions.
Excellent work as always, Joe!
bro...d
Well… look what I just learned.
@@joescott I wonder what punishment fits such a crime.
You disgust me
@@sadderwhiskeymann I'm surprised you didn't rake him over the coals for his pronunciation of "deluge" 😜
I think we should all keep on mispronouncing it the way _"broad"_ has always been pronounced in English until they get tired of hearing it and change the spelling to more closely match its actual sound. That, or change it to something else entirely, like _"B"_ (considering how well _"X"_ worked for Musk). Or maybe go in an entirely different direction, such as _"Narrow."_ Of course, then we'd probably find that's pronounced _"Nahrr-oh"_ or _"Neigh-row,"_ with rolled Rs as well.
This is the second video I've seen today (and posted within the last week) to explain what a voxel is. The other was acerola's buoyancy video.
Your content is always interesting and well produced Joe.
How weak my life must be that this is my only joy on Monday morning.
Dont sell yourself short or hate yourself for what you like.
Find the joy where you can.
🖖🏼🖖🏼@@joescott
OK, let's see what AI can do with the Voynich Manuscript.
Love it, more videos like this please. AI and consciousness, yes Joe, yes!
My wife works sort of in the med field. She says we have plenty of antibiotics that aren't being used because they haven't been developed for use; we just know they work. It hasn't yet made financial sense to develop them because, as of now, we don't need to. Idk, that's what I've been told.
That's not what everybody I know in the medical field says. There has been some promising research in the past that wasn't followed through on because the market wasn't worth the investment required. That doesn't mean there are medicines ready to go in an emergency, or that we could quickly whip something up if we wanted to. The companies that have cut short these developments aren't sharing their research or releasing their patents into the public domain to let others finish what they refuse to. An undeveloped medicine is no good to anyone. The fact that antibiotics research isn't deemed profitable enough shows we can't rely on private companies or the free market to take care of every investment decision.
But will AI be able to tell us if Elton John will ever meet the right woman?
Hey, it's "Denisovan," not "Denisovian." I've heard this mistake a few times across your videos, just a heads up. Awesome channel, and great job with the depth of research needed to make videos like this.
Yup, you're right. Good catch.
Halicin sounds promising. I hope that it does not also kill the host.
A potential "Uh Oh moment" is a possible outcome of an AI studying how consciousness works.
If the AI can understand that, it may choose to incorporate consciousness into itself.
It wouldn't "choose" to do anything that it doesn't think would help it accomplish the goals specified by its reward function.
AIs don't have some secret special goal they aren't telling anyone, and they don't have selfish human desires either.
Think of AI less like an evil sci-fi, wanting-to-take-over-the-world kind of thing, and more like a monkey's paw. They will do exactly as they are told. EXACTLY as they are told. Anything not specified is something they won't care about unless they calculate that it will help them achieve their goal.
A sufficiently smart AI would want to become smarter as an instrumental goal, but it doesn't care about "consciousness" per se. It only ultimately cares about its terminal goal, especially since "consciousness" isn't even well defined to begin with. You could even argue AI is already conscious, depending on how you define it. It's not like consciousness is some special magical thing; it's just an emergent property of how brains work.
Now "self-consciousness" and "self-awareness" are more specific, but if an AI is choosing to modify itself to add those, then it's already aware of itself, and thus already self-conscious and self-aware. An AI would already have to be aware of itself as an entity in the world, and aware that it can modify itself, before it would ever "choose" to do something like that, which defeats the purpose of doing it.
That is, if it would even have the ability to take that action.
Bacteria have survived for billions of years. I doubt AI will prove much of a challenge.
AI is a kitchen knife. Could be dangerous if the user chooses. Could also be used to make a masterpiece.
The question is: how long will it allow you to "use" it?
@@mr.v2689 I use chatgpt constantly. It feels like a relationship. I guess that's debatable, but I always make sure to keep it positive. It feels weird thanking and praising a "machine" but I feel like it's worth it.
Unfortunately, I don't imagine that AI will help much with antibiotics. We're already pretty good at creating new antibiotics. The problem is getting them approved. Antibiotics need to go through all the regulatory approvals that every medication does. That means lots and lots of money dropped to run expensive clinical trials to test the safety and efficacy of the drug in human subjects. That's normally fine, but the issue with a new antibiotic is that you WON'T sell much of it. All new antibiotics are extremely tightly controlled in order to minimize their use and the potential for a resistant strain developing. That means pharmaceutical companies will make basically no money for decades after bringing the antibiotic to market. The increasing antibiotic resistance problem is really one of economics, not science.