Artificial Einstein: Did AI just do the impossible?
- Published 28 May 2024
- Join my mailing list briankeating.com/list to win a real 4 billion year old meteorite! All .edu emails in the USA 🇺🇸 will WIN!
Artificial intelligence has already proven its ability to produce entertaining and sometimes surprising creations, from texts to images and even videos. But can it learn physics? Maybe even discover new laws of physics? Today, we will venture into the fascinating intersection of artificial intelligence and physics, where computational fluid dynamics, machine learning, and even computer game design meet.
Key Takeaways:
00:00:00 Intro
00:01:04 The role of AI in quantum computing
00:03:31 Can AI predict outcomes better than humans?
00:07:09 A new way of simulating fluid dynamics
00:18:33 Outro
Additional resources:
➡️ Follow me on your fav platforms:
✖️ Twitter: / drbriankeating
🔔 UA-cam: ua-cam.com/users/DrBrianKeatin...
📝 Join my mailing list: briankeating.com/list
✍️ Check out my blog: briankeating.com/cosmic-musings/
🎙️ Follow my podcast: briankeating.com/podcast
Into the Impossible with Brian Keating is a podcast dedicated to all those who want to explore the universe within and beyond the known.
Make sure to subscribe so you never miss an episode!
#intotheimpossible #briankeating #chatgpt #AI - Science & Technology
Will AI ever discover a new theory of nature? Let me know and don’t forget you can win a real meteorite 💥 when you join my free mailing list here 👉 briankeating.com/list ✉️
Discovery of that which already exists is where it has the best chance of doing something useful. It's a natural search engine. Composition is where it will always struggle, though, imo.
Joined and Subscribed. Umm... Probably. The stuff I've put into Google's A.I.? I'm gonna need FBI witness protection. 🤣 loljk! Love Your Show Mr/Dr Keating!!! Stay Free!
If AI ever does, I bet there is a smart taxi driver, or some other high-IQ non-physicist, who will provide proof that we could have had this "discovery" 20-50 years ago, if it wasn't for the *control apparatus* in place in the physics community that prevents outsiders from presenting truths.
Ah the Hubris of you all.
Circular Quadratic Algebra is coming after general relativity 🙂
(I made my discoveries when I started making a simulation game where I decided to simulate everything from players, npcs, and trees)
Hi, probably yes. Big advantages, bigger risks; we need some caution with AI, but the advantages are immense... all the best.
I think AI has the greatest value to analyze huge data sets to find unknown relationships which lead to new physics equations, new chemical compounds, and predictive genetics.
But I am probably wrong.
No, you're not wrong -- that's exactly where AI excels.
Syntax is dual to semantics -- languages or communication.
Large language models are therefore dual!
Categories (form, syntax, objects) are dual to sets (substance, semantics, subjects) -- Category theory is dual.
If mathematics is a language then it is dual.
Concepts are dual to percepts -- the mind duality of Immanuel Kant.
Mathematicians create new concepts or ideas all the time from their perceptions, observations, measurements (intuitions) -- a syntropic process, teleological.
Cause is dual to effect -- causality.
Effect is dual to cause -- retro-causality.
Perceptions or effects (measurements) create causes (concepts) in your mind -- retro-causality -- a syntropic process!
Large language models are using duality to create reality.
"Always two there are" -- Yoda.
AI excels only as a learning aid for humans, automated processing, and testing. So if you have a problem, it can make mistakes 95% of the time, but it can be programmed to eventually narrow in on a working model, completely automated; that is the value. Which is why, in the new chipset presentations, they showed AI training programs for training labor robots. Whether those actually work at large scale we will not know for years. It will almost certainly be pushed into the market flawed, like nearly every program. Sadly, bugs in these programs have very high real-world costs.
You nailed it. It's good at analyzing patterns in "dimensionality," which I personally think is better described as adjectives in our language. Every dimension is just part of the description: a measure of how orange something is, relatively (orange just being a randomly chosen example), and how much that matters to interpreting it. For a jacket, the orange value doesn't matter as much for understanding, but for other things it matters immensely.
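The "dimensions as weighted adjectives" idea can be made concrete: the same pair of items can rank as more or less similar depending on how heavily each feature dimension is weighted. A minimal sketch (the items, feature values, and weights below are all made up for illustration):

```python
import numpy as np

# Feature dimensions: [orangeness, roundness] -- made-up illustrative values.
query = np.array([1.0, 0.0])   # "something very orange, not round"
cone = np.array([1.0, 0.1])    # traffic cone: very orange, barely round
jacket = np.array([0.2, 0.0])  # jacket: slightly orange, not round

def weighted_dist(a, b, w):
    """Euclidean distance where each dimension counts according to weight w."""
    return float(np.sqrt(np.sum(w * (a - b) ** 2)))

# When color matters a lot, the cone is the closer match to the query...
w_color = np.array([1.0, 0.1])
print(weighted_dist(query, cone, w_color) < weighted_dist(query, jacket, w_color))

# ...but when shape dominates, the jacket is closer.
w_shape = np.array([0.01, 10.0])
print(weighted_dist(query, cone, w_shape) > weighted_dist(query, jacket, w_shape))
```

The point is exactly the commenter's: "how much does that dimension matter" is itself a parameter, and changing it changes the interpretation.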
How do you fact check it? AI can't reproduce reliable biographies of living persons without making shit up and you expect it to do advanced physics based on trust?
Protein folding is probably one of the most important areas of exploration because of its implications for the restoration of health and the repair of injury.
Going further, you get bio-engineering. If this is the start of an AI revolution, what could it enable next? I think it could be bio-engineering.
It's too messy and complex for the human mind to handle alone, so I think some AI is needed.
I hope it can help my torn radiator cuffs. Both are messed up Big Time and the doctors don't know squat. But the robot. He'll get me back in the batters cage swatting balls in no time flat.
@@lubricustheslippery5028 Yeah, the financial elites can engineer people exactly how they want!
@@quantumpotential7639 Topical turmeric oil, liposomal, with piperine.
Oral capsule: turmeric with piperine + boswellia.
That's why AlphaFold is such a magical tool for the scientists working in biomedicine.
I am working on a quantum physics LLM. What I find hardest is the reinforcement part, as the initial training data is pretty much standardised. That said, I want to try to reinforce it with literature related to the funkier side of possibilities, such as anti-gravity or maybe time travel.
We have already been able to accelerate particles in a controlled setting to within a fraction of a fraction of the speed of light. For a next step towards time travel, what if we build a particle accelerator whose inside "track" is big enough in diameter to accelerate a 12oz can of Coca-Cola to that speed? Huh? Huh? What new laws of physics would the cola be exhibiting right after you first slow it back down and pop it open? What would it even taste like at that point? (I don't land on "can of soda" randomly. They are the optimized shape, and the mass seems like it would be both ambitious and doable.) ♠
Yes, feed the robot dessert FIRST so it gets really excited, and a sugar boost with vivid theoretical fancy fantasies, so that once it sits down for the meat-and-potatoes main course, with a salad of course, it can get busy producing amazeable out-of-this-world stuff we never imagined before, but it now has. Fancy fantasies and AI are like the two pillars of Solomon's Temple. Now let us pray for divine guidance, as this is powerful stuff and we gotta be careful here going forward. 🙏
really?
IMO, LLMs will always have trouble differentiating fact from fiction, because it's all just text and they have no experience outside of that.
Nice work.
I have to point out that with any emerging technology there are problems that occur and may not be overcome in the near future:
AIs:
1. Can hallucinate.
2. Can't articulate the reasoning or process used to get their results.
3. Can be biased.
4. Can provide different answers to the same question.
5. Can provide responses repetitive to a theme rather than independent responses.
6. Someone will always have to validate AI results. There is an inherent risk in not knowing how a solution was arrived at.
7. Their lack of predictability is their strength but also their most critical weakness.
In certain areas, it will revolutionize the world, in most other areas - not so much.
The issue here is much more fundamental: AI can NEVER articulate its reasoning, because it's not actually intelligence. It's just weighted averages putting words in an order that makes the most sense. Not that what it writes is true and correct, just that the flow of its writing is legible.
Because it's just a fancy spin of the auto-suggestion feature we've had on mobile keyboards for years now. Based on that system, you're never going to get sentience and, in this sense, intelligence
AI has demonstrated a capacity to hallucinate, which I consider a promising start.
Watch the bit about 7.8 billion Nobel prize winners on a trolley rail.
"Gpt chat solves the trolley problem"
If there is a BLM candidate on the rail, they win all outcomes.
Mathematicians come up with new things all the time; it doesn't mean they reflect reality. You can create an elegant answer to some problem in physics that probably explains the issue really well mathematically... it doesn't mean it exists. It means you are good at math.
"The greatest shortcoming of the human race is our inability to understand the exponential function." - Prof. Al Bartlett
I like to think that at some point AI would invent its own language for calculations, out of sheer time and resource economy. If it were asked to decipher one of the symbols of its language into our regular math, it would take years just to read a result.
Let it learn all the differences between bosons and fermions on all known levels, including all the known equations, like Schrödinger's.
Then make it amplify the differences and have it construct a new system of the particle world.
As a physicist as well, I can tell you we are very short of training data on almost everything. Experimental physics is expensive, and the physicists who do it are as rare as hen's teeth. So if we think we are going to solve anything, we need to observe it first and put it in a form to train on, along with examples of what that thing is NOT, and all its nuances. I'm excited, but hype is hype for AI.
So maybe use it to create new experiments then?
I'm not entirely sure about that. As an outsider (to physics) with more experience on the AI side of things, it seems to me that there's a strong disconnect between "macrophysics" or "applied physics", as in things we typically observe and deal with on a regular basis (my car accelerates via kinetic energy, experiences friction, and so on), versus "microphysics" or physics such as they occur at atomic and sub-atomic levels.
I'm pretty sure that data for macrophysics is cheap, plentiful, and readily available in a large variety of categories, and is fairly well understood such that many phenomena can be simulated as needed.
So the really interesting question is: Given a large amount of semi-relevant data, can you augment a model trained on a small quantity of highly relevant data?
The answer is generally yes, but it will take a careful approach. As a very crude observation, training a Transformer on almost anything before your target information will generally improve its performance on that target, for whatever reason. With that said, I think there are probably things we can infer about unknown behavior in physics, particularly at the micro level, from patterns in macrophysics that humans aren't well equipped to find. I would also suggest hesitating to judge the effectiveness of AI in this area, particularly as we switch from data-prediction-driven AI (the current generation) to more computationally dense "simulation-driven" AI (the next generation, which we're already starting to see with Quiet*, agentic workflows, and so on), which functions more like how human brains think, and far more data-efficiently than anything we've seen before.
That said, I don't think that we're going to see in the next year "AI uncovers the final unknown physical laws, and as it turns out, entropy was just a suggestion, and 42 was behind it all along", but I do think we're going to see more "unknown unknowns" in terms of the acceleration of progress in a variety of fields due to the increased efficiency of research as a function of advancing artificial intelligence.
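The "semi-relevant pretraining helps" claim above can be sketched in miniature. This is a toy, not the commenter's setup: a two-parameter logistic regression stands in for a Transformer, and all data is synthetic. The model is pretrained on plentiful data from a *related* task (a slightly rotated decision boundary), then fine-tuned on only 20 samples from the target task:

```python
import numpy as np

rng = np.random.default_rng(0)

def labels(X, w):
    """Binary labels from a linear decision boundary."""
    return (X @ w > 0).astype(float)

def train(X, y, w, lr=0.5, steps=300):
    """Plain gradient descent on the average logistic loss."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

w_target = np.array([1.0, 0.0])    # the "true physics" we care about
w_related = np.array([0.9, 0.1])   # a semi-relevant neighbouring task

# Plentiful data for the related task, only 20 samples for the target task.
X_big = rng.normal(size=(2000, 2))
X_small = rng.normal(size=(20, 2))
X_test = rng.normal(size=(2000, 2))

w_pre = train(X_big, labels(X_big, w_related), np.zeros(2))   # pretrain
w_ft = train(X_small, labels(X_small, w_target), w_pre)       # fine-tune

acc = np.mean(labels(X_test, w_target) == (X_test @ w_ft > 0))
print(f"fine-tuned accuracy on target task: {acc:.3f}")
```

Because the pretrained weights already sit close to the target boundary, 20 target samples are enough to reach high accuracy; trained from scratch on those 20 samples alone, the fit is far shakier.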
Absolutely. AI is just multidimensional curve fitting in the end so garbage in just produces garbage out.
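In that curve-fitting spirit, the whole idea (and the garbage-in-garbage-out caveat) fits in a few lines: clean data recovers the generating law almost exactly, while heavily corrupted data degrades the recovered coefficients. A hypothetical quadratic law is used here for illustration:

```python
import numpy as np

x = np.linspace(-3, 3, 200)
true_coeffs = [2.0, 3.0, 1.0]          # the hidden "law": y = 2x^2 + 3x + 1
y_clean = np.polyval(true_coeffs, x)

# Fit a quadratic to clean data: the law is recovered almost exactly.
fit_clean = np.polyfit(x, y_clean, 2)
print(np.round(fit_clean, 6))          # ≈ [2, 3, 1]

# Garbage in, garbage out: heavy noise corrupts the recovered coefficients.
rng = np.random.default_rng(1)
y_noisy = y_clean + rng.normal(scale=20.0, size=x.shape)
fit_noisy = np.polyfit(x, y_noisy, 2)
print(np.round(fit_noisy, 3))
```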
What's amazing, and has already been duplicated in a way with Sora, is that I don't think you need code for every type of physics simulation. I have a physics simulator in my head right now and couldn't begin to tell you exactly how it works. I can imagine a red apple, I can change its color to blue, I can throw the apple against a wall and watch it bounce off or explode. And all I did was have certain hardware and a knowledge base of what I've observed over my lifetime. Sora is the same way. It doesn't have a rally-racing game engine built in, yet it can create and simulate what will happen to a shockingly accurate degree, just like my brain. Some physics won't have to be coded in; we can simply train it against the physical world.
Hasn’t the AI simply found more efficient linearisations by exhaustive exploration?
You’re correct to point out that the solutions might/probably won’t generalize beyond the training data - so might not be as useful where high precision is required, but terrific for making special FX for movies and education where time is money and the audience can’t tell the difference, and nobody’s life or job is on the line.
The mouse pointer icon in the thumbnail is a great way to gatekeep people who smoke too much weed. Like me. For like a minute straight lol.
Something to consider, perhaps is that mathematical equations are describing an idealized model universe. Since AI is using real world models for its simulation, it could eventually be more accurate as a description of the universe than an equation.
What a time to be alive
IIRC, there was a neural network that, after observing/processing videos of pendulums' motion, derived the basic laws of motion, i.e. F = ma. I too hope that there will be similar discoveries made by neural networks for other branches of physics.
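That flavor of law discovery can be caricatured in a few lines: generate (m, a, F) observations, offer the fitter a menu of candidate terms, and least squares puts essentially all the weight on the m·a term. (This is a toy sketch, not the system from the video; the candidate terms are chosen by hand.)

```python
import numpy as np

rng = np.random.default_rng(2)
m = rng.uniform(0.5, 5.0, size=500)    # masses (kg)
a = rng.uniform(-10, 10, size=500)     # accelerations (m/s^2)
F = m * a                              # the hidden law we want to rediscover

# Candidate terms the "discoverer" may combine linearly: m, a, and m*a.
features = np.column_stack([m, a, m * a])
coeffs, *_ = np.linalg.lstsq(features, F, rcond=None)
print(np.round(coeffs, 6))  # the weight lands on the m*a term: ≈ [0, 0, 1]
```

Real symbolic-regression systems search over the candidate terms too, rather than being handed them, which is where the hard part lives.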
That is pretty simple. Anything worthwhile is orders of magnitude harder.
Predicting a simulation result is great. This is similar to what humans can do when predicting everyday outcomes of some mechanical event (some item falling for example) and react to it before the event happened.
AI might do these things you dream of, but it will not look like GPT/LLMs. GPT is just a large, complex search engine based on existing data (lots of it). That's not to say it can't do much, as we can see results that are just amazing already. But real AI must be based on another method, and we don't know what intelligence is, so we can't tell what method is needed. Like consciousness: we can't make it because we can't define it.
I just asked Gemini and chatgpt to define intelligence (and consciousness). They give descriptions of some things they can do but not a definition. Who da guessed?
I agree with you Gary. I assumed that the presenter is aware of the difference between machine learning and actual AI but that he is obliged to say "AI" because that's the term that commercial developers use and they provide funding for projects.
Please, "AI" should be called "solutions based on large amounts of data." No thinking is taking place.
Can you elaborate a bit more on the graph at 11:40? Is there a relationship between graphs u, c, v, and p, or were you just showing how there are multiple graphs and elaborating on a few select moments of the c(t,x,y) graph?
I ask because it also seems applicable to financial domains
The only problem with this simulation approach is that whoever is involved in running the simulations may, for whatever reasons, risk ignoring the underlying causal mechanisms that these graphical methods reveal particularly if they are simply interested in practical applications. We may end up creating a lot of "technology" that no one understands except for the AIs. Are you sure that is a good idea? Or are you just running simulations for physical behaviors for which the physical laws are already known?
AI formulating a law of physics would clearly violate the Chinese room thought experiment.
Solve the Riemann hypothesis or the P versus NP problem and win $1,000,000 from the Clay Mathematics Institute. They are 2 of the 7 Millennium Prize Problems, widely considered "unsolvable".
AI will not do it. The human mind might.
I have been using ChatGPT-4 for understanding astrophysics, and it's about 95% on solving problems. Sometimes it overestimates things like a white dwarf star's mass before a supernova, slightly overstepping the Chandrasekhar limit of 1.4 solar masses. But more or less it's on point.
Well, that is not surprising, because there is plenty of training data and underlying theory. AI is just a fancy way of pattern matching and has limitations.
Damn. Poor Mr. Keating. He's just trying to do what he loves. With all that ** drama going down at UCLA. Right now.
Why not make a text-based description language for physics, like we have in electronics? For its symbolic elements, like Feynman diagrams, for example. Work from there to the top level. It's a lifetime project for a person, but very easy using AI. Then LLMs could code in this physics system language.
AI mastered chess and Go, so it should have no problems mastering physics and understanding it better than any human being. In chess it is now much stronger than the best human player, and these AIs taught themselves the game through trial and error in the same way we do. It has not cracked chess yet because of the sheer number of possible games; if quantum computers and AI work together, then it's possible that at some point in the future they would be able to say we know every possible game, and show us the best possible moves, which in theory should lead to a draw. I see something similar happening in physics, biology, etc.
Hi Brian -can you please slow down a bit and talk more slowly. This is particularly helpful when one is discussing complex topics and will help the viewers.
I adjust the playback speed for just this reason. It works perfectly for speeding up Chomsky to how he spoke 30 years ago.
Been watching physics maths and cosmology on UA-cam for a decade.
So glad to find your channel at last! Top quality!!
I hammered on ChatGPT and it seems to lack a spark; a human is still needed. So I got it writing and executing code, and was able to watch it and push it in a direction for a solution. Its best role is as a helper for thinking and testing ideas. It can write code you describe so quickly, but you do need to understand what it's doing to get the maximum out of it. TL;DR: if you can't code, that will be an issue in your interactions with current AI.
The potential you are tapping into is, by far, the key to advancements in all fields. I have often thought of the impact on medical research, much like what you are presenting here for fluid dynamics. Excellent presentation, and accolades for creating a teaching assistant. I wish I had had such a thing getting my engineering degree in the 80s, LOL.
The Theory of Mother Nature has already been solved, it doesn't require AI, just a simple/simplex nature of Mother nature. "A Theory of Natural Philosophy", written by Boscovich. This is the book Nikola Tesla had in his lap when one of his famous pictures was taken. Light is not an emission, it's a post attribute. Wave of what is the question..... a medium exists. Disrupting a medium is measured as a rate of induction, not a speed or velocity, again, no emission. Speed of light = 0. Mother Nature doesn't have a calculator.
To test if AI is up to the task, give it (only) the data the epicyclists used, and their geocentric model.
If the thing then produces the Copernican paradigm correction, it may be able to fix the paradigm that turns the quantum and the relative to woo ;)
Cornell is hilarious. Saying that black people didn’t have their own version of the KKK is nuts. I guess hes never heard of the black panthers :P
My dad was in the national guard in the 60s/70s and was literally being shot at by Panthers.
Crazy .
I hope AI will solve the headless chicken equation. The one that figures out how humans live together in freedom, love and prosperity without any wars.
Thank you very much for this great video! There are really great insights ahead of us.
The models shown are super impressive. So much could be optimized with them!
AI offers a lot of possibilities. Thank you very much for your work, and best wishes for much success!
Our pleasure!
If they discover a geometry to this science with Google's AlphaGeometry, then....
Good! This will also eliminate the research bias and science politics of human scientists that slow down modern science's evolution.
No.
Yes ! AI will teach us a great deal about being sincere, in other words, about not contradicting ourselves for ideological reasons.
c = (cosmological natural length/cosmological natural times) is no blunder Doc Keating. It's Einsteinian physiks.
I never suspected he might be an AI but… 🤔
Bound to catch us all out at some stage. Videos at first then perhaps robots if humanity lasts that long (AI might extinguish us)
There is a glitch at 3:56. :( It seems to be a very short duration, but there is still some audio that is lost.
It is my understanding that a neural network is mathematically equivalent to a fitted function; for sure with linear equations.
Is there a way to derive an equation from a trained neural network, the reverse of training a network to fit an equation?
And if so, does applying such a process to any of the networks you just described come up with something similar to the Navier-Stokes equations?
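One common answer to that question is distillation into a symbolic form: sample the trained network's input→output map, then fit an interpretable expression (here a polynomial) to it. In the sketch below, a tiny fixed-weight "network" stands in for a trained model; whether the distilled formula from a real fluid network would resemble Navier-Stokes is exactly the open question.

```python
import numpy as np

def tiny_net(x):
    """Stand-in for a trained network: fixed weights, tanh activations."""
    h1 = np.tanh(1.5 * x + 0.2)
    h2 = np.tanh(-0.7 * x + 0.1)
    return 0.8 * h1 + 0.3 * h2

# Probe the black box on a grid, then distill it into a degree-9 polynomial.
x = np.linspace(-1, 1, 400)
poly = np.polyfit(x, tiny_net(x), 9)
err = np.max(np.abs(np.polyval(poly, x) - tiny_net(x)))
print(f"max distillation error on [-1, 1]: {err:.2e}")
```

The catch is that the recovered expression is only trustworthy inside the sampled range; extrapolating a distilled polynomial outside it diverges quickly.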
AI shapes our world now?
Everything is compressible. The name of this equation is misleading, and thinking of these liquids in this way will hold us back.
will AI create new physics by merely simulating it? (simulation theory)
It is clearly already useful for CGI, but is it feasible to actually prove convergence? In other words, to give error bounds? Beyond saying the graphics look amazingly similar....Mathematically that is the natural question!
For a closed-loop laser between cathode and anode energy discharge, can it identify the material and model of a perfect chamber for laminar flow of NF3 mixed with CO2, without boundary layers or fractional turbulent eddies?
Unlikely.
6:15 The answer to Einstein's question about whether or not an observer would experience a gravitational field in free fall is no, and that led to the Einstein equivalence principle?
I wonder how Einstein came up with “No experiment can be performed that could distinguish between a uniform gravitational field and an equivalent uniform acceleration.” by asking himself "will a falling person experience gravity?" and deciding "no".
Is this software that you can download (the fluid simulation)?
It is funny how reality's generator of physical matter can now have a simulation of the great simulator lol
Running elements through various critical extreme states and different lattice structures is definitely something interesting.
Even just streamlining the cost of deciding what's worthy of actual testing is a huge benefit in the search for exotic materials.
Even as a tool for we the people to build agents that simulate industry and market efficiency and functionality, it will be a great aid in deciding how to build out future infrastructure to accommodate.
There are a lot of tough decisions to be made, and we need better tools before we tackle a lot of these obstacles.
I hope I live long enough for AI to do the most amazing things like cure cancer, find the connection between gravity and the other forces, etc. I also hope I don’t live long enough to experience the Terminator takeover of AI. 😎
All those things you talk about can be done by humans with enough time and technology. No AI is necessary.
@@raul36 if AI is created by humans to do it, then I guess humans are just using tools to do it faster.
Backwards discretization of time series partial differential equations...?
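If I read that question right, then yes: simulate a PDE forward, record the time series, then regress the time differences against the spatial derivatives to recover the coefficient. A minimal 1-D heat-equation sketch (the diffusion coefficient and grid are chosen arbitrarily for illustration):

```python
import numpy as np

D, dt, dx = 0.1, 0.001, 0.1           # arbitrary diffusion coeff and grid
x = np.arange(0, 1, dx)
u = np.sin(np.pi * x)                 # initial temperature profile

def laplacian(u):
    """Second spatial difference; the two end points are held fixed."""
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return lap

# Forward-Euler simulation of u_t = D * u_xx, recording every frame.
frames = [u]
for _ in range(200):
    u = u + dt * D * laplacian(u)
    frames.append(u)

# "Backwards discretization": recover D from the recorded time series by
# regressing du/dt against the laplacian of u.
dudt = np.concatenate([(b - a) / dt for a, b in zip(frames, frames[1:])])
lap = np.concatenate([laplacian(f) for f in frames[:-1]])
D_est = float(lap @ dudt / (lap @ lap))
print(f"recovered D = {D_est:.6f}")   # matches the true 0.1 almost exactly
```

With real (noisy, unevenly sampled) data the regression is much less clean, which is where the hard research lives.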
It's too hard - I play bass guitar now...
13:42 Speaking of "Chimpanzees" here's a Baboon for your educational gratification.
I predict at the minimum the internet will go out and a new one will have to be built because it needs to be many trillions of times more secure.
Rewrite the Internet in Rust? 😊
Brave of you to enter these waters. The question of whether AI can discover new laws of physics is easily answered: use (say) only the knowledge of physics available at some point in our history and see if AI can discover later laws. E.g., only the laws of physics and experimental results before Newton, or Maxwell, or Einstein, or Bohr; at each stage, can the AI push the limits of knowledge to the next level? A good PhD topic, cross-disciplinary. Maybe something I might try .. 😊
Even a model that could output the smallest novel processing/"ideas" would be a game changer in itself, even if it's something we already have proofs/definitions/laws of, or otherwise know. It would have to be a scenario where it was trained only with all the tools necessary to derive the answer/correct output, but not the answer itself.
Unfortunately, we're not even at the doorstep yet, AFAIK. Your video is quite optimistic 😁
Maybe AI can show cosmologists and climatologists how misdirected they have been in their assumptions.
AI is, as we have already seen, as biased as the people who input the training material
Wrong. Assumptions are the bare minimum and AI has to start from the same assumptions. It is not magic as you seem to think.
@@rogerphelps9939 I think you miss my point. It’s sarcasm.
@@drscott1 It was a bit too subtle for me but thank you.
Data exponentially increases, AI vacuums it up. Self-perpetuating engine for scientific discovery. What we are witnessing is the creation of vastly more intelligent entities. Scary for sure, but awesome to witness.
Impressive we are learning along with AI
You're a pleb, and you were clickbaited
Well, you'd hope so… and AI isn't sentient, so I'm not sure it really learns, other than machine learning. Does it know the real world as opposed to simulation? Can it tell the difference on the level of consciousness (I'd suggest not)?
Ironic, then, that the average human is getting progressively dumber
Politically correct is a scary term more than any context in which AI could be discussed.
Every time you say "AI", just replace the word with "algorithm". Every "AI" is different, but they are all computer algorithms.
You won't say your calculator is powered by AI, when technically it is, so why say that "AI optimises quantum circuits" ... Phrasing it that way is appealing to some mystical "intelligence", when really you're talking about a bespoke algorithm for solving that specific problem.
A large percentage of the last generation of 20th-century physicists' careers were wasted, and all they got from it was some interesting math. Of course I'm talking about string theory; some of the next generation buying into this field and studying it now will most likely have wasted just as much time. AI could help prove out correct theories in 10 to 15 years, once it is developed enough to assist in modeling theories on the fly in mere hours or days, rather than humans getting a hunch and spending 50+ years and a thousand-plus students' PhDs on it before it's proven wrong (like how string theory is looking, in my opinion). P.S. I may be biased LOL.
As long as there is quantum computing there will always be a maybe: an undetermined output, not a perfect calculation.
Just one simple question is required for the Turing test to determine AI or human.
Correct: AI. Incorrect: human.
And if it's actually incorrect and an AI, then it simultaneously defeats the sole purpose of wanting AI, because it would have to lie to fool you into the falsity of being human.
HAL 9000 comes to mind.
Give AI the ability to try to solve grand unification.
Give it the ability to explore quantum gravity.
It will get nowhere.
@rogerphelps9939 why such a "heavy" heart?
The Chancellor's distinguished Professor. Humble he isn't.
@oididdidi Let's not insult someone when we don't understand what they said.
He is not boasting. He described the name of the "chair" upon which he sits--the role behind his academic position.
Gurus are usually not narcissists.
Brian Keating is a great science communicator, but like many intelligent laymen he sees too much in what is just a statistical pattern-recognition device or system. For practitioners it is slightly hilarious watching all that starry-eyed, buoyant expectation of Great Things To Come, and also ridiculously overboard. haha
I don't know about you, but I am in the camp that believes it will eventually be an A.I. that comes up with the ever-elusive "theory of everything."
Typically the nonlinear terms are ignored even for numeric methods, because the complexity they introduce often leads to numerical instability. Did the AI solve the NS equations with the nonlinear terms included? How did you create the data set if most methods can't solve the nonlinear NS equations? Solutions to the Navier-Stokes equations also typically assume the so-called "no-slip" boundary. Was this BC also enforced? How did you prove that the solutions produced by the neural network are in fact solutions to the NS equations? NS requires a number of assumptions that typically don't apply to non-Newtonian fluids. Did you apply the NN to these more complex systems, like non-Newtonian fluids?
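One standard way to answer the verification question above (not necessarily what was done in the video) is to plug a candidate velocity/pressure field into the PDE and measure the residual. An exact solution like the 2D Taylor-Green vortex makes the check concrete; the same residual test can be applied to a neural network's output. A minimal sketch in Python/NumPy, with grid sizes and tolerances chosen purely for illustration:

```python
import numpy as np

# Residual check for the 2D Taylor-Green vortex, an exact solution of the
# incompressible Navier-Stokes equations on a periodic domain. A learned
# field can be verified the same way: evaluate the PDE residual on a grid.

nu = 0.1            # kinematic viscosity (illustrative value)
t, dt = 0.5, 1e-4   # evaluation time; small step for the time derivative
n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
h = x[1] - x[0]

def fields(t):
    """Exact Taylor-Green velocity and pressure at time t."""
    decay = np.exp(-2 * nu * t)
    u = -np.cos(X) * np.sin(Y) * decay
    v = np.sin(X) * np.cos(Y) * decay
    p = -0.25 * (np.cos(2 * X) + np.cos(2 * Y)) * decay**2
    return u, v, p

# Central finite differences on the periodic grid.
def ddx(f): return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * h)
def ddy(f): return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * h)
def lap(f):
    return (np.roll(f, -1, 0) + np.roll(f, 1, 0)
            + np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4 * f) / h**2

u0, v0, p0 = fields(t)
u1, v1, _ = fields(t + dt)
u_t = (u1 - u0) / dt

# x-momentum residual: u_t + u u_x + v u_y + p_x - nu * (u_xx + u_yy)
res = u_t + u0 * ddx(u0) + v0 * ddy(u0) + ddx(p0) - nu * lap(u0)
div = ddx(u0) + ddy(v0)  # incompressibility check

print(np.max(np.abs(res)) < 1e-2, np.max(np.abs(div)) < 1e-10)
```

For a true solution both residuals vanish up to discretization error; a network output that only "looks right" would show a large residual here.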
Syntax is dual to semantics -- languages or communication.
Large language models are therefore dual!
Categories (form, syntax, objects) are dual to sets (substance, semantics, subjects) -- Category theory is dual.
If mathematics is a language then it is dual.
Concepts are dual to percepts -- the mind duality of Immanuel Kant.
Mathematicians create new concepts or ideas all the time from their perceptions, observations, measurements (intuitions) -- a syntropic process, teleological.
Cause is dual to effect -- causality.
Effect is dual to cause -- retro-causality.
Perceptions or effects (measurements) create causes (concepts) in your mind -- retro-causality -- a syntropic process!
Large language models are using duality to create reality.
"Always two there are" -- Yoda.
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics!
@@hyperduality2838 Your comment is gibberish. I can't tell if that is intentional.
@@danielkanewske8473 The neuroscientist Karl Friston talks about causality loops, he has some videos on UA-cam you can watch.
The external world of matter causes effects in your mind which you perceive -- causality.
Your mind (causes) can effect the outside world -- causality.
Your perceptions (effects) are becoming causes -- retro-causality or syntropy.
Perceptions (effects) are becoming causes in your mind -- causality loops.
Concepts are dual to percepts -- the mind duality of Immanuel Kant.
The thinking process converts measurements or perceptions into conceptions or ideas -- a syntropic process!
Your mind is therefore creating or synthesizing reality -- the syntropic thesis!
You can watch these videos about duality in physics, watch at 11 minutes:-
ua-cam.com/video/DoCYY9sa2kU/v-deo.html
And this at 1 hour 4 minutes:-
ua-cam.com/video/UjDxk9ZnYJQ/v-deo.html
Teleological physics (syntropy) is dual to non teleological physics (entropy).
Your mind is syntropic as you make predictions to track targets and goals -- teleological.
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics!
From a converging, convex or syntropic perspective everything looks divergent, concave or entropic -- the 2nd law of thermodynamics!
Convex is dual to concave -- mirrors or lenses.
My syntropy is your entropy and your syntropy is my entropy -- duality.
Mind (syntropy) is dual to matter (entropy) -- Descartes or Plato's divided line.
@@hyperduality2838 reported
@@wesexpress3343 You can report this:-
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics!
The conservation of duality (energy) will be known as the 5th law of thermodynamics -- Generalized Duality.
Energy is dual to mass -- Einstein.
Dark energy is dual to dark matter -- singularities are dual.
Positive curvature singularities are dual to negative curvature singularities -- Riemann geometry is dual.
Space is dual to time -- Einstein.
Gravitation is equivalent or dual (isomorphic) to acceleration -- Einstein's happiest thought, the principle of equivalence (duality).
Duality creates reality!
Are you sure you want human scientists to be made obsolete by AI scientists? What would you do?
So far AI can barely answer the very unsophisticated questions I ask without me repeatedly pointing out its errors, followed by it apologizing and then giving me another incorrect answer to the same question. At least it is polite, but I am learning not to trust it.
❤ There must be honesty. Where is your nobility? Where is the honor? Where is the support? Where is the scientific interest and curiosity for new experiences? BIG ERROR in measuring the Universe, black holes, dark energy,... Let me judge all this by the result of a direct experiment, gentlemen of physics
Let's do the Michelson-Morley experiment on a school bus and determine the speed in a straight line; this is exactly the experiment Einstein dreamed of. Perhaps we will see the postulates: "Light is an ordered vibration of gravitational quanta, and dominant gravitational fields control the speed of light in a vacuum." There is a proposal for the joint invention of a HYBRID gyroscope made from two non-circular coils of optical fiber, where the light in each arm travels 16,000 meters, without exceeding dimensions of 0.4/0.4/0.4 meters and a mass of 4.1 kg.
What's the difference between a distinguished proofesor and a regular one? Are you better and higher?
Both
@@DrBrianKeating Is distinguished higher than a PhD?
@@frazerhainsworth08 Totally different thing. I have a PhD and I was a Professor; now I have the title Chancellor's Distinguished Professor of Physics at UC San Diego.
Proofesor? That's the one who's proving the professor's theories.
@@DrBrianKeating congratulations. do you go by Professor or Chancellor?
Yes, that's a thing. The only thing I'm surprised about is how long it's taking for people to talk about that. I mean, they're pattern-recognition machines. There are ways to exploit that, to make them extend internal data following the highest-probability candidates. And strangely, I don't see papers doing that. I've got plenty on this, if that's of interest.
7:51 ❤ A quantum computer could use these formulas to simulate blood and intracellular processes. And if we want to know all the hypothetical numerical values of cells at different periods of the organism's formation and remodeling by running the simulation, we can consider several logical options: the first is to start from x and solve it like a math problem, juggling all the data of modern AI; the second way already borders on science fiction: having obtained, say, a 100% simulation of individual organs or even a whole organism, the AI with a quantum computer would analyze these processes at Planck distances, and to get data from the past, the simulation vector would carry a minus sign 😅❤
Will AI be able to find through data what Einstein found through thought experiments?
A new frontier of science and discovery is awakening.
Brian, I think I saw the flaw in your golf swing, and why you're having trouble flighting the ball properly. Even from a sitting position I could detect it quite easily.
Should I send you an email? I don't want to get into a golf swing analysis here. But I'll have you finding the sweet spot and puring it through the impact zone in no time flat. Just a few simple adjustments.
Great video. Very VERY thought provoking. Holy Cow. Kinda mind bending possibilities here.
I taught it everything it knows. 💯
Is it possible to find new isotopes using AI?
It depends on when you let the AI start and how far down it can go. If you start from what we assume to be the reality of physics now, you may just go down the 'rabbit hole' that math provides.
Clickbait title and thumbnail. Blocked.
What do you mean? Nothing clickbait about it.
@@DrBrianKeating Yes it is: you make claims in your thumbnail, then spend the entire video asking questions.
If you insist it's not clickbait, then it's just a bad format, like everyone else's.
Blocked, unsubscribed.
I think @andyc8707 did not watch the video. However, if the time-stamp chapters in the video referred back to the title, I think the message would have come across more clearly. I enjoyed it! 13:23
@@forestfield5597 Thanks for confirming my point: clickbait, filler, and no real content. It's a shitty format.
Maybe watch it again and see what you think. I often find a second viewing can give an entirely different understanding.
I mean, they're only going to get better and better. Why not have our future overlords take a quick skim over the grand total of our understanding and see if they find anything we've missed?
Imagine a super-quantum computer + AI.
I wish I was part of it. I want to merge with AI now!
A simulation of our simulation...
Pls do more podcasts with Abhijit chavda ❤❤❤
AI is just accelerated statistical analysis and LLMs are just really fast dorky parrots
Same as our brains.
Now I would challenge any AI to tour through every scientific genre. AI is only a tool for a creator.
Just let AI read physics books and it will learn.
I’m biased by the realm of biology, but computational power and algorithms only get you so far. One needs to start with a sufficient amount of high-quality, unbiased data. There has to be enough to drown out the stochastic nature of data and sampling, as well as subgroup biases. (In biology, this is very often missing, so machine learning becomes irrelevant.) Regarding LLMs, the scientific foundation fed into them also needs to be of sufficiently high quality, which again is lacking (the so-called reproducibility crisis, which is really systemic). Typical research practices of using too few samples and not running replication studies doom the literature from an LLM perspective.
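The sample-size point above can be made concrete with a quick simulation (all numbers are illustrative, not from any real study): the spread of an estimated effect shrinks like 1/√n, so small studies are dominated by chance.

```python
import numpy as np

# Toy illustration: simulate many independent "studies", each estimating
# a true effect as the mean of n noisy measurements, and compare how much
# the estimates scatter for a small vs. a large sample size.
rng = np.random.default_rng(0)
true_effect = 0.3  # hypothetical effect size, in units of the noise SD

spread = {}
for n in (10, 1000):
    estimates = [rng.normal(true_effect, 1.0, n).mean() for _ in range(2000)]
    spread[n] = float(np.std(estimates))

print(spread)  # spread[10] is roughly 10x spread[1000], per 1/sqrt(n)
```

With n = 10 the study-to-study scatter (about 0.32) is larger than the effect itself, which is exactly the situation the comment describes.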
I’m still trying to figure out what the difference is between AI and the output of computer programs.
The main function of AI will be the Sentience Singularity...
Something like this: imagine a world where we crack governance by full-blown citizen assembly, every voice an equal say.
Fast forward in time... imagine talking to whales via AI... fast forward in time... imagine talking to bacteria through AI...
I think there could, and perhaps should, come a time when all sentient life is connected: 'our will be done'.
For P Merriam and M A Z Habeeb.
(Getting ready to get cancelled for referencing punk rock lyrics in 1, 2, NOW!): Dead Kennedys - "California! Über Alles, California, Chabaduda Eeeeee YAAA!"
There seems to be a new, more contagious variant of the so-called "Natural Stupidity" virus; just look at all those Flat Earther Constipation Theorists showing up all over the place...
What about subspace communication??
If the AI comes up with a method that is faster than current simulations, how do you translate what the AI model developed into something discernible?
That whole "black box" problem, I don't know how to deal with that.