Try out my new AI app here! 👉🏻 app.mentourpilot.com You can also contact Marco directly if you have a serious idea you would like his help developing: cvt.ai/mentour
About AI, this is not an "if" question, just a "when", and that day is coming fast. I would say no more than twenty years. That is, for commercial aviation, because autonomous UAMs will cover the sky way before that. Those protocols have already been in development for years.
@@forgotten_world I think there is still an "if" element, which centres around whether AI is the best solution for automated piloting. Of course we are headed towards more automation in commercial aviation, but AI isn't necessarily the best solution for pilot replacement in an industry with the framework that aviation has. We may be better off expanding automation technologies that work on defined processes, in the same way autopilot and autoland do. There is no reason why we cannot have an aircraft automated from the ramp of its departure airport to the ramp of the destination airport and never use an AI system. There is a lot of focus on AI at the moment, and it is a fantastic field that will doubtless become more prominent in society. But we need to use the right tool for each job, and automation without AI is probably far more suited to flying aircraft, at the very least until we can get AI to spontaneously consider scenarios, and even then AI probably isn't going to be the best solution. Where AI is perhaps going to be better suited is in the role of traffic control. I'd wager you'll see an AI-driven ATC long before you see an AI pilot (in commercial use, rather than in research, development, and testing programmes), if we ever see AI used for commercial piloting at all.
The idea you suggested for the plane to offer prompts when the pilot is doing something strange and pushing the plane towards an unsafe envelope doesn't require AI. As a computer programmer, I'd say this would be an expansion of the scope of the existing hard-coded envelope protection system with a second layer of soft mitigations (soft in the sense that they are recommendations, versus the hard mitigations where the aircraft physically intervenes). For example, say the pilot is experiencing a somatogravic illusion and forcing the nose up, failing to notice the plane is actually slowing quickly. Instead of waiting until the hard envelope protection system kicks in, a red REDUCE CLIMB message could show up on the ECAM, perhaps accompanied by an aural "REDUCE CLIMB" alert. That wording also has the added bonus that it calls out the immediate problem the pilots are likely missing. As a pilot, even if I doubted it, it would call my attention to the ADI, altimeter, VSI, and ASI, all of which would confirm that yes, you really are dangerously nose high, climbing, and getting dangerously slow. After all, if I didn't believe I was climbing, the first three would disabuse me of that notion, and the latter would indicate this is becoming a serious issue. So that little prompt could help break the pilot out of the confusion by giving an instruction that also calls attention to the right instruments that, for whatever reason, they are just not seeing at this critical moment.
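To make the idea concrete, here is a minimal sketch of what such a soft-mitigation layer might look like as plain hard-coded logic, no AI involved. All thresholds and alert names here are invented for illustration and are not taken from any real aircraft system.

```python
# Hypothetical soft-mitigation check. Thresholds and alert strings are
# made up for illustration; a real system would use certified limits.

def soft_envelope_alerts(pitch_deg, airspeed_kts, airspeed_trend_kts_per_s):
    """Return advisory (soft) alerts before hard envelope protection engages."""
    alerts = []
    # Nose high while speed is decaying fast: the somatogravic-illusion trap
    if pitch_deg > 15 and airspeed_trend_kts_per_s < -2:
        alerts.append("REDUCE CLIMB")
    # Approaching a (made-up) minimum manoeuvring speed while still decelerating
    if airspeed_kts < 180 and airspeed_trend_kts_per_s < 0:
        alerts.append("SPEED LOW")
    return alerts

# 22 degrees nose up, 175 kts and bleeding speed at 4 kts/s:
print(soft_envelope_alerts(pitch_deg=22, airspeed_kts=175,
                           airspeed_trend_kts_per_s=-4))
```

The point of the sketch is that the triggering conditions are fixed rules, so the crew can know exactly when and why a prompt will appear.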
IKR!!!! Most posts on LinkedIn have sounded more doom-laden, as if AI is already so good it can take over everything, or as if AI will revolutionize work the way MS Office did. But I think there might be a difference in which sector they are discussing, as the managerial people on LinkedIn tend to be more vocal. The guest here is presenting AI for engineering/pilots, which is a different use case than generating reports or making presentations.
As far as AI goes for informing a nervous flyer about turbulence in advance, all a nervous flyer really needs is false reassurance from someone they trust. For my first flight on a commercial airline as a young teenager, my mother told me that it would be lots of fun - sort of like riding a roller coaster at times, with ups and downs. The flight turned out to be what the other, more experienced passengers later told me was a nightmare with lots and lots of turbulence. There I was, smiling and enjoying myself the entire time, because it matched my expectations.
One of the reasons I always flew United was to listen to channel 9. Ride reports were extremely valuable insight on what to expect and the professionalism of the crews and ATC were incredibly reassuring.
Through the 90's I did quite a bit of flying on commercial airlines... AND from time to time we'd hit "pretty good" turbulence... I knew there were likely timid, nervous, or outright scared flyers in the cabin, so every time there was a "proper" bit of roller-coaster or similar effect, I always made a point to give a good "WHEEE!" and laugh... in two or three of them, there'd usually be a few others to join me, and more often than not, even some of the crew... I doubt anyone knew I was even looking around, but I saw more than a few shoulders start to slouch, faces slack from distentions and clenched jaws, and even a couple parents pointing at us "lunatics" and speaking to youngsters... I like to hope it was a boost in morale to a few of the more anxious among us... Obviously, I'd temper that with the sensible judgment not to be shouting when the lights are out and everyone's even trying to get some shut-eye or rest... Even (especially?) the anxiety-prone don't need a maniac (or several) shouting anything when the plane starts buffeting and bouncing... haha... ;o)
I find it eerie when the flight is very smooth. I actually enjoy the subtle turbulence, along with some more bumpy turbulence at times. I don't know if the false reassurance is a good idea, though. If the person ends up panicking, it could create trust issues.
I think having AI as a second set of eyes for things like checking whether pitot tubes are covered would be useful for pilots, or alerting pilots if the readings of sensors don't match.
There is really no AI necessary to find out whether values delivered from redundant sensors are inconsistent. But informing the pilots that information is inconsistent leaves the pilots with the problem of which one they should trust and which not - which causes a lot of stress and confusion. I think something I call "sensor fusion" would help a lot in such situations. For example, one altimeter saying that you are flying at 20,000 feet while the other says you are only at 1,500 feet is a bad thing when flying over the Pacific at night without any visual references. But including data from other sensors (like outside temperature or GPS) would provide a lot of hints about which of the two altimeters is the one delivering wrong values. If the outside temperature is -10°C and GPS tells you that you are flying at 2,300 feet, then it is nearly 100% certain that the altimeter showing 20,000 feet is wrong and you should trust the other altimeter. Or if one pitot sensor measures a dramatic change in airspeed (for example a drop from 800 km/h to less than 400 km/h in less than one second) while the other one is delivering "smoother" values, then the probability is high that the first pitot was blocked by ice or something else, and in doubt it is better to trust the other pitot sensor. Clever guys could take all the time they need to develop algorithms that use data from the different sensors to calculate how trustworthy each individual sensor is, and inform the pilots which sensors they should trust in case they do not have any better idea. For humans it is very difficult to perform such considerations in a stressful situation with limited time to make a decision, but software could do such (and even much more complex) analysis in a fraction of a second - without any AI needed...
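The altimeter example above can be sketched as a simple cross-check, again with no AI involved. This is a toy illustration only: the scoring scheme, tolerances, and the use of the standard-atmosphere lapse rate are my own simplifying assumptions, not how any certified system works.

```python
# Toy "sensor fusion" sketch: score each altimeter against independent
# sources (GPS altitude, outside air temperature via the ISA lapse rate).
# All tolerances are invented for illustration.

def isa_temp_c(altitude_ft):
    """Approximate ISA outside air temperature at a given altitude."""
    return 15.0 - 1.98 * (altitude_ft / 1000.0)

def pick_altimeter(alt1_ft, alt2_ft, gps_alt_ft, oat_c):
    """Return 1 or 2: which altimeter agrees better with independent data."""
    def score(alt_ft):
        s = 0
        if abs(alt_ft - gps_alt_ft) < 1000:       # agrees with GPS altitude
            s += 1
        if abs(isa_temp_c(alt_ft) - oat_c) < 10:  # plausible given the OAT
            s += 1
        return s
    return 1 if score(alt1_ft) >= score(alt2_ft) else 2

# One altimeter claims 20,000 ft, the other 1,500 ft;
# GPS says 2,300 ft and the OAT is -10°C (a cold, non-standard day):
print(pick_altimeter(20000, 1500, gps_alt_ft=2300, oat_c=-10))
```

Even on a non-standard day where the temperature check is inconclusive, the GPS cross-check alone is enough to flag the 20,000 ft reading as the outlier.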
@@muenstercheese Most of the things AI does are possible via other means. The whole point is that Generalized AI (or something close to it, like ChatGPT) makes those things orders of magnitude easier.
Even if accidents were “drastically” reduced (which would be incredibly hard to do, given that aviation is extremely safety conscious now), all it’ll take is one crash, and people will be screaming for pilots to be back in the seats. The MAX crashes taught us that (yes, I know there were pilots, but they couldn’t override the automation).
That might happen - or not. Assume that 25% of airlines replace their pilots with software, and after 2 years the statistics show they have 90% fewer accidents than airlines flying with human pilots. Then even a single accident may not make passengers scream for pilots, as long as the "total picture" clearly shows that it is safer to rely on a "software pilot". The MAX crashes were (in my opinion) caused by human errors - not errors by the human pilots, but errors by the guys at Boeing building software relying on a single angle-of-attack sensor, even though they were aware that the sensor could fail. The minimum requirement would have been to compare the input of multiple AOA sensors and, if they are not consistent, to turn off MCAS - and of course inform the pilots how they should react in such a case.
@@endefael There were two crashes related to MCAS. In the first one, the cockpit crew was not informed about MCAS and therefore could not override it. In the second one, the crew did know about MCAS and how to "override" it by manually turning the trim wheels - but this was physically hard for the pilots to do and also took a lot of time, and in the second accident it took too much time.
@@elkeospert9188 I am afraid you do not completely understand what MCAS is capable of, what that failure looks like, what all the contributing factors are, and the huge role training played in both accidents. Just to give you a hint: in neither of the two crashes did the crew perform all 4 or 5 basic memory items they were supposed to. I highly encourage you to study them more deeply, and you will see that the automation did not have the capability of overriding the pilots by itself. It just took them too long to take the appropriate actions in due time - if taken at all. Not judging them as individuals, but the fact is that they were exposed to it without being fully prepared. Not saying that the aircraft couldn't have been improved either, which any human project, including the 737, can be. But it was never as simple as the existence or failure of MCAS: no accident happens because of an isolated fact.
You could absolutely use narrow AI to fly a plane if you input enough instructions. The problem, as with driverless cars, is that it could make an inexplicable decision that would kill everyone. A plane can already take off and land on autopilot as it is. You just don't want it to make decisions in dangerous situations, which is why they also eventually banned driverless cars.
As someone who works with AI on a daily basis, this basically hits the nail on the head. AI did not replace anyone in my team, instead it took over the mundane repetitive work which is the longest part of a project, freeing up my team to focus on the final finishing portions of the deliverable. AI does 80-85% of the work in less than half the time, making my team more efficient and allowing us to take on more projects with the same staff. We refer to it as AI/Human hybrid where the AI is more of a partner to a human rather than a replacement.
"allowing us to take on more projects with the same staff" So there are fewer projects for other people to work on. If you do more work with the same people then couldn't you do the same work (not more) with less people?
@@avisparc I was going to say the same thing. By reducing the number of man-hours needed for a given project, it has indirectly decreased employment. However, it also means significantly reduced costs, which might allow more demand. That will make the task of deciphering the changes in employment due to AI a bit more complex.
Except it doesn't explain AI; it explains what a semantic database and a Generative Pre-trained Transformer (the GPT in ChatGPT) are. What we currently have is still not even A.I.; it's just the interface to a semantic database.
Indeed, I've been working in AI for years too, and I really appreciated Marco's intervention. There is a lot of buzz around AI, and it's pleasant to see people like Marco clearly dotting the i's.
As for AI taking over control of the plane, there is ONE situation I can think of. If you remember the disaster where the plane flew in a straight line for a while before crashing near Greece with both pilots out: introduce something like the check in trains, where the pilots have to confirm "yes, I am still awake and paying attention". If a pilot misses, for instance, two in a row, it could make sense for the "AI" pilot to bring the plane to a low altitude and broadcast an automated mayday, and then, if there is still no pilot response, it could make sense to try and land the plane.
You do not need "AI" specifically for a last-ditch solution like that. Garmin already offers a solution called Autoland for GA aircraft where, in a case of pilot incapacitation, a passenger can press the emergency button and the plane will land by itself at the nearest suitable airport while making the proper mayday calls to ATC. It even displays instructions to the passengers if they need to talk to ATC.
@@flyfloat That's fine if there is someone conscious to press the button. With Helios 522 and the Payne Stewart Learjet crash, no one was conscious to do that. They are not the only examples either. As a last resort, AI could have been very helpful to initiate autoland in these 100% fatal accidents.
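The escalation logic discussed in this thread is simple enough to express as a plain state machine, loosely inspired by train dead-man switches and the Garmin Autoland concept. The stage names and the "two missed checks" threshold are invented for illustration; a real system would need certified timing and many more safeguards.

```python
# Hypothetical pilot-attention monitor. Stages and thresholds are made up
# to illustrate the escalation idea from the comments above.

class AttentionMonitor:
    def __init__(self, max_missed=2):
        self.max_missed = max_missed
        self.missed = 0
        self.stage = "NORMAL"

    def prompt(self, pilot_responded):
        """Issue one attention check and return the resulting stage."""
        if pilot_responded:
            self.missed = 0
            self.stage = "NORMAL"
        else:
            self.missed += 1
            if self.missed == self.max_missed:
                # Descend to a breathable altitude, broadcast automated mayday
                self.stage = "DESCEND_AND_MAYDAY"
            elif self.missed > self.max_missed:
                # Still no response: attempt an automated landing
                self.stage = "AUTOLAND"
        return self.stage

m = AttentionMonitor()
print(m.prompt(True))    # pilot responds: stays NORMAL
print(m.prompt(False))   # one missed check: still NORMAL
print(m.prompt(False))   # second miss: DESCEND_AND_MAYDAY
print(m.prompt(False))   # still nothing: AUTOLAND
```

Any single pilot response resets the counter, so routine flying never trips the escalation.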
I love this channel! So excited when I see a new video in my feed. I’ve recently been on my 3rd binge rewatching all of Mentour Pilot’s videos the past few weeks. Thanks so much for the fascinating entertainment and information!
I think this is completely wrong. If you approach the issue from the standpoint of what information is available to support AI, the clear answer is yes, even in Sully's situation. The autopilot already flies the airplane. Altitude and heading can be changed by turning knobs. The computer can turn those just as trim is changed. Add Jeppesen charts, digital radar input, and ground controllers replaced by AI to "talk" to the AI pilot, and all the information for safe flight is there. For the Hudson River, again, consider the information available. Engines failing, altitude, heading, and distance back to LaGuardia are all known. AI would know it had to land, but where? A new problem for AI, but again, consider the information available. To me, searching for clear fields, beaches, etc. is a common enough problem to be part of AI from the start. Google Maps has that information now. My background is computer tech. Final thought: I served in the Air Force during the war in Vietnam. The Hughes MA-1 fire control system in the F-106 could almost fly a complete shootdown mission, and that was 70 years ago. AI replacing pilots and ground controllers is a lot closer than you think, and I'm not happy about that.
Thank you, thank you, thank you, Mentour, for bringing on someone like Marco who gets it. I studied AI at university in the '80s and I am very much a Turing purist. I have worked with systems that I would categorise as "advanced analytics" and "machine learning", and have had rows with people who said that Turing's opinions are out of date after I accused them of re-badging their tired old software as AI to charge more money (which they are doing). Back on topic: who is most scared of the current form of AI? The mainstream media are. Why? Because AI will start to present unbiased news feeds and put them out of work. The vast majority of the negative press is being generated by the press.
Don't worry, in the media you can make AI politically correct too. It's a matter of setting the rules it has to follow. It's not going to get better, but worse and far more refined. 😉
That was the best and most honest AI discussion I have seen to date. I get so fed up with people banging on about how AI will damage the job market - they often get upset when I point out AI cannot sweep streets or fill shelves in Tesco, so they will always have a job - but seriously, AI is a database algorithm and nothing more. Marco explains that so well, and from an inside perspective; people need to take note. I will be sharing this video on Facebook and LinkedIn because this discussion needs to be heard by millions so they understand what AI can do, but more importantly, what AI cannot do. Thanks for a brilliant interview.
Having the opportunity to participate in this interview with @MentourNow was an absolute honour and pleasure. I am both impressed and grateful for the amazing comments, perspectives, questions, and debates I see here in the comments. They truly are a testament to the quality of this community. I am especially thankful to those who have provided feedback and pointed out areas that I intentionally did not elaborate on during the interview, as well as suggesting improvements to my phrasing. I agree with those highlighting that I shared a somewhat oversimplified version of the subject matter, as I briefly mentioned at the start of the video. This was done intentionally to make the conversation accessible to as broad an audience as possible. However, for those wishing to delve into the nitty-gritty details, I would be more than happy to elaborate in a thread following this message. I will be tagging the most intriguing comments, but everyone is free to join in.
Elevator Operator: From the point of view that I shared in the interview, you can argue that it's the evolution of a job. However, as you rightfully pointed out, there is another point of view which says that few to no people today have the job title "Elevator Operator". AI and technology most definitely can make, and have made, certain specific job titles redundant. But if we elaborate on that perspective, let's dive deeper into what it actually made redundant. It took over repetitive and potentially unfulfilling jobs, so that people who previously might have considered becoming an elevator operator now need to consider becoming an elevator engineer. If we take a look at the fluctuations of employment rates throughout the entire 1900s, when technology evolved and automated more things than ever before, we will notice that not only did unemployment not follow a growing trend, but the quality of life of everyone in the world steadily increased. The perspective I'm sharing is that automation has been only good for mankind, and there is no reason to believe that will change. After all, it is we who chose to create it; we are the creators of technology.
Unmanned Aircraft Topic: As many of you have pointed out, there have been unmanned aircraft (such as drones) that have successfully been deployed. These, however, are not flown by AI. Some of them might incorporate some elements of AI, such as face recognition and more. However, they are operated by, and dependent on, coded instructions which, as I pointed out in the video, are much more efficient and reliable for this purpose. Hence the answer to the question "Can AI fly a plane on its own?" remains no. Whether automation, especially if properly leveraging AI, will be able to do that is a different question - one that I would still answer no to today, but with less certainty than for AI alone.
Generative AI vs Other AI: One of my primary emphases for this interview was to address the fear-mongering misconceptions that have been irresponsibly spread by the media and that have, for the most part, been centered around Generative AI. Unlike previous breakthroughs in AI, ChatGPT became an overnight sensation, and a lot more people have heard about it than any other AI breakthrough. Now, I stated that AI, in an oversimplified way, could be described as "fake it till you make it", and I will stand my ground on this one by elaborating further. When I chose to use the phrase "fake it till you make it", I explicitly did so in an attempt to translate one of the core principles of all ML models: approximation. One of the incredible things about many ML models is how you can generate a desired output from an input without needing to know any of the logic inside the function being approximated. This is a foundation of AI/ML, and it is a principle used in just about all types of models, from classifiers, regression, and neural networks to transformers, entity extraction, and many more. I believe that so long as any AI we develop is driven by this approximation principle, we will simply not achieve anything other than Narrow AI. And most definitely we will not achieve consciousness.
@@yamminemarco Some AI models try to simulate various biological processes - like how the brain works, or how a hive of ants works. Approximation also happens in our own natural intelligence. But even a perfect emulation of the human brain/mind would have no advantage over the human brain/mind, and this is why I don't see general-purpose AI coming. Copying the weaknesses of the human mind and creating an "inferior human emulator" in the cockpit wouldn't be the best option either. Even if GPT is designed in a way that gives it the best chance to beat the Turing test, even with very large databases and lots of data, it is bad at many tasks - including procedural generation of game content, even for tabletop gaming. We can get back to this point and my experiences with using GPT for this purpose, but the issue is what you described: it tries to use the "best option" one step at a time; it doesn't even consider its own generated answer as a whole. And it often doesn't understand which criteria from the prompt are relevant and important. I think ChatGPT and Midjourney aren't suitable for a production environment yet, but trying them while gaming, learning prompt engineering, and evaluating these options is a much better use of them. Your claim is that Midjourney doesn't see airplanes without windows. But some large high-altitude, high-endurance fixed-wing drones are in essence airplanes without windows. Midjourney (and its language model) doesn't understand that fixed-wing UAVs are a variant of airplanes, or how it could use that information. Please check the following futuristic image: cdn.midjourney.com/618fa0bb-c4b4-453d-b2ac-d5dc9850e007/0_2.webp You tried to engineer a prompt to render the plane without front windows / windscreens; I tried a different approach, and my prompt engineering resulted in a picture of a plane with two jet engines but no windows and no windscreen. No new, better-trained model, and my approach still worked.
Creating variants, using the blend command, using the describe command to help write even better prompts... I am sure with enough prompt engineering we would get far better results. Approximation isn't an issue, because it is used by natural intelligence as well, and the ability to approximate the results of any "unlikely" option is important when we want to invent new solutions. Approximation is the shortcut that makes HUMAN intelligence possible. When I used Midjourney I saw how it takes some random noise and tries to turn it more and more into the image we want to see. We have multi-prompts and can prioritize the importance of prompts. If, in addition to "front windows", I also mentioned the word "windscreen" and gave them a priority of -2 or -500, or just excluded them... it was easy to use more and more prompt engineering for better results. Due to the economic crisis I don't have money to finance a lot of experiments with Midjourney, but I think discussing prompt engineering here would make sense. When I started to learn about AI, it began with an extremely simple algorithm: the minimax algorithm, which has plenty of derivatives. It uses plenty of resources and should be optimized by prioritizing which options are checked first; we need to make sure that once it finds a "good enough" solution it doesn't waste endless resources on other things, and if an option is a dead end, it moves on to the next one. So, if a machine learning algorithm can approximate the results of some potential actions, is well trained, and is reasonably accurate, it can quickly identify options that shouldn't be considered by the minimax search, or should be considered only as a last resort. Minimax and its derivatives can think "several steps ahead", and this is how they choose the best options. We would have different kinds of algorithms (some of them with machine learning, etc.) to evaluate various scenarios.
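For anyone unfamiliar with the minimax idea mentioned above, here is a minimal sketch over a hand-built game tree. The tree values are arbitrary placeholders; real engines add alpha-beta pruning and learned evaluation functions on top of this core.

```python
# Minimal minimax over a nested-list game tree, illustrating the
# "think several steps ahead" idea. Leaf numbers are arbitrary
# position scores invented for this example.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: a position's evaluation score
        return node
    # Recurse into children, alternating between maximizer and minimizer
    children = [minimax(child, not maximizing) for child in node]
    return max(children) if maximizing else min(children)

# A tiny two-ply tree: the maximizer picks a branch, the minimizer replies.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))
```

The maximizer assumes the opponent will answer each branch with its worst reply (3, 2, and 0 respectively), so it picks the branch guaranteeing 3 - exactly the "several steps ahead" reasoning the comment describes.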
As a seasoned software engineer, I can say that *some* aspects of flying will be supplemented by AI. The autopilot, autoland, technical malfunction checklist items, memory items - all make perfect sense. Beyond that, I do agree with your assessment of a fully automated cockpit being a generation away.
@@NicolasChaillan But the analogy applies to any AI. What you don't understand is that humans are exceptional in their "capabilities", whatever that means.
@@Oyzatt Clearly you don't understand what the technology is capable of and what we have already done at the U.S. Air Force and Space Force. I was the Chief Software Officer for 3 years.
@@NicolasChaillan Let's crystallize everything here for the sake of clarity. AI is not creative like humans; it is simply regenerating what has been fed into it, and that clearly shows its boundaries. In the military space it can be capable of many things, but not cognitive stuff, if you'll agree.
Getting cross-industry insights associated with aviation is fantastic! It would be valuable to watch and learn from more such discussions/exchanges. An amazing breakthrough, Petter! Congratulations. Warm greetings from Germany. Vielen Dank!
Artificial intelligence is not intelligent and never will be. This was so good I shared it in my LinkedIn stream, because every other post is some BS AI-related fever dream.
Computers running normal code (not AI) will be more useful because of the predictability of its output. AI can be useful as a user interface (voice recognition) and general awareness. One example would be an AI that listens to all the ground control transmissions so it is aware that you are supposed to turn on a certain taxiway and reminds you if you are starting to turn at the wrong one, or if you are about to cross an active runway but another plane has been cleared to take off or land on it by a different controller. Miracle on the Hudson is an excellent example why I want a human pilot flying any plane I am in.
Another aspect of programmed automation is that you can tell the pilots what the parameters are for "decisions" made by the automation so that they understand exactly when the automation is outside of its parameters and thus have more information about whether to trust it or not in a particular situation.
I work in tech with ML/AI at various points over the years. Agreed with the assessment here, it’s really nothing to freak out about. The reason people care is because there are a lot of extremely wealthy “entrepreneurs” who want to use it to make money even easier than they do now. LLMs like chatgpt will have their uses, but it is not the revolution everyone is afraid of.
I frankly disagree; it is an amazing enabler, capable of revolutionary acceleration in productivity and value generation, in both productive and recreational activities. It is the new steam engine of the 21st century. On the other hand, I cannot say anything about dangerous outcomes. There are potentially problematic scenarios indeed, as described in Bostrom's book Superintelligence.
10:00 the discussion about the feedback loop and ability to identify weak points and source of errors was one of the eye openers of the essay "they write the right stuff", on the US Shuttle Software Group (pretty much the only component of the Shuttle program which got praise after the Challenger Accident), the shuttle group's engineering was set up such that *the process* had the responsibility for errors, in order to best make use of human creativity and ingenuity without having to suffer from its foibles. Any bug which made it through the process was considered an issue with the process, and thus something to evaluate and fix in that context.
Love Mentour videos; they are always very well documented. The guest in this one seems to me not quite the expert I hoped for. He is just stating the oversimplified view of AI that seems to flood the internet these days. Here are a few comments, if I may:
* AI is not just ChatGPT. The GPT architecture is just one of many (BERT, GANs, etc.). Many of these are not as visible as ChatGPT, but we have already been affected by AI at large scale (Google Translate, YouTube suggestions, voice recognition, etc.).
* AI is not just a database system. In the process of deep learning, neural networks are able to isolate meaning (see sentence embeddings, convolutional neural networks, etc.). AI is able to cluster information and use it for reasoning, and I can give you many examples. GPT does not only generate the next word based on previously generated words; it is also driven by the meaning constructed in the learning process. Actually, it does not even generate words - it generates tokens ("sub-words"). It is not a simple probabilistic system.
* AI could land an airplane without any problems if trained to do so. Full self-driving AI for cars is a far more complex problem, and it is amazing what current AI systems can do (Tesla, Mercedes, etc.). But as somebody said, the first rule of AI is: if you can solve a problem without AI, then do it that way. AI is not very efficient to train. Currently we can fly airplanes without pilots without using AI (autopilot, auto-landing systems, etc.). On the other hand, replacing pilots completely will not happen any time soon, even for the simple reason that people will not board a plane without a pilot any time soon. But it is creeping in; as was mentioned in a previous video, the industry is moving from two pilots to one pilot.
* AI will replace jobs (and it will create new ones). One example is customer support, with all the robots that answer your questions. What do you think Med-PaLM 2 will do? ... ;-)
One thing I agree with the guest on: AI is an enabler for new opportunities. Also, good idea to bring aviation into the AI discussion.
Awesome vid! Airbus already uses machine learning (the basis of AI) to improve engineering and airline fleet operations. They work together with a company called Palantir, and together they created "Skywise".
This channel is on another level. This is kind of content we need (and want as well 😄) . Thank you so much Petter and Marco. This was truly informative.
Excellent video, thanks for bringing a real expert on the subject. As a mathematician working as a software engineer, I am so happy to see a voice of reason talking about what we call AI. Don't underestimate automation, though; I am mind-blown by Garmin Autoland. I think we might see similar automation systems in commercial aviation at some point, so I wouldn't rule out single-pilot operations at some point in the future.
Even if it flew the plane perfectly 99.99% of the time, it would be devastating the 0.01% of the time it fails, due to the weird hallucinations AI sometimes has.
Airplanes spend most of the time flying at high altitudes. They often have minutes to correct the problem. The 0.01% is not necessarily or likely to be devastating. Closer to the ground of course you are correct.
Well, that's the tension here. If it fails to save United 232 and Sully's flight, but it doesn't crash in AF447, the Lion Air and Ethiopian MCAS accidents, Colgan, AA587 at New York, the Helios hypoxia flight, Tenerife, the PIA gear-up go-around, the AA965 CFIT at Cali, TAM 3054 at Congonhas, and a long list of accidents that were either caused by the pilot (due to distraction, confusion, spatial disorientation, disregard of procedures, fatigue, etc.) or that were not avoided by the pilot when the pilot could have avoided them just by following procedures (the MCAS accidents, AF447...), is it worth the price?
@@playmaka2007 Right? This channel is full of examples of pilots hallucinating, especially in stressful (high-workload) situations, something AI never has to deal with, since it can't stress.
GPT models contain a world model: they are capable of performing calculations and keeping track of state. They can play games they've never seen before. It's not as simple as just accessing memories. It's accessing abstract concepts and using a predictive model it has developed that can reliably predict information, and the only way to do that is to actually process the information. I.e., it's not overfit. It can actually perform reasoning. It does have abstract ideas about what things mean. However, its entire world is text, as seen through the lens of its training data, so of course it currently has limitations.
@Windows XP The news media and others exploiting AI tech by saying "AI is taking your job" or "AI will control everything" when in reality, as explained in this video, AI really can't take control of anything...just yet.
Thank you very much for this really informative interview which clarifies what AI is and can do - and what AI is not and cannot do!👍 That's a core point of knowledge, not only in the aviation business.
AI can help pilots by automating routine operations: routine cabin announcements, early warning of turbulence, verifying and implementing various checklists, sending automated communications to the control tower and vice versa, etc. There is no way AI will not be integrated into the cockpit in the near future.
There was a good point made: an AI pilot assistant must have a way to strongly signal to the pilots whether it makes a helpful suggestion that the pilot may overrule or whether it does an emergency interjection that the pilot must simply have faith in. Like when the TCAS wants to step in to avoid an imminent collision.
In light of one of your recent videos, I just realized AI might be very useful to parse the relevant parts of NOTAMs, and maybe even remind the pilots as the flight progresses.
I'm an AI researcher, and I always struggle to explain why we are not talking about sentience, but basically big prediction machines. Marco did a great job there! Thanks for bringing an actual expert :)
As a data scientist and aviation enthusiast: a situation such as Sully's could definitely be addressed by AI in the form of a recommendation system, or a disaster-managing co-pilot system, where the system quickly identifies a dual engine failure and determines the shortest route to the nearest available airstrip. This, however, requires intensive training on large simulation datasets and would involve multiple countries across the world. Model inference would also require extremely powerful computers on board to process such large streams of data quickly, which might drive up the cost of the airplane. So, theoretically it is possible, but practical implementation likely won't happen anytime soon.
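As a rough illustration of the "shortest route to the nearest airstrip" idea, here is a toy Python sketch of a glide-range check. The glide ratio, airstrip names, and distances are all invented for illustration; a real system would use certified aircraft performance data and account for wind, configuration, and terrain.

```python
# Hypothetical glide-range check after a dual engine failure.
# All figures below are illustrative, not real performance data.
GLIDE_RATIO = 17.0  # horizontal distance per unit of altitude lost (rough guess)

def glide_range_nm(altitude_ft: float) -> float:
    """Still-air glide distance in nautical miles from a given altitude."""
    return (altitude_ft * GLIDE_RATIO) / 6076.0  # ~6076 ft per nautical mile

def reachable_airstrips(altitude_ft, airstrips):
    """Return the airstrips within glide range, nearest first.

    `airstrips` is a list of (name, distance_nm) tuples.
    """
    max_range = glide_range_nm(altitude_ft)
    return sorted(((n, d) for n, d in airstrips if d <= max_range),
                  key=lambda nd: nd[1])

candidates = [("LGA", 9.0), ("TEB", 7.5), ("EWR", 14.0)]
print(reachable_airstrips(3000, candidates))  # only TEB is within range here
```

The hard part, as the comment notes, is not this arithmetic but validating the decision logic across enormous simulated scenario sets.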
8:42 I don't agree that it doesn't have accountability. You can tell it that this area of the ground is "people" and this airplane is "people"; if "people" suffer, then you "lost the game". It will then analyze all the available variations to save people. It has "accountability" in real time; you program it. Also, I don't agree that AI can't stop. Giving ChatGPT as an example is just wrong/uninformed. You get the probabilities and you can set a threshold: if it is uncertain about something, it can pass control back to the pilot, just like a regular autopilot, using probabilities and gathered knowledge.
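The threshold idea described here can be sketched in a few lines of Python. The threshold value and the action names are hypothetical, purely to show the shape of the logic, not how any real flight system is built.

```python
# Sketch of the threshold idea: act only when the model's own confidence
# clears a bar, otherwise hand control back to the pilot.
# The threshold and action names are invented for illustration.
CONFIDENCE_THRESHOLD = 0.95

def decide(predictions):
    """`predictions` maps candidate actions to model confidence (0..1)."""
    action, confidence = max(predictions.items(), key=lambda kv: kv[1])
    if confidence >= CONFIDENCE_THRESHOLD:
        return action
    return "HAND_BACK_TO_PILOT"

print(decide({"maintain_heading": 0.99, "turn_left": 0.01}))  # confident enough
print(decide({"maintain_heading": 0.55, "turn_left": 0.45}))  # uncertain: defer
```

The open problem the video raises still applies: a model's self-reported confidence is not the same thing as being right.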
I think you’re confusing conscience with accountability. Think of it more as in legal liability, and being able to explain why it took certain decisions based on actual understanding of inputs and systems at play. Until the developers of “AI” models can consistently and repeatably fulfill that condition, you will be hard pressed to see companies, manufacturers and any other entity who may be under legal liability to allow implementation of “ai” as it is today in direct flight operations.
This was a great discussion of the whole "AI" marketing going on right now for something that isn't even really AI. Speaking as someone in the tech field, current AI is nothing more than a messy, giant computer program that must always answer your question, and the answer doesn't even have to be truthful.
Thank you for the post, my teenager is interested in becoming a pilot so I’m grateful for your opinion. It would be boring for a pilot if the co-pilot is eliminated, humans need people to talk to at work, especially now that the cockpits are sealed.
This video is spot on and agrees with what most other experts in the field are saying. One thing that wasn't really touched on in this video is the difference between an autopilot and an AI controlling the airplane. When autopilots were created, there was similar concern about pilots being replaced because, as the name suggests, the point of an autopilot is to control the airplane automatically. Autopilots are advanced enough that taxiing around the airport is about the only thing that isn't automated. So, one may be tempted to ask: why doesn't the autopilot steal pilots' jobs, and what does AI bring to the table that could threaten their jobs? The answer is that the autopilot doesn't have the decision-making capabilities necessary to safely fly an aircraft, and the AI of today isn't advanced enough to have that, either, as this excellent video explains. Just because we can automate the mechanics of flying an airplane doesn't mean we can automate the decision making behind why we fly a particular way or follow a certain set of procedures. An autopilot might very well be able to fly an ILS approach more accurately than any human could, but that doesn't mean it understands when it needs to fly an ILS approach or how to set up for one. As the video explains, AI is also incapable of creative thinking; it can only take what it has seen before and apply that to the situation. This understanding of why we do things and what makes an action appropriate or not is the crucial element missing from these automatic systems, whether they are rule-based (autopilots) or machine-learning based (AI). That said, some use cases for AI, in addition to what the video explains, include improving flight planning and communication with ATC and other pilots. For flight planning, consider all the NOTAMs, METARs, and so on that pilots have to sift through.
It is a lot of information that is usually not formatted in a way that is very human readable, and even when it is, pilots still have to pick out what is important and relevant to their flight. AI could parse through all that information, give pilots a summary of the important information out of it, and even suggest alternate routes if it were paired with weather and traffic prediction models. That could be a way in which ATC is helped out, also: helping choose the routing for flights to maintain traffic separation and expediency. Of course, any such tool would have to be thought through carefully. Pilots would still need to go through the materials to check that what the AI said is correct, but at least they would have an idea of what to expect which might speed up the process. Still, Mentour has done videos on confirmation bias contributing to accidents, so pilots would need to be trained to use these tools effectively. Another use of the tool could be in communication. Paired with radio or text based interfaces, these models could assist in translation when non-native English speakers or other languages are being used with ATC, which could improve situational awareness and even clear up miscommunications. Again, care must be taken, since these models could also translate incorrectly, but there are other translation/speech recognition/text to speech tools that could be paired with AI to reduce that risk.
This was a fascinating discussion. As a pilot for a major airline I spent many hours in a simulator preparing to employ procedures learned from generations of pilots. As a technical rep for pilot associations, and for my own interest, I spent many more hours studying accident and incident reports and hopefully learning from them. I spent many hours in the air seeing how other pilots managed events. Like just about every pilot, I spent even more hours, often over a beer, talking about aviation events I had experienced. In those ways I built a base of knowledge that stood me in good stead when I had to deal with events myself. This process, although less structured, resembles the building of the knowledge base on which an AI depends. Certainly one can point to incidents that an AI would find difficult to handle, although I'm not sure the 'miracle on the Hudson' is one. I can imagine an AI having the knowledge that, when no other option is available, you put the aircraft down in any space clear of obstacles within range. The QF32 uncontained engine failure might be more difficult, since the crew there had to ignore some of the computer-generated prompts. QF72 would also be unlikely to be fixed by an AI, since it involved a system fault combined with an unanticipated software failing. So I agree that there would be situations an AI could not resolve. But would they be more numerous than those that pilots don't satisfactorily resolve? Possibly not. It may be that, even with current technology, the overall safety, measured as number of accidents, would improve. However, there is another issue. Would passengers fly in an aircraft with no one up front? I know many people who would not. But I also know people who would choose the flight that was 10% or 20% cheaper. And of course there are the non-flying events that a pilot is called upon to resolve. I can't see any current AI finding an ingenious way to placate a troublesome passenger.
I found that walking through the aircraft after making a PA announcing a long delay was far more effective than the PA alone. Just seeing that the pilot had the confidence to show their face made pax believe what they were told. I regret that some of my ex-colleagues didn't believe this. Something that does worry me and which is not yet down to AI but is already a problem and would be made worse by AI is skill fade. The less a pilot is called upon to exercise his skills the more likely it is that they will not be good enough when called upon.
They could make the flight 50 or even 90% cheaper; I wouldn't buy a ticket for a flight with no pilot and first officer flying it. There is something about this psychological factor of fear. Although we know far more people are involved in car accidents than plane accidents, the fear of flying remains, because it is mostly about knowing you are powerless to do anything whatsoever in a plane when something goes wrong. This is why trusting artificial intelligence to do the job of a human mind would be a step too far for me. I guess it is not about how accurate artificial intelligence may become at recognizing a situation and finding viable solutions to problems, but rather my own sense of trusting a human mind much more, since it works similarly to how my mind works. Currently AI does not work the way the human mind works; it only imitates some aspects of the human thinking process.
@@evinnra2779 As I said, I know people who think the way you do, but I also know people for whom the price matters more. If the operators see more profit then they will pitch the price to maximise that and employ whatever marketing they need to.
Go fly in Asia or Africa... it's an unpredictable thing, weather-wise and ATC-wise, and sometimes sub-par local pilots / maintenance standards. Been there, done that, many times.
@@giancarlogarlaschi4388 I've flown in Africa and Asia. The unpredictability is no worse than other places. I used to tell my copilots that the one thing you know for sure about a weather forecast is that it's wrong. It may be a little bit wrong or it may be completely wrong. The lowest fuel I ever landed with was at Heathrow, thanks to a surprise. In the USA the weather can be unpredictable. I landed from a nighttime Canarsie approach in a snowstorm. The next day we strolled down to the Intrepid in our shirt sleeves.
Thanks for a video without the hype. Back in the 1980s, I was thinking of getting a PhD in AI, so I have continued to follow the development of the field. Kudos for finding an expert who gives it to us straight about what "AI" is and is not. It's hard to predict whether true AI will ever exist. Nearly 40 years after I left school, we don't seem to be that much closer to building an actual intelligence. There could be a breakthrough today, or 100 years from now our descendants may be trying to figure out why we wasted so much time on this dead end. As it usually is in life, what actually happens will be somewhere between those extremes.
It seems like every crash video boils down to, "but the first officer got flustered and didn't realize that the down stick was in doggy mode and the autopilot had defaulted to protect-the-kitty mode, so it did X". A computer can be programmed to KNOW all of that and NOT miss the checklist item that needed to be reset to "A" mode, or whatever. For instance, in the "Miracle on the Hudson", a camera (or some other form of monitoring) watching the engine inlets would "see" a frickin' goose get sucked into the engine. It would then, in seconds, automatically determine whether the engine was too damaged to be restarted (and, if not, run the restart checklist), know exactly what the flight characteristics of the plane were going to be from then on, realize that returning to the airport was a no-go, evaluate emergency landing options, decide on the Hudson, and contact the field, air traffic control, the police and the coastguard simultaneously and instantly, all while informing the passengers and crew of what was about to happen and maneuvering the plane around for a water landing, making sure to land at the ideal angle to avoid having the plane break up, if, indeed, that was the only option. It is entirely possible that the plane, knowing the starboard engine was completely out, and knowing EXACTLY where the other air and ground traffic was (and was going to be), would INSTANTLY have thrown itself into a maximum bank, deployed flaps, re-lowered the landing gear, broadcast appropriate messages and crew and passenger instructions, pulled out of the bank, and landed back on the field; a feat that no human flight crew could hope to achieve in that amount of time.
Or, who knows, since the vision subsystem of the expert system wouldn't have been occupied with everything else involved in getting an airplane into the sky, and could also have a much greater field of view than the pilot, it may well have noticed the flock of geese and temporarily modified the takeoff profile to avoid them. I'm a huge fan of pilots, but I will say it again: modern aircraft have too many disparate systems, each with a billion different (but eminently knowable) states and settings and errors and things that can go wrong with them. It is too complicated for a human pilot and actually needs to be either GREATLY simplified, or completely under the control of a triple-redundant computer system, IMHO.
This was sooo informative. I am retired so I haven’t worried about AI taking my job (what, it’s going to sit in my hammock?) and therefore have not paid a lot of attention to it, but now I feel I have a pretty good understanding of what it is and what it isn’t, what it can and cannot do. Thanks, Mentour!
I really disagree with many of your expert's claims, but I will just emphasise one. He said that AI doesn't understand what it is talking about and simply uses things it was trained on to predict the result. Of course, he didn't give any arguments to back this up; he just stated it as a given. I've got two examples to show that AI can think and understand. The first is the famous move 37 played by AlphaGo. That model was trained on millions of games but had never seen such a move, because no human had ever played it. So in this case you can't say that it just combines things it saw. It understands how the game works on a deeper level than the simple rules of the game. The second is an example from ChatGPT-4: "Imagine that I put a stone in a cup. I covered the cup with a cutting board, then turned the cup upside down with the cutting board and placed it on the table. Then I pulled the board out from under the cup, and then picked up the cup and dropped it on the floor. Where is the stone?" It answered that when you pulled the board out from under the cup, the stone probably fell onto the table, and when you picked up the cup, the stone was left on the table. To answer that, you have to really understand the relations between the objects mentioned in the question and, to some degree, understand physics. I have no idea how one can claim that in this example the AI just combined sentences it saw during training and predicted the next words without understanding anything. I really like your videos about aviation and have learned a lot from them as a hobbyist, but I hope you will also invite experts who disagree with your opinion (and there are a lot of them in the field of AI).
As we intended to make the video relevant to a broader audience we intentionally oversimplified it. I've created a main comment and thread where I've elaborated a little more and you are welcome to join in.
In order to implement AI into flying aircraft I think there has to be a major overhaul and rethink into the entire way of flying and operating the flight deck. Airmanship is almost entirely based on previous experiences. Also, there already exist systems for detecting the wrong runway, wind shear, terrain ahead, etc, so these don’t need to be redeveloped and replaced by AI. As for AI helping in emergencies, a lot of emergencies require quick action and muscle memory from the pilots. To introduce a third “person” in the flight deck might be seen as too much interference. It also risks clouding the judgement of the pilots as they themselves will have reduced situational awareness.
As a programmer myself, I understand where the guest is coming from and I agree with a lot of what is being said. However, could an AI model, trained on the entirety of human knowledge in aviation to date, all scenarios and outcomes, become an indispensable crewmember who notices when CRM has broken down and acts as a voice of reason when the humans are in an upset? Could it notice from the instrumentation and the inputs that whatever the crew is doing is making the situation worse, and start shouting at the captain to reset their head and check the flight directors or something constructive? In the case of rapid decompression and both pilots incapacitated, could it achieve level flight at a low altitude? Could it take over the boring tasks of making sure the correct radio frequencies are being used, communicating with the tower, and taking on some of the predictable workload in the cockpit? I think it could! I genuinely think AI could make aviation safer, in its current state, if used to its greatest advantage: having access to a lot of things that have happened and how they were solved.
Time-critical weather prediction/modelling is mathematically one of the hardest things imaginable. So much data, and it's like a 4D fluid dynamics show on steroids, with positive and negative feedbacks all going on at once. AI will be a significant benefit here.
For those going "oh no, improvements in AI will steal our jobs": unless your job was obsolete to begin with, or requires no technical expertise or improvisation, you have nothing to fear. Don't believe me? Look at the advent of the calculator. People actually thought it would make mathematicians obsolete, but a few decades on, that hasn't happened.
My take is that, at least in the first generation, it wouldn't make any inputs un-commanded by the pilots. Instead it could be integrated with the sensors and make quick suggestions in emergency situations, and explain right away why it thinks that is the correct solution. The pilot then has time to analyze the suggestion and decide whether it is valid and/or applicable. So no worries about AI "taking over airplanes"; instead, I think pilots should welcome the idea, so long as it acts as a guardian angel that makes suggestions as needed but never actually takes control. Cheers
Oh, I like how Marco defined AI. This truth is never revealed because they know we will start distrusting and knowing that AI is not actually intelligent (at least not yet). As he said, now I understand that what the AI companies are trying to do is to "FAKE it until [they] make it!"
I am in my mid-fifties, and when I was a kid you had a telephone hooked to the wall with a rotary dial, meaning you had to wait for the dial to come around to the number and then return. You had to remember people's phone numbers by heart or have them written down, which kept you engaged in thinking and remembering things. I realized when I got older that I didn't remember as much, because it wasn't a necessity. We became dependent on computers to tell us, instead of having to remember, look it up in a book, or think about it. I think AI is OK for some things, but there are just some jobs that are always going to need human interaction, imo. Great video as always.
I was a controls expert. I was able to automate multiple processes and reduce the humans on a shift from 6 or 7 to 2 or 3. We still needed humans on watch to protect the city when a sensor went down. I could provide alarms for damaged sensors, but I couldn't provide controls for a damaged or misreading sensor, because it may fail in thousands of ways.
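The "alarm, but don't control" distinction here can be sketched in Python: comparing redundant sensors against their median flags a suspect channel, while the decision about what to do with that information stays with the human on watch. The readings and tolerance below are made-up numbers, not from any real plant.

```python
# Sketch: flag a sensor that disagrees with its redundant peers.
# Raising the alarm is automatable; responding to it is not, because
# a sensor "may fail in thousands of ways".
def flag_suspect_sensors(readings, tolerance):
    """Return indices of sensors deviating from the median beyond `tolerance`."""
    median = sorted(readings)[len(readings) // 2]
    return [i for i, r in enumerate(readings) if abs(r - median) > tolerance]

# Three redundant pressure sensors; the third one is drifting.
print(flag_suspect_sensors([101.2, 101.4, 87.0], tolerance=5.0))  # [2]
```

Note the limitation the commenter describes: this catches a drifting outlier, but says nothing about what the correct control response should be once a channel is untrusted.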
There's also an element of trust: the majority of people wouldn't trust an AI flying a plane they're on nearly as much as a human pilot, no matter the depth of the autopilot assistance.
That'll be true until AI earns people's trust. But people will also need to learn that each individual AI, even one arising from the same source code, will be as individual as humans are. The only time one AI would be the same as another is when the algorithm is copied outright; otherwise there is a likelihood that every trained AI will be completely unique, and can no more be counted on to behave like any other than you, as an individual, can be counted on to behave like anyone else who has ever existed.
@@tonysu8860 I don't see how that's a good thing. If we're going to entrust hundreds or thousands of lives to an AI, I'd kinda hope for it to be well documented and reliable. I for one will never be happy in an AI-piloted plane. My main argument is that there is a program size beyond which literally nobody knows everything it does and how, with bugs, exploits, and issues all over the program. And that's not even counting self-learning AIs.
One thing that people need to understand about Sully is that he was not only a very capable pilot with a very calm and grounded personality, he was also good at playing "what if" games in his head. So, I suspect that he had already contemplated this scenario and came up with some alternatives.
My wife is somewhat slowly learning how to drive; she has problems perceiving distance, speed, and direction (if she's supposed to turn left on a new trip, she may well turn right), as well as with mental mapping. She can drive the car fine so long as I'm co-piloting. Anyway, at stop signs and unprotected left turns she often asks me, "Can I go?", meaning: is there enough space from the oncoming car. Usually when she asks, if I were driving, the answer would be yes. But what I always tell her is: if you aren't sure, the answer is no. Once she has gone, I will tell her that I would have gone back when she asked, but she has to judge for herself. I also firmly believe I have the ability to tell her not to go, but that it is dangerous, sitting on the opposite side of the car from usual, to make the positive decisions for her. These are also dangers of AI.
You've neglected two entire new areas: 1. Military unmanned companion fighter aircraft, assisting (currently) piloted fighters. 2. Totally automatic (but supervised) eVTOL air taxis, along with auto-ATC.
This conversation was amazing! It's the clearest explanation of what AI really is that I've heard up to now. I'm going to be sharing this video with a lot of people! Petter, I'm a huge fan of your YouTube channels! You too have a gift for laying complicated topics out in clear, digestible terms! Thank you!!👏🏼👏🏼🙏🏼
I think AI could have a role in training situations. Imagine sitting down at your computer and having a discussion with Brilliant AI on any topic. Airlines could have AI instructors that help staff learn different things. I know I learn better by hearing things explained. But it's not always convenient to go to a scheduled class. If a class has an AI instructor then I could take that course at any time anywhere.
I have seen simulations showing that Sully could have made it back if he had turned back right away. This isn't a knock on him; he was following the procedures laid out and did a fantastic job. The goal for AI should be the ability to diagnose the problem right away and give guidance to the pilots to help them out.
Petter, this was one of the most interesting vids you've done! You're one of the good folks. Your intellectual curiosity, along with your commitment to truth, is what this world needs. It appears Marco is another such fellow. AI is a potentially alarming technology, but it's reassuring to understand its limitations. 👍
I tried Marco's little experiment, but instead of asking for an airplane without front windows, I asked for an airplane without a windshield and got 3 out of 7 pictures that fit that criterion.
@CaptHollister Could you share with me the link to the image generated? I tried 10 pictures with your prompt and still was unable to get the result intended. Having said so, and as stated in the video, I fully expect it to soon be able to generate the expected result as more data is fed to it.
In the 1960s my mother was a ward clerk in a hospital. It was her job to enter notes into patient records and to take those records to where they were stored. Hospitals no longer employ such people; it's all done on computers now. In fact, before the 1960s, "computer" was a job, not a machine.
I work in AI and can confirm that Marco really knows what he's talking about. Still, I feel a few possibilities were missed. One of the good qualities of computers and AI is that they don't get bored, or nervous. What about getting AI to perform the "pilot monitoring" function? We wouldn't want it to make decisions but it certainly could watch everything the pilot and plane are doing and make comments. What about using it in a pilot's walkaround? Unlike a pilot who is perhaps inattentive after making hundreds of uneventful walkarounds, the AI would catch anything out of place. Finally, AI won't take many jobs any time soon but elevator operator is a really bad example. It is highly unlikely that many operators were retrained as elevator maintenance people. Even if they were, a couple of elevator maintenance people can service hundreds of elevators. Any time technology makes a lot of changes to the world, some people will lose their jobs but new jobs will be created. The emphasis should be on learning new things and not counting on keeping the same job for life.
Yes, that's true, they don't get bored or nervous; they also don't care whether they or the passengers survive an emergency. I think AI would be a useful helping tool for complicated stuff, including aviation. It has, in a way, been used in many fields in the past with good results, including aviation, so I'd guess it will continue to be used in the same manner as the tech improves. But no, it won't ever fly planes on its own. It might take over simplistic or procedural jobs; piloting a plane is neither of those. Procedures were put in place to make complicated stuff easier and safer to manage, until you find yourself outside those procedures or predefined parameters. But yeah, a clerk's job? Taken. Police? Taken. Military? Taken. Any job that requires no thinking, just simplistically following procedures, is going to be taken by AI, just like those brainless factory jobs were taken by industrial automation. Scientists are safe, pilots are safe, teachers are safe, and despite the advances in AI art generation, artists are safe as well. AI could help as a tool in all those fields, but it cannot take them over because it's garbage at all of them. And yes, I agree with you that it will help people learn new things, have more time to spare, etc. Although the power structures of the world will resist, since their power depends on others blindly obeying. So whether we'll get AI to help us improve our lives, or whether they first use it against us, remains to be seen.
Perhaps AI could be a check pilot. How many investigations have you shown where the crew was disoriented, or things changed abruptly and the crew was making conflicting control inputs? Then someone would say a cryptic word instead of "hey, you guys need to put the nose down or the plane is going to stall". AI can bypass emotions: the fear of being wrong, or of being embarrassed to say what you mean because the captain is so experienced. Perhaps an AI that has been watching the flight could verbally suggest solutions. Today you rely on buzzers or a light to indicate that one pilot is pulling up while the other is pushing down. The check-pilot AI could step in when the human is frozen due to information bias or tunnel vision. We all understand it happens, but hindsight is 20/20, so let's get a second set of eyes to challenge a potential situation and speak up. This is one application that could help, in my opinion. Perhaps it could even help with traffic control.
I think this is finally a realistic view of AI. Why do all the other 'AI experts' and their millionaire/billionaire investors simply go mad and illogical when discussing AI... all hand movements and all..
Great video. Whilst, it is suggested, AI cannot fly the aircraft, there seems to be a role for an overarching AI to monitor the flight and flag to the pilots (and to the ground) that there is something "weird" about the flight if it is operating outside the expected parameters. There have been a number of accidents where pilots did not pick up that they/their aircraft were not doing what would be normal.
To try out my new AI app here!👉🏻 app.mentourpilot.com
You can also contact Marco directly if you have a serious idea you would like his help developing cvt.ai/mentour
😊😊😊
About AI, this is not an "if" question, but just "when", and that day is coming fast. I would say no more than twenty years for commercial aviation, because autonomous UAMs will cover the sky well before that. Those protocols have already been in development for years.
@@forgotten_world I think there is still an "If" element, which centres around whether AI is the best solution towards automated piloting.
Of course we are headed towards more automation in commercial aviation, but AI isn't necessarily the best solution for pilot replacement, in an industry with the framework that aviation has. We may be better off expanding automation technologies that work on defined processes, in the same way autopilot and autoland do.
There is no reason why we cannot have an aircraft automated from the ramp of its departure airport, to the ramp of the destination airport, and never use an AI system.
There is a lot of focus on AI at the moment, and it is a fantastic field that will doubtlessly become more prominent in society. But we need to use the right tools for each job. And automation without AI is probably far more suited to flying aircraft, at the very least until we can get AI to spontaneously consider scenarios, but even then AI probably isn't going to be the best solution.
Where AI is perhaps going to be better suited is in the role of traffic control. I'd wager you'll see AI-driven ATC long before you see an AI pilot (in commercial use, rather than in research, development, and testing programmes), if we ever see AI used for commercial piloting at all.
The idea you suggested, for the plane to offer prompts when the pilot is doing something strange and pushing the plane towards an unsafe envelope, doesn't require AI. As a computer programmer, I'd say this would be an expansion of the scope of the existing hard-coded envelope protection system with a second layer of soft mitigations (soft in the sense that they are recommendations, as opposed to the hard mitigations where the aircraft physically intervenes). For example, say the pilot is experiencing a somatogravic illusion and forcing the nose up, failing to notice that the plane is actually slowing quickly. Instead of waiting until the hard envelope protection kicks in, a red "REDUCE CLIMB" message could show up on the ECAM, perhaps accompanied by an aural "REDUCE CLIMB" alert. That wording has the added bonus of calling out the immediate problem the pilots are likely missing. As a pilot, even if I doubted it, it would draw my attention to the ADI, altimeter, VSI, and ASI, all of which would confirm that yes, you really are dangerously nose high, climbing, and getting dangerously slow. After all, if I didn't believe I was climbing, the first three would disabuse me of that notion, and the last would show this is becoming a serious issue. So that little prompt could help break the pilot out of the confusion by giving an instruction that also directs attention to the right instruments — the ones they are somehow just not seeing at this critical moment.
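The soft-mitigation layer described above can be sketched as plain deterministic code — no AI involved. This is purely illustrative: the function name, thresholds, and alert wording are all invented for the example, not taken from any real envelope protection system.

```python
# Hypothetical sketch of a "soft mitigation" layer: advisory thresholds sit
# inside the hard-protection envelope, so the crew gets a prompt before the
# aircraft physically intervenes. All names and limits are invented.
from typing import Optional


def soft_envelope_advisory(pitch_deg: float, airspeed_kts: float,
                           airspeed_trend_kts_s: float) -> Optional[str]:
    """Return an advisory message, or None if parameters look normal."""
    # Advisory limits deliberately tighter than any hard protection:
    NOSE_HIGH_DEG = 15.0        # advisory pitch limit
    MIN_SAFE_SPEED_KTS = 180.0  # advisory low-speed limit
    RAPID_DECEL_KTS_S = -3.0    # speed decaying quickly

    nose_high = pitch_deg > NOSE_HIGH_DEG
    slowing_fast = airspeed_trend_kts_s < RAPID_DECEL_KTS_S
    getting_slow = airspeed_kts < MIN_SAFE_SPEED_KTS

    if nose_high and (slowing_fast or getting_slow):
        # The wording calls out the immediate problem, directing attention
        # to the pitch and speed instruments, as the comment suggests.
        return "REDUCE CLIMB"
    return None


# A somatogravic-illusion scenario: nose high, speed bleeding off fast.
print(soft_envelope_advisory(pitch_deg=22.0, airspeed_kts=210.0,
                             airspeed_trend_kts_s=-5.0))  # REDUCE CLIMB
```

The point of the sketch is that the advisory fires on a simple, explainable rule the crew can be trained on, well before any hard protection takes over.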
Link flightradar24 in the app 👍
I did not expect the most level headed take on ai coming from an aviation channel. Truly well done!
IKR!!!! Most posts on LinkedIn have sounded far more doom-laden, as if AI were already good enough to take over everything, or as if AI would revolutionize work the way MS Office did. But I think there might be a difference in which sector they are discussing, as the managerial people on LinkedIn tend to be more vocal. Aiden here is presenting AI for engineering/pilots, which is a different use case than generating reports or making presentations.
Aviation has been dealing with automation since long before mainstream tech. Heck, neural networks in aviation-controls research were a thing by the '90s.
Well, you have to be level headed to level a plane. Sorry, didn't manage to slip a dad joke in the interview so I figured I'd do it in the comments. 😅
Dad jokes are always appreciated!
I did 😊
I like this guy. Saying "this could be done in AI, but there may be a better conventional way" is actual engineering instead of cultism.
As far as AI goes for informing a nervous flyer about turbulence in advance, all a nervous flyer really needs is false reassurance from someone they trust. For my first flight on a commercial airline as a young teenager, my mother told me that it would be lots of fun - sort of like riding a roller coaster at times, with ups and downs. The flight turned out to be what the other, more experienced passengers later told me was a nightmare with lots and lots of turbulence. There I was, smiling and enjoying myself the entire time, because it met my expectations.
That’s a very conscious mum! Excellent example
One of the reasons I always flew United was to listen to channel 9. Ride reports were extremely valuable insight on what to expect and the professionalism of the crews and ATC were incredibly reassuring.
Through the 90's I did quite a bit of flying on commercial airlines... AND from time to time we'd hit "pretty good" turbulence... I knew there were likely timid, nervous, or outright scared flyers in the cabin, so every time there was a "proper" bit of roller-coaster or similar effect, I always made a point to give a good "WHEEE!" and laugh... in two or three of them, there'd usually be a few others to join me, and more often than not, even some of the crew...
I doubt anyone knew I was even looking around, but I saw more than a few shoulders start to slouch, faces slack from distentions and clenched jaws, and even a couple parents pointing at us "lunatics" and speaking to youngsters... I like to hope it was a boost in morale to a few of the more anxious among us...
Obviously, I'd temper that with the sensible judgment not to be shouting when the lights are out and everyone's even trying to get some shut-eye or rest... Even (especially?) the anxiety-prone don't need a maniac (or several) shouting anything when the plane starts buffeting and bouncing... haha... ;o)
As a child, I absolutely loved turbulence. Not so much now despite being able to pilot fixed wing aircraft. Go figure.
I find it eerie when the flight is very smooth. I actually enjoy the subtle turbulence along with some more bumpy turbulence at times.
I don’t know if the false reassurance is a good idea though. If the person ends up panicking, it could create trust issues.
I think having AI as a second set of eyes for stuff like checking if pitot tubes are covered would be useful for pilots, or alerting pilots if the readings of sensors don't match.
Yes! Exactly. It will be useful as a tool
i think for the checklists they could use a non-AI search engine that can search through them, or even a voice-assistant-type search like Alexa/Siri
...but you don't need AI for that, you can do that with deterministic procedural algorithms. the hype for AI is far too high, imo
There is really no AI necessary to find out whether values delivered by redundant sensors are inconsistent.
But informing the pilots that information is inconsistent leaves them with the problem of which one they should trust and which not - which causes a lot of stress and confusion.
I think something I'd call "sensor fusion" would help a lot in such situations.
For example, one altimeter saying you are flying at 20,000 feet while the other says you are only at 1,500 feet is a bad thing when flying over the Pacific at night without any visual references.
But including data from other sensors (like outside temperature or GPS) would provide a lot of hints about which of the two altimeters is the one delivering wrong values.
If the outside temperature is -10°C and GPS tells you that you are flying at 2,300 feet, then it is nearly 100% certain that the altimeter showing 20,000 feet is wrong and you should trust the other one.
Or if one pitot sensor measures a dramatic change in relative airspeed (for example, a drop from 800 km/h to less than 400 km/h in under a second) while the other delivers "smoother" values, then the probability is high that the first pitot was blocked by ice or something else, and in doubt it is better to trust the other pitot sensor.
Clever guys could take all the time they need to develop algorithms that use data from the different sensors to calculate how trustworthy each individual sensor is, and inform the pilots which sensors they should trust in case they have no better idea.
For humans it is very difficult to perform such reasoning in a stressful situation with limited time to make a decision, but software could do such (and even much more complex) analysis in a fraction of a second - without any AI needed...
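The altimeter cross-check described in this comment can be sketched as a deterministic plausibility vote — again, no AI involved. This is a toy illustration; the function, threshold, and use of GPS altitude as a tiebreaker are all invented for the example, and real air-data voting logic is far more involved.

```python
# Toy sketch of the cross-checking idea: when two redundant altimeters
# disagree badly, use an independent source (here, GPS altitude) to score
# which one is more trustworthy. Purely illustrative.

def pick_trusted_altimeter(alt1_ft, alt2_ft, gps_alt_ft, max_split_ft=200.0):
    """Return (trusted_value, label). If the altimeters agree within
    max_split_ft, average them; otherwise trust the one closer to GPS."""
    if abs(alt1_ft - alt2_ft) <= max_split_ft:
        return (alt1_ft + alt2_ft) / 2.0, "both"
    # They disagree badly: trust whichever is closer to the GPS altitude.
    if abs(alt1_ft - gps_alt_ft) <= abs(alt2_ft - gps_alt_ft):
        return alt1_ft, "altimeter 1"
    return alt2_ft, "altimeter 2"


# The example from the comment: 20,000 ft vs 1,500 ft, GPS says 2,300 ft.
value, label = pick_trusted_altimeter(20000.0, 1500.0, 2300.0)
print(label)  # altimeter 2
```

The same pattern extends to more sensors: each independent source adjusts a trust score per instrument, and the crew is told which reading the system considers most plausible.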
@@muenstercheese Most of the things AI does are possible via other means. The whole point is that Generalized AI (or something close to it, like ChatGPT) makes those things orders of magnitude easier.
Even if accidents were “drastically” reduced (which would be incredibly hard to do, given that aviation is extremely safety conscious now), all it’ll take is one crash, and people will be screaming for pilots to be back in the seats. The MAX crashes taught us that (yes, I know there were pilots, but they couldn’t override the automation).
That might happen - or not.
Assume that 25% of airlines replace their pilots with some software, and after 2 years the statistics show that they have 90% fewer accidents than airlines flying with human pilots.
Then even a single accident may not make passengers scream for pilots, as long as the "total picture" clearly shows that it is safer to rely on a "software pilot".
The MAX crashes were (in my opinion) caused by human errors - not errors by the human pilots, but errors by the guys at Boeing who built software relying on a single Angle of Attack sensor even though they were aware that sensor could fail.
The minimum requirement would have been to compare the input of multiple AOA sensors and, if they are not consistent, to turn off MCAS - and of course to inform the pilots how they should react in such a case.
They could override it. They just didn't.
@@endefael There were two crashes related to MCAS.
In the first one, the cockpit crew was not informed about MCAS and therefore could not override it.
In the second one, the crew did know about MCAS and how to "override" it by manually turning the trim wheels - but this was physically hard for the pilots to do and also took a lot of time; in the second accident, it took too much time.
@@elkeospert9188 I am afraid you do not completely understand what MCAS is capable of, what that failure looks like, what all the contributing factors are, and the huge role training played in both accidents. Just to give you a hint: in neither of the two crashes did the crew perform all 4 or 5 basic memory items they were supposed to. I highly encourage you to study them more deeply, and you will see that the automation did not have the capability of overriding them by itself. It just took them too long to take the appropriate actions in due time - if they were taken at all. I am not judging them as individuals, but they were exposed to it without being fully prepared. Not saying the aircraft couldn't have been improved either - any human project, including the 737, can be. But it was never as simple as MCAS's existence or failure: no accident happens because of an isolated fact.
You could absolutely use narrow AI to fly a plane if you input enough instructions. The problem, as with driverless cars, is that it could make an inexplicable decision that would kill everyone. A plane can already take off and land on autopilot as it is. You just don't want it making decisions in dangerous situations, which is why they also eventually banned driverless cars.
As someone who works with AI on a daily basis, this basically hits the nail on the head. AI did not replace anyone in my team, instead it took over the mundane repetitive work which is the longest part of a project, freeing up my team to focus on the final finishing portions of the deliverable. AI does 80-85% of the work in less than half the time, making my team more efficient and allowing us to take on more projects with the same staff. We refer to it as AI/Human hybrid where the AI is more of a partner to a human rather than a replacement.
Exactly our point! Thank you, feel free to give some feedback on our app!
"allowing us to take on more projects with the same staff" So there are fewer projects for other people to work on. If you do more work with the same people then couldn't you do the same work (not more) with less people?
@@avisparc I was going to say the same thing as well. By reducing number of man hours needed for a given project it has indirectly decreased employment. However, this will mean significantly reduced costs which might allow more demand. This will make the task of deciphering the changes in employment due to AI a bit more complex.
@@oadka that's a good point, it hadn't occurred to me that the Jevons paradox could operate in this situation. (en.wikipedia.org/wiki/Jevons_paradox)
I am very fascinated by the growth of AI but this video is my favorite explanations of AI so far. Bravo Petter and Marco!
Thank you!
Except it doesn't explain AI; it explains what a semantic database and a Generative Pre-trained Transformer (the GPT in ChatGPT) are. What we currently have is still not even A.I., just an interface to a semantic database.
Indeed, I've been working in AI for years too, and I really appreciated Marco's contribution. There is much buzz around AI, and it's pleasant to see people like Marco clearly dotting the i's.
@@MrCaiobrz Yes, what we currently have is narrow AI; what you're talking about is AGI (artificial general intelligence).
As for AI taking over control of the plane, there is ONE situation I can think of. If you remember the disaster where a plane flew in a straight line for a while before crashing near Greece with both pilots out: introduce something like the check used in trains, where the pilot has to confirm "yes, I am still awake and paying attention". If a pilot misses, for instance, two in a row, it could make sense for the 'AI' pilot to bring the plane to a lower altitude and broadcast an automated mayday, and then, if there is still no pilot response, it could make sense to try and land the plane.
So basically, as an absolute last ditch, if there's nobody qualified in the plane to land it...
@@kopazwashere Yup. The 'alternative' is of course a remote pilot à la drones, which is arguably a better solution, but this would be a backup-for-the-backup.
You do not need "AI" specifically for a last ditch solution like that. Garmin already offers a solution called Autoland for GA aircrafts where in a case of pilot incapacitation a passenger can press the emergency button and the plane will land by itself at the nearest suitable airport while making the proper mayday calls to atc. It even displays instructions to the passengers if they need to talk to atc
ah so something like a dead man's switch?
@@flyfloat That's fine if there is someone conscious to press the button. With Helios 522 and the Payne Stewart Learjet crash no one was conscious to do that. They are not the only examples either. As a last resort AI could have been very helpful to initiate autoland in these 100% fatal accidents.
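The "dead man's switch" escalation discussed in this thread is simple deterministic logic, which is rather the point the commenters are making. Here is a minimal sketch; the class name, prompt interval semantics, and escalation labels are all made up for illustration and don't reflect any real avionics system.

```python
# Sketch of an incapacitation watchdog: periodic attention prompts, and if
# the crew misses several in a row, the system escalates toward an automated
# mayday and, finally, an automated landing. All thresholds are invented;
# this is plain deterministic logic, no AI required.

class IncapacitationWatchdog:
    def __init__(self, max_missed=2):
        self.max_missed = max_missed  # missed prompts before escalating
        self.missed = 0

    def prompt(self, pilot_responded: bool) -> str:
        """Record one prompt cycle and return the system's action."""
        if pilot_responded:
            self.missed = 0
            return "NORMAL"
        self.missed += 1
        if self.missed == self.max_missed:
            return "BROADCAST MAYDAY"   # descend and alert ATC
        if self.missed > self.max_missed:
            return "INITIATE AUTOLAND"  # last-ditch automated landing
        return "REPEAT PROMPT"


wd = IncapacitationWatchdog()
print(wd.prompt(False))  # REPEAT PROMPT
print(wd.prompt(False))  # BROADCAST MAYDAY
print(wd.prompt(False))  # INITIATE AUTOLAND
```

Any single pilot response resets the counter, so the escalation only ever triggers after sustained silence — which is what distinguishes this from a button someone must be conscious to press.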
I love this channel! So excited when I see a new video in my feed.
I’ve recently been on my 3rd binge rewatching all of Mentour Pilot’s videos the past few weeks. Thanks so much for the fascinating entertainment and information!
Thank YOU so much for being an awesome fan and supporting what I do!
Excellent interview, thank you. I would be interested in more videos like this one.
Thank you!
Unfortunately you seem to be quite alone thinking that 😔 The video is tanking
Nooo, we want more of that!!! Greetings from Germany 👏🏽
@@MentourNow I'm fascinated by this video! I'm going to share it on Facebook and Twitter.
Aidan is so cool, really gives polite, coherent and useful answers! 😎👍🏻
Thank you! Glad you like him
I think this is completely wrong. If you approach the issue from the standpoint of what information is available to support AI, the clear answer is yes, even in Sully's situation.
The autopilot already flies the airplane. Altitude and heading can be changed by turning knobs; the computer can turn those just as trim is changed. Add Jeppesen charts, digital radar input, and ground controllers replaced by AI to 'talk' to the AI pilot, and all the information for safe flight is there.
For the Hudson River, again, consider the information available. Engine failures, altitude, heading, and distance back to LaGuardia are all known. AI would know it had to land, but where? A new problem for AI, but again, consider the information available. To me, searching for clear fields, beaches, etc. is a common enough problem to be part of AI from the start. Google Maps has that information now.
My background is computer tech. Final thought: I served in the Air Force during the war in Vietnam. The Hughes MA-1 fire control system in the F-106 could almost fly a complete shootdown mission, and that was 70 years ago.
AI replacing pilots and ground controllers is a lot closer than you think, and I'm not happy about that.
Yes, excellent choice of guest.
Thank you, thank you, thank you Mentor for bringing someone on like Marco who gets it.
I studied AI at University in the 80's and I am very much a Turing purist. I have worked with systems that I would categorise as 'advanced analytics' and 'machine learning' and have had rows with people who said that Turing's opinions are out of date after I accused them of re-badging their tired old software as AI to charge more money (which they are).
Back on topic: who is most scared of the current form of AI? The mainstream media are. Why? Because AI will start to present unbiased news feeds and put them out of work. The vast majority of the negative press is being generated by the press.
Thank you for your thoughtful comment and I’m glad you liked the video.
@@MentourNow It was, as you say, fantastic 🤣
Don't worry, in the media you can make AI politically correct too. It's a matter of setting the rules it follows.
It's not going to get better, but worse and far more refined.
😉
That was the best and most honest AI discussion I have seen to date. I get so fed up with people banging on about how AI will damage the job market - they often get upset when I point out AI cannot sweep streets or fill shelves in Tesco, so they will always have a job - but seriously, AI is a database algorithm and nothing more, and Marco explains that so well from an inside perspective; people need to take note. I will be sharing this video on Facebook and LinkedIn because this discussion needs to be heard by millions, so they understand what AI can do, but more importantly, what AI cannot do.
Thanks for a brilliant interview.
Brilliant. You addressed a question, (very comprehensively), that many have been pondering. Thanks.
Having the opportunity to participate in this interview with @MentourNow was an absolute honour and pleasure. I am both impressed and grateful for the amazing comments, perspectives, questions, and debates I see here in the comments. They truly are a testament to the quality of this community. I am especially thankful to those who have provided feedback and pointed out areas that I intentionally did not elaborate on during the interview, as well as suggesting improvements to my phrasing. I agree with those highlighting that I shared a somewhat oversimplified version of the subject matter, as I briefly mentioned at the start of the video. This was done intentionally to make the conversation accessible to as broad an audience as possible. However, for those wishing to delve into the nitty-gritty details, I would be more than happy to elaborate in a thread following this message. I will be tagging the most intriguing comments, but everyone is free to join in.
Elevator Operator:
From the point of view that I shared in the interview, you can argue that it's the evolution of a job. However, as you rightfully pointed out, there is another point of view which says that few to no people today have the job title "Elevator Operator". AI & technology most definitely can make, and have made, certain specific job titles redundant. But if we elaborate on that perspective, let's dive deeper into what it actually made redundant. It took over repetitive and potentially unfulfilling jobs, so that people who previously might have considered becoming an elevator operator now need to consider becoming an elevator engineer. If we look at the fluctuations of employment rates throughout the entire 1900s, when technology evolved and automated more things than ever before, we will notice that unemployment did not follow a growing trend, while the quality of life of everyone in the world steadily increased. The perspective I'm sharing is that automation has been only good for mankind, and there is no reason to believe that will change. After all, it is we who choose to create it; we are the creators of technology.
Unmanned Aircraft Topic.
As many of you have pointed out, there have been unmanned aircraft (such as drones) that have successfully been deployed. These however, are not flown by AI. Some of them might incorporate some elements of AI such as Face Recognition and many more. However, they are operated and dependent on code instructions which, as I pointed out in the video, are much more efficient and reliable for this purpose. Hence the answer to the question "Can AI fly a plane on its own" remains No. However whether automation, especially if properly leveraging AI, will be able to do that is a different question. One that I would still answer no to today but with less certainty than if it were only with AI.
Generative AI vs Other AI.
One of my primary emphases for this interview was to address the fear-mongering misconceptions that have been irresponsibly spread by the media and that have, for the most part, been centered around Generative AI. Unlike previous breakthroughs in AI, ChatGPT became an overnight sensation, and a lot more people have heard about it than about any other AI breakthrough. Now, I stated that AI, in an oversimplified way, could be described as "fake it till you make it". And I will stand my ground on this one by elaborating further. When I chose to use the phrase "fake it till you make it", I explicitly did so in an attempt to translate one of the core principles of all ML models: approximation. One of the incredible things about many ML models is how you can generate a desired output from an input through many functions without needing to know any of the logic inside the function. This is a foundation of AI/ML, and it is a principle used in just about all types of models, from classifiers, regression, and neural networks to transformers, entity extraction, and many more. I believe that so long as any AI we develop is driven by this approximation principle, we will simply not achieve anything other than Narrow AI. And most definitely we will not achieve consciousness.
Thanks for having you on Marco! It was truly illuminating!
@@yamminemarco Some AI models try to simulate various biological processes - like how the brain works, or how a hive of ants works. Approximation also happens in our own natural intelligence. But even a perfect emulation of the human brain/mind would have no advantage over the human brain/mind. And this is why I don't see general-purpose AI coming. And copying the weaknesses of the human mind, creating an "inferior human emulator" in the cockpit, wouldn't be the best option either.
Even if GPT is designed in a way that gives it the best chance to beat the Turing test, even with very large databases and lots of data, it is bad at many tasks, including procedural generation of game content, even for tabletop gaming. We can come back to this point and my experiences using GPT for this purpose, but the issue is what you described: it tries to use the "best option" one step at a time; it doesn't even consider its own generated answer as a whole. And it often doesn't understand which criteria from the prompt are relevant and important. I think ChatGPT and MidJourney aren't suitable for a production environment yet, but trying them out while gaming, learning prompt engineering, and evaluating these options is a much better approach.
Your claim is that Midjourney doesn't see airplanes without windows. But some large high-altitude long-endurance fixed-wing drones are, in essence, airplanes without windows. Midjourney (and its language model) doesn't understand that fixed-wing UAVs are a variant of airplanes, or how it could use that information. Please check the following futuristic image: cdn.midjourney.com/618fa0bb-c4b4-453d-b2ac-d5dc9850e007/0_2.webp
You tried to engineer a prompt to render the plane without front windows / windscreens. I tried a different approach, and my prompt engineering resulted in a picture of a plane with two jet engines but no windows and no windscreens. No new, better-trained model, and my approach still worked. Creating variants, using the blend command, using the describe command to help with even better prompts... I am sure that with enough prompt engineering we would get far better results.
Approximation isn't an issue because it is used by natural intelligence as well, and ability to approximate the results of any "unlikely" option is important when we want to invent new solutions. Approximation is the shortcut that makes HUMAN intelligence possible.
So, when I used MidJourney, I saw how it takes some random noise and turns it more and more into the image we want to see. We have multi-prompts and can prioritize the importance of prompts. If, in addition to "front windows", I also mentioned the word "windscreen" and gave them a priority of -2 or -500 or just "no"... it was easy to use more and more prompt engineering for better results. But due to the economic crisis I don't have the money to finance a lot of experiments with MidJourney; still, I think discussing prompt engineering here would make sense.
But when I started to learn about AI, it started with an extremely simple algorithm: the minimax algorithm, which has plenty of derivatives. It uses plenty of resources and should be optimized by prioritizing which options are checked first; we need to make sure that once it finds a "good enough" solution it doesn't waste endless resources on other things, and if an option is a dead end, it should move on to the next one.
So, if a machine learning algorithm can approximate the results of some potential actions, and it is well trained and reasonably accurate, it can quickly identify options that shouldn't be considered by the minimax search, or should be considered only as a last resort. Minimax and its derivatives can think "several steps ahead", and this is how they would choose the best options. We would have different kinds of algorithms (some of them with machine learning, etc.) to evaluate various scenarios.
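The hybrid this comment describes — an exact look-ahead search whose branches are pre-filtered by a learned evaluator — can be sketched in a few lines. In this toy version, a plain scoring function stands in for a trained ML model, and the "game" is a trivial number game invented purely for illustration.

```python
# Minimal minimax sketch in the spirit of the comment above: a cheap
# evaluator (standing in for a trained ML model) prunes obviously bad
# options first, so the exact several-steps-ahead search only spends
# resources on promising branches.

def minimax(state, depth, maximizing, evaluate, moves, apply_move, keep=2):
    if depth == 0 or not moves(state):
        return evaluate(state)
    # "ML" pre-filter: rank moves by the evaluator, keep only the best few.
    ranked = sorted(moves(state),
                    key=lambda m: evaluate(apply_move(state, m)),
                    reverse=maximizing)[:keep]
    # Exact search over the surviving branches, alternating min and max.
    results = [minimax(apply_move(state, m), depth - 1, not maximizing,
                       evaluate, moves, apply_move, keep)
               for m in ranked]
    return max(results) if maximizing else min(results)


# Toy game: the state is a number; each move adds -1, +1, or +2; we look
# three plies ahead, and the evaluator is just the number itself.
best = minimax(0, 3, True,
               evaluate=lambda s: s,
               moves=lambda s: [-1, 1, 2],
               apply_move=lambda s, m: s + m)
print(best)  # 3
```

Real systems (chess and Go engines, for instance) use exactly this division of labour: the learned model orders and prunes candidates, while the deterministic search provides the several-steps-ahead guarantee.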
As a seasoned software engineer, I can say that *some* aspects of flying will be supplemented by AI. The autopilot, autoland, technical-malfunction checklist items, memory items - all make perfect sense. Beyond that, I do agree with your assessment that a fully automated cockpit is a generation away.
I agree. It can help with workload: the checklists, monitoring ground and air traffic, emergencies, etc.
Yep, that’s where we will see it first.
Exactly!
It can help us drive our cars, busses, trucks safer & more efficiently.
Sort of like how autopilot works
This is the most accurate description of AI I've ever heard. Great 🧡
Not true. It's a description of Generative AI. Not AI.
@@NicolasChaillan But the analogy applies to any AI. What you don't understand is that humans are exceptional in their "capabilities". Whatever that means.
@@Oyzatt Clearly you don't understand what the technology is capable of and what we have already done at the U.S. Air Force and Space Force. I was the Chief Software Officer for 3 years.
@@NicolasChaillan let's crystallized everything here for the sake of clarity . Ai is not creative like humans, it's simply regeneration what has been feed in it, that clearly shows it boundaries. In the military space it can be capable of many things but not cognitive stuffs, if you'll agree with
@@Oyzatt wrong. That's generative AI. Not AI as a whole.
One of the best uploads ever, Petter 👏🏻
Getting cross-industry insights associated with aviation is fantastic! It would be valuable to watch and learn from such discussions/exchanges. An amazing breakthrough, Petter! Congratulations.
Warm greetings from Germany. Thank you very much!
This was a wonderful primer on AI.
Aviation specific but useful for anyone questioning what ML means for the future.
Thank you Petter and Marco
Masters Degree in Computer Science here, with more than 20 years experience. This is the best explanation of AI that I have seen anywhere! Well done!
Artificial intelligence is not and never will be. This was so good I shared it in my LinkedIn stream because every other post is some BS AI-related fever dream.
Thank you! Feel free to tag me in the post - Petter Hörnfeldt
People have become too dumb to understand this simple concept. They think of AI as God.
What do you think is so un-replicatable about a biological brain?
Physician here.
That phrase "AI is not and never will be" is not anything other than wishful thinking. Anyone can say anything.
Finally an intelligent discussion on current AI. Really didn't expect it on an aviation channel. Thanks!
Computers running normal code (not AI) will be more useful because of the predictability of their output. AI can be useful as a user interface (voice recognition) and for general awareness. One example would be an AI that listens to all the ground control transmissions, so it is aware that you are supposed to turn onto a certain taxiway and reminds you if you are starting to turn at the wrong one, or if you are about to cross an active runway on which another plane has been cleared to take off or land by a different controller. The Miracle on the Hudson is an excellent example of why I want a human pilot flying any plane I am in.
Another aspect of programmed automation is that you can tell the pilots what the parameters are for "decisions" made by the automation so that they understand exactly when the automation is outside of its parameters and thus have more information about whether to trust it or not in a particular situation.
I work in tech with ML/AI at various points over the years. Agreed with the assessment here, it’s really nothing to freak out about. The reason people care is because there are a lot of extremely wealthy “entrepreneurs” who want to use it to make money even easier than they do now. LLMs like chatgpt will have their uses, but it is not the revolution everyone is afraid of.
Hopefully.
I frankly disagree; it is an amazing enabler, capable of revolutionary acceleration in productivity and value generation, in both productive and recreational activities. It is the new steam engine of the 21st century. On the other hand, I cannot say anything about dangerous outcomes. There are indeed potentially problematic scenarios, as described in Bostrom's book Superintelligence.
10:00 The discussion about the feedback loop and the ability to identify weak points and sources of errors was one of the eye-openers of the essay "They Write the Right Stuff", on the US Shuttle software group (pretty much the only component of the Shuttle program which got praise after the Challenger accident). The Shuttle group's engineering was set up such that *the process* had the responsibility for errors, in order to make the best use of human creativity and ingenuity without having to suffer from its foibles. Any bug which made it through the process was considered an issue with the process, and thus something to evaluate and fix in that context.
Absolutely fantastic video
Glad you liked it.
Love Mentour videos; they are always very well documented. The guest in this one seems to me not quite the expert I hoped for. He is just stating the oversimplified view of AI that seems to flood the internet these days. Here are a few comments, if I may:
* AI is not just ChatGPT. The GPT architecture is just one of many (BERT, GANs, etc.). Many of these are not as visible as ChatGPT, but we have already been affected by AI at large scale (Google Translate, YouTube suggestions, voice recognition, etc.)
* AI is not just a database system. In the process of deep learning, neural networks are able to isolate meaning (see sentence embeddings, convolutional neural networks, etc.). AI is able to cluster information and use it for reasoning, and I can give you many examples. GPT does not only generate the next word based on previously generated words; it is also driven by the meaning constructed in the learning process. Actually, it does not even generate words; it generates tokens ("sub-words"). It is not a probabilistic system.
* AI could land an airplane without any problems if trained to. Full self-driving AI for cars is a far more complex problem, and it is amazing what current AI systems can do (Tesla, Mercedes, etc.). But as somebody said, the first rule of AI is: if you can solve a problem without AI, then do it that way. AI is not very efficient to train. Currently we can fly airplanes without pilots without using AI (autopilot, auto-landing systems, etc.). On the other hand, replacing pilots completely will not happen any time soon, if only for the simple reason that people will not board a plane without a pilot any time soon. But it is creeping in. As mentioned in a previous video, the industry is moving from two pilots to one.
* AI will replace jobs (and it will create new ones). One example is customer support, with all the bots that answer your questions. What do you think Med-PaLM 2 will do?
... ;-)
One thing I agree with the guest. AI is an enabler for new opportunities. Also, good idea to bring aviation in the AI discussion.
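The point above about flying without AI is worth making concrete: classic autopilot functions are deterministic control laws, not learned models. Here is a minimal sketch of that idea — a PID controller of the kind at the heart of rule-based automation. The gains and sample errors are invented for illustration, not real flight-control values:

```python
# Minimal sketch of rule-based automation: a PID controller like the ones
# underlying classic autopilot modes. No learning involved -- the behaviour
# is fully determined by three fixed gains. All numbers here are invented.

def make_pid(kp, ki, kd):
    """Return a stateful PID controller as a closure."""
    state = {"integral": 0.0, "prev_error": 0.0}

    def step(error, dt):
        state["integral"] += error * dt
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return step

pid = make_pid(kp=0.02, ki=0.001, kd=0.05)
# Feed it a shrinking altitude error (ft) and print the resulting commands:
for error in (1000.0, 800.0, 600.0):
    print(round(pid(error, dt=1.0), 2))  # -> 71.0, then 7.8, then 4.4
```

Unlike a trained model, the same inputs always produce the same outputs, which is exactly what makes this kind of automation testable and certifiable.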
You bring up very valid points. I've created a main comment and thread where I've elaborated a little more and you are welcome to join in.
Very cool and insightful material guys 😉 thank you
Wow, what a knowledgeable and well articulated guest. He really understands the subject matter.
Thank you both.
Very cool! I like this format very much. Petter discussing with other passionate people about not explicitly aviation related topics. Very nice video.
Awesome vid! Airbus already uses machine learning (the basis of AI) to improve engineering and airline fleet operations. They work with a company called Palantir, and together they created "Skywise" for this.
Really interesting video, thanks Petter and Marco.
This channel is on another level. This is kind of content we need (and want as well 😄) . Thank you so much Petter and Marco. This was truly informative.
Excellent video, thanks for bringing a real expert on the subject. As a mathematician working as a software engineer, I am so happy to see a voice of reason talking about what we call AI. Don't underestimate automation, though: I am mind-blown by Garmin Autoland, and I think we might see similar automation systems in commercial aviation at some point, so I wouldn't rule out single-pilot operations in the future.
As a pilot, I'm happy to hear my job is safe! (For now)
Yep! And for a while to come I would say.
@@MentourNow I hope it's a long enough while. I'm 15 now and want to become an airline pilot in the very near future.
I got completely sidetracked by how beautiful the AI Van Gogh airplanes were.
Ok again a marvelous video. Thank you so much
Even if it flew the plane perfectly 99.99% of the time, it would be devastating the 0.01% of the time it fails due to the weird hallucinations AI sometimes has.
Kind of like the weird hallucinations humans sometimes have?
Airplanes spend most of their time flying at high altitudes, where there are often minutes available to correct a problem. The 0.01% is not necessarily, or even likely, to be devastating. Closer to the ground, of course, you are correct.
Well, that's the tension here. If it fails to save United 232 and Sully's flight, but it doesn't crash in AF447, the Lion Air and Ethiopian MCAS accidents, Colgan, AA587 at New York, the Helios hypoxia flight, Tenerife, the PIA gear-up go-around, the AA965 CFIT at Cali, TAM 3054 at Congonhas, and a long list of accidents that were either caused by the pilot (due to distraction, confusion, spatial disorientation, disregard of procedures, fatigue, etc.) or that were not avoided when the pilot could have avoided them just by following procedures (the MCAS accidents, AF447...), is it worth the price?
@@playmaka2007 Right? This channel is full of examples of pilots hallucinating, especially in stressful (high-workload) situations - something AI never has to deal with, since it can't stress.
A 0.01% failure rate is still better than a 0.5% human error rate.
GPT models contain a world model - they are capable of performing calculations and keeping track of state. They can play games they've never seen before. It's not as simple as just accessing memories. It's accessing abstract concepts and using a predictive model it has developed that can reliably predict information, and the only way to do that is to actually process the information. I.e., it's not overfit. It can actually perform reasoning. It does have abstract ideas about what things mean. However, its entire world is text, as seen through the lens of its training data, so of course it currently has limitations.
Excellent interview with Marco. Very informative, objective, level-headed, practical. Always great content and awesome job! 🙏
Great information and perspective on AI. There is definitely a ton of fear-mongering going on, and the analogy "fake it till you make it" truly makes sense!
@Windows XP The news media and others exploiting AI tech by saying "AI is taking your job" or "AI will control everything" when in reality, as explained in this video, AI really can't take control of anything...just yet.
Thank you very much for this really informative interview, which clarifies what AI is and can do - and what AI is not and cannot do!👍 That's a core point of knowledge, not only in the aviation business.
AI can help pilots by automating routine operations - routine cabin announcements, early warnings of turbulence, verifying and implementing various checklists, sending automated communications to the control tower and vice versa, etc. There is no way AI will not be integrated into the cockpit in the near future.
I can understand why you partnered with Marco for this, because he explains AI in a simple way, the same way you explain aviation-related information.
There was a good point made: an AI pilot assistant must have a way to strongly signal to the pilots whether it makes a helpful suggestion that the pilot may overrule or whether it does an emergency interjection that the pilot must simply have faith in. Like when the TCAS wants to step in to avoid an imminent collision.
Hej!!! The best discussion 👌 Thanks Petter!
Glad you liked it!
In light of one of your recent videos, I just realized AI might be very useful to parse the relevant parts of NOTAMs, and maybe even remind the pilots as the flight progresses.
I'm an AI researcher, and I always struggle to explain why we are not talking about sentience, but basically big prediction machines. Marco did a great job there! Thanks for bringing an actual expert :)
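To illustrate the "big prediction machine" point, here is the idea shrunk to a toy: a bigram model that just emits the most frequent continuation it saw during training. Real LLMs use transformers over subword tokens and far more context, but the core task — predict the next token — is the same. The tiny corpus is of course invented:

```python
from collections import Counter, defaultdict

# Toy "prediction machine": a bigram model that picks the most frequent
# next word seen in training. A deliberately tiny stand-in for how LLMs
# are trained on next-token prediction; the corpus is invented.
corpus = (
    "the pilot flies the plane . "
    "the pilot lands the plane . "
    "the copilot monitors the pilot ."
).split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Most likely continuation seen in training -- nothing more."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))    # -> pilot ('pilot' followed 'the' most often)
print(predict_next("plane"))  # -> . (every 'plane' was followed by '.')
```

The model never "knows" what a plane is; it reproduces statistics of its training text, which is the intuition Marco was conveying, minus all the scale and nuance.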
Being a data scientist and aviation enthusiast: a situation such as Sully's could definitely be handled by AI in the form of a recommendation system or a disaster-managing co-pilot system, where the system quickly identifies a dual engine failure and determines the shortest route to the nearest airstrip. This, however, requires intensive training on large simulation datasets and would involve multiple countries across the world. Model inference would also require extremely powerful computers on board to process such large streams of data quickly, which might drive up the cost of the airplane. So, theoretically it is possible, but practical implementation likely won't happen any time soon.
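The deterministic core of that "nearest reachable strip" idea can be sketched without any AI at all; the hard parts are exactly the data, training, and certification mentioned above. All positions, the altitude, and the glide ratio below are invented for illustration:

```python
import math

# Toy sketch of a "where can we glide to?" helper after total engine failure.
# Positions (ft), altitude and glide ratio are invented; a real system would
# account for wind, terrain, configuration and certified performance data.

GLIDE_RATIO = 17  # horizontal feet travelled per foot of altitude lost

def reachable_fields(position, altitude_ft, fields):
    """Return (distance_ft, name), nearest first, for fields within glide range."""
    max_range_ft = altitude_ft * GLIDE_RATIO
    options = []
    for name, xy in fields.items():
        dist_ft = math.dist(position, xy)
        if dist_ft <= max_range_ft:
            options.append((dist_ft, name))
    return sorted(options)

fields = {
    "LGA-13": (40_000, 10_000),
    "TEB-19": (35_000, -20_000),
    "Hudson": (5_000, 0),
}
# At 2000 ft, only the river is within glide range in this toy setup:
print([name for _, name in reachable_fields((0, 0), 2000, fields)])  # -> ['Hudson']
```

The geometry is trivial; what makes the real problem hard is trusting the inputs and validating the recommendation under every failure mode.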
Agreed.
Indeed one of the most non-artificially intelligent discussions on the topic. Thank you.
I love these videos while I'm studying for A-levels!
Great video. I learned a lot about AI in general. Well done for tackling this topic.
Someone honest talking about "AI", for a change :) thank you
You are welcome. That’s what we are here for
8:42 I don't agree that it doesn't have accountability. You can tell it that this area of the ground is "people" and this airplane is "people"; if "people" suffer, then you "lost the game". It will then analyze all the available variations to save people. It has "accountability" in real time - you program it.
Also, I don't agree that AI can't stop - giving ChatGPT as an example is just wrong/uninformed. You get the probabilities and you can set a threshold: if it is unsure about something, it can pass control back to the pilot, just like a regular autopilot, using probabilities and gathered knowledge.
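That threshold idea fits in a few lines. This is only a sketch of the hand-back logic described above; the confidence values and the 0.95 cutoff are invented:

```python
# Sketch of a "hand it back when unsure" gate: apply a model's suggestion
# only above a confidence threshold, otherwise defer to the pilot.
# The threshold and confidence numbers are invented for illustration.

CONFIDENCE_THRESHOLD = 0.95

def act_or_defer(suggestion, confidence):
    """Apply a suggestion automatically only when confidence is high enough."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"AUTO: {suggestion}"
    return f"PILOT: low confidence ({confidence:.2f}), please verify: {suggestion}"

print(act_or_defer("maintain FL350", 0.99))     # handled automatically
print(act_or_defer("divert to alternate", 0.60))  # handed to the pilot
```

The open question, of course, is whether a model's self-reported confidence is itself trustworthy, which is part of the accountability debate in the video.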
I think you’re confusing conscience with accountability. Think of it more as in legal liability, and being able to explain why it took certain decisions based on actual understanding of inputs and systems at play. Until the developers of “AI” models can consistently and repeatably fulfill that condition, you will be hard pressed to see companies, manufacturers and any other entity who may be under legal liability to allow implementation of “ai” as it is today in direct flight operations.
This was a great discussion of the whole "AI" marketing going on right now for something that isn't even really AI. Speaking as someone in the tech field: current AI is nothing more than a messy, giant computer program that must always answer your question - and it doesn't even have to be a truthful answer.
Really fascinating and informative discussion!
Thank you for the post, my teenager is interested in becoming a pilot so I’m grateful for your opinion.
It would be boring for a pilot if the co-pilot were eliminated; humans need people to talk to at work, especially now that cockpits are sealed.
This video is spot on and agrees with what most other experts in the field are saying.
One thing that wasn't really touched on in this video is the difference between an autopilot and an AI controlling the airplane. When autopilots were created, there was similar concern about pilots being replaced because, as the name suggests, the point of an autopilot is to control the airplane automatically. Autopilots are advanced enough that taxiing around the airport is about the only thing that isn't automated. So, one may be tempted to ask: why doesn't the autopilot steal pilots' jobs, and what does AI bring to the table that could threaten them?
The answer is that autopilot doesn't have the decision-making capabilities necessary to safely fly an aircraft, and the AI of today isn't advanced enough to have that either, as this excellent video explains. Just because we can automate the mechanics of flying an airplane doesn't mean we can automate the decision-making behind why we fly a particular way or follow a certain set of procedures. An autopilot might very well be able to fly an ILS approach more accurately than any human could, but that doesn't mean it understands when it needs to fly an ILS approach or how to set up for one. As the video explains, AI is also incapable of creative thinking; it's only able to take what it has seen before and apply that to the situation. This understanding of why we do things and what makes some action appropriate or not is the crucial element that is missing from these automatic systems, whether they be rule-based (autopilots) or machine-learning-based (AI).
That said, some use cases that AI could be used for, in addition to what the video explains, include improving flight planning and communication with ATC and other pilots.
For flight planning, consider all the NOTAMs, METARs, and so on that pilots have to sift through. It is a lot of information that is usually not formatted in a way that is very human readable, and even when it is, pilots still have to pick out what is important and relevant to their flight. AI could parse through all that information, give pilots a summary of the important information out of it, and even suggest alternate routes if it were paired with weather and traffic prediction models. That could be a way in which ATC is helped out, also: helping choose the routing for flights to maintain traffic separation and expediency.
Of course, any such tool would have to be thought through carefully. Pilots would still need to go through the materials to check that what the AI said is correct, but at least they would have an idea of what to expect which might speed up the process. Still, Mentour has done videos on confirmation bias contributing to accidents, so pilots would need to be trained to use these tools effectively.
Another use of the tool could be in communication. Paired with radio or text based interfaces, these models could assist in translation when non-native English speakers or other languages are being used with ATC, which could improve situational awareness and even clear up miscommunications. Again, care must be taken, since these models could also translate incorrectly, but there are other translation/speech recognition/text to speech tools that could be paired with AI to reduce that risk.
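The NOTAM-sifting step above can be illustrated with a deliberately crude sketch: score raw notices for relevance and surface the highest-scoring ones first. A real assistant would use a language model rather than keyword matching; the notices and weights here are invented:

```python
# Crude sketch of the NOTAM-sifting idea: rank raw notices by a relevance
# score so the important ones surface first. A real tool would use an NLP
# model; the example NOTAMs and keyword weights below are invented.

KEYWORDS = {"RWY": 3, "CLOSED": 3, "ILS": 2, "U/S": 2, "TWY": 1, "OBST": 1}

def rank_notams(notams):
    def score(text):
        return sum(w for kw, w in KEYWORDS.items() if kw in text.upper())
    return sorted(notams, key=score, reverse=True)

notams = [
    "OBST CRANE ERECTED 2NM EAST OF FIELD",
    "TWY B CLSD FOR PAINTING",
    "RWY 22L ILS U/S, RWY 22L CLOSED FOR MAINTENANCE",
]
for n in rank_notams(notams):
    print(n)  # the runway/ILS notice ranks first
```

Even this toy shows the confirmation-bias risk mentioned above: anything the scorer (or model) under-weights silently sinks to the bottom, so pilots would still need to review the full list.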
This was a fascinating discussion. As a pilot for a major airline, I spent many hours in a simulator preparing to employ procedures learned from generations of pilots. As a technical rep for pilot associations, and for my own interest, I spent many more hours studying accident and incident reports and hopefully learning from them. I spent many hours in the air seeing how another pilot managed events. Like just about every pilot, I spent even more hours, often over a beer, talking about aviation events I had experienced. In those ways I built a base of knowledge that stood me in good stead when I had to deal with events myself. This process, although less structured, resembles the building of the knowledge base on which an AI depends. Certainly one can point to incidents that an AI would find difficult to handle, although I'm not sure that the 'Miracle on the Hudson' is one. I can imagine that an AI would have the knowledge that, when no other option is available, you put the aircraft down in any space clear of obstacles within range. The QF32 incident of an uncontained engine failure might be more difficult, since the crew there had to ignore some of the computer-generated prompts. QF72 would also be unlikely to be fixed by an AI, since it involved a system fault combined with an unanticipated software failing.
So I agree that there would be situations that an AI could not resolve. But would they be more than those that pilots don't satisfactorily resolve? Possibly not. It may be that even with current technology the overall safety, measured as number of accidents, would be improved.
However there is another issue. Would passengers fly in an aircraft with no-one up front? I know many people who would not. But I also know people who would choose the flight that was 10% or 20% cheaper.
And of course there are the non-flying events that a pilot is called upon to resolve. I can't see any current AI finding an ingenious way to placate a troublesome passenger. I found that walking through the aircraft after making a PA announcing a long delay was far more effective than the PA alone. Just seeing that the pilot had the confidence to show their face made pax believe what they were told. I regret that some of my ex-colleagues didn't believe this.
Something that does worry me, and which is not yet down to AI but is already a problem that AI would make worse, is skill fade. The less a pilot is called upon to exercise their skills, the more likely it is that those skills will not be good enough when called upon.
They could make the flight 50 or even 90% cheaper; I wouldn't buy a ticket for a flight with no pilot and first officer. There is something about the psychological factor of fear. Although we know far more people are involved in car accidents than plane accidents, the fear of flying nevertheless remains, because it is mostly about the knowledge of being powerless to do anything whatsoever in a plane when something goes wrong. This is why trusting artificial intelligence to do the job of a human mind would be a step too far for me. I guess it is not about how accurate artificial intelligence may become at recognizing a situation and finding viable solutions to problems, but rather my own sense of trusting a human mind much more, since it works similarly to how my mind works. Current AI does not work the way the human mind works; it only imitates some aspects of the human thinking process.
@@evinnra2779 As I said, I know people who think the way you do, but I also know people for whom the price matters more. If the operators see more profit then they will pitch the price to maximise that and employ whatever marketing they need to.
This generation might struggle to fly with AI; the next generation won't give it a second thought!
Go fly in Asia or Africa... it's unpredictable, weather-wise and ATC-wise.
And sometimes there are subpar local pilots / maintenance standards.
Been there, done that, many times.
@@giancarlogarlaschi4388 I've flown in Africa and Asia. The unpredictability is no worse than other places. I used to tell my copilots that the one thing you know for sure about a weather forecast is that it's wrong. It may be a little bit wrong or it may be completely wrong. The lowest fuel I ever landed with was at Heathrow, thanks to a surprise. In the USA the weather can be unpredictable. I landed from a nighttime Canarsie approach in a snowstorm; the next day we strolled down to the Intrepid in our shirt sleeves.
Such a great interview, lots of good knowledge.
Thanks for a video without the hype. Back in the 1980s, I was thinking of getting a PhD in AI, and I have continued to follow the development of the field ever since. So, kudos for finding an expert who gives it to us straight about what "AI" is and is not.
It's hard to predict whether true AI will ever exist. Nearly 40 years after I left school, we don't seem to be much closer to building an actual intelligence. There could be a breakthrough today, or 100 years from now our descendants may be trying to figure out why we wasted so much time on this dead end. As it usually is in life, what actually happens will probably be somewhere between those extremes.
It seems like every crash video boils down to, "but the first officer got flustered and didn't realize that the down stick was in doggy mode and the autopilot had defaulted to protect-the-kitty mode, so it did X." A computer can be programmed to KNOW all of that and NOT miss the checklist item that needed to be reset to "A" mode, or whatever. For instance, in the "Miracle on the Hudson", a camera (or some other form of monitoring) watching the engine inlets would "see" a frickin' goose get sucked into an engine; would automatically, in seconds, determine whether it was too damaged to be restarted (and, if not, run the restart checklist); would know exactly what the flight characteristics of the plane were going to be from then on; would realize that returning to the airport was a no-go; would detect emergency landing options and decide on the Hudson; and would contact the field, air traffic control, the police, and the coastguard simultaneously and instantly, while informing the passengers and crew of what was about to happen and maneuvering the plane around for a water landing, making sure to touch down at the ideal angle to avoid having the plane break up - if, indeed, that was the only option. It is entirely possible that the plane, knowing exactly which engines were out and EXACTLY where the other air and ground traffic was (and was going to be), would INSTANTLY throw the plane into a maximum bank, deploy flaps, re-lower the landing gear, broadcast appropriate messages and crew and passenger instructions, pull out of the bank, and land the aircraft back on the field - a feat that no human flight crew could hope to achieve in that amount of time.
Or, who knows - since the subsystem of the expert system involved with vision wouldn't have been occupied with everything else involved in getting an airplane into the sky, and could also have a much greater field of view than the pilot, it may well have noticed the flock of geese and modified the takeoff profile temporarily to avoid them. I'm a huge fan of pilots, but I will say it again: modern aircraft have too many disparate systems, each of which has a billion different (but eminently knowable) states and settings and errors and things that can go wrong. It is too complicated for a human pilot and needs to be either GREATLY simplified or completely under the control of a triple-redundant computer system, IMHO.
This was sooo informative. I am retired so I haven’t worried about AI taking my job (what, it’s going to sit in my hammock?) and therefore have not paid a lot of attention to it, but now I feel I have a pretty good understanding of what it is and what it isn’t, what it can and cannot do. Thanks, Mentour!
AI may make it so expensive to live that you cannot remain retired.
@@philip6579 That won't happen because of ChatGPT and its clones; those are simply a slightly more advanced autocorrect.
I really disagree with many of your expert's claims, but I will emphasise just one. He said that AI doesn't understand what it is talking about and simply uses things it was trained on to predict the result. He didn't give any arguments to back this up; he just stated it as a given.
I've got two examples to show that AI can think and understand.
The first is the famous move 37 played by AlphaGo. The model was trained on millions of games but had never seen such a move, because no human had ever played it. So in this case you can't say it just combines things it saw. It understands how the game works at a deeper level than the simple rules of the game.
The second is an example from GPT-4:
"Imagine that I put a stone in the cup. I covered the cup with a cutting board, then turned the cup upside down with the cutting board and placed it on the table. Then I pulled the board out from under the cup and then picked up the cup and dropped it on the floor. Where is the stone?"
It answered that when you pulled the board out from under the cup, the stone probably fell onto the table, and when you picked up the cup, the stone was left on the table.
To answer that problem you have to really understand the relations between the objects mentioned in the question, and to some degree understand physics. I have no idea how one can claim that in this example the AI just combined sentences it saw during learning and predicted the next words without understanding anything.
I really like your videos about aviation and have learned a lot from them as a hobbyist, but I hope you will also invite experts who disagree with your opinion (and there are lots of them in the field of AI).
As we intended to make the video relevant to a broader audience we intentionally oversimplified it. I've created a main comment and thread where I've elaborated a little more and you are welcome to join in.
All of the items you mention above are simply trained. They don't prove understanding...
In order to implement AI in flying aircraft, I think there has to be a major overhaul and rethink of the entire way of flying and operating the flight deck. Airmanship is almost entirely based on previous experience. Also, systems already exist for detecting the wrong runway, wind shear, terrain ahead, etc., so these don't need to be redeveloped and replaced by AI. As for AI helping in emergencies, a lot of emergencies require quick action and muscle memory from the pilots. Introducing a third "person" to the flight deck might be seen as too much interference. It also risks clouding the judgement of the pilots, as they themselves will have reduced situational awareness.
As a programmer myself, I understand where the guest is coming from, and I agree with a lot of what is being said.
However, could an AI model, trained on the entirety of human knowledge in aviation to date, all scenarios and outcomes, become an indispensable crewmember who would notice when CRM has broken down and act as a voice of reason when the humans are in an upset? Could it notice from the instrumentation and the inputs that whatever the crew is doing is making the situation worse, and start shouting at the captain to reset their head and check the flight directors, or something constructive?
In the case of rapid decompression and both pilots' incapacitation, could it achieve level flight at a low altitude?
Could it take over the boring tasks of making sure the correct radio frequencies are being used, communicating with the tower, and taking on some of the predictable workload in the cockpit?
I think it could! I genuinely think AI could make aviation safer, in its current state, if used to its greatest advantage - having access to a lot of things that happened and how to solve them.
Time-critical weather prediction/modelling is mathematically one of the hardest things imaginable: so much data, and it's like a 4D fluid-dynamics show on steroids, with positive and negative feedbacks all interacting instantly. AI will be a significant benefit here.
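The "mathematically hardest" point has a concrete face: the atmosphere is chaotic, so tiny measurement errors grow quickly, which is what bounds forecast horizons. The logistic map below is the classic toy stand-in for that sensitivity (it is not a weather model):

```python
# Toy demonstration of why time-critical weather modelling is so hard:
# chaotic systems amplify tiny input errors. The logistic map (r = 4) is
# a classic stand-in for this sensitivity -- it is NOT a weather model.

def diverged(x0, y0, steps=60):
    """Largest gap between two trajectories started a hair apart."""
    x, y, biggest = x0, y0, 0.0
    for _ in range(steps):
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)
        biggest = max(biggest, abs(x - y))
    return biggest

# Two "forecasts" whose initial state differs by one part in a million:
gap = diverged(0.400000, 0.400001)
print(gap > 0.01)  # -> True: the tiny input error grew by orders of magnitude
```

This is why better models and more compute extend forecasts by hours rather than weeks, and why ML forecasting aims at speed and pattern extraction rather than beating chaos itself.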
For those going "oh no, improvements in AI will steal our jobs": unless your job was obsolete to begin with, or requires no technical experience or improvisation, you have nothing to fear. Don't believe me? Look at the advent of the calculator - people actually thought it would make mathematicians obsolete, but a few decades on, that hasn't happened.
That was a fascinating video; I would love to see another with a deeper dive into some of these things and more questions.
Great episode
What a fantastic organic conversation. Interesting and fresh perspectives on this topic. Thoroughly enjoyed. Cheers.
AI has just made pilots and truck drivers more effective; it will not replace them. Thanks for the information about AI.
Good to know because I do both of them in the USA
Marco has a great way of explaining things.
My take is that, at least in the first generation, it wouldn't make any inputs un-commanded by the pilots; instead it could be integrated with the sensors and make quick suggestions in emergency situations, explaining right away why it thinks that is the correct solution.
The pilot then has time to analyze the suggestion and decide for himself whether it is valid and applicable.
So no worries about AI "taking over airplanes" - instead, I think pilots should welcome the idea, so long as it acts as a guardian angel that makes suggestions as needed but never actually takes control.
Cheers
Oh, I like how Marco defined AI. This truth is rarely revealed, because they know we would start distrusting it once we knew that AI is not actually intelligent (at least not yet). As he said, now I understand that what the AI companies are trying to do is "FAKE it until [they] make it!"
I am in my mid-fifties. When I was a kid, you had a telephone hooked to the wall with a rotary dial, meaning you had to wait for the dial to come around and return for each number. You had to remember people's phone numbers by heart or have them written down, which kept you engaged in thinking and remembering things. I realized when I got older that I didn't remember as much, because it wasn't a necessity; we became dependent on computers to tell us things instead of having to remember them, look them up in a book, or think about them. I think AI is OK for some things, but some jobs are always going to need human interaction, IMO. Great video as always.
I was a controls expert. I automated multiple processes and reduced the humans on a shift from 6 or 7 to 2 or 3. We still needed to keep humans on watch to protect the city when a sensor went down. I could provide alarms for damaged sensors, but I couldn't provide controls for a damaged or misreading sensor, because it may fail in thousands of ways.
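That sensor problem is worth making concrete: simple validity checks catch gross failures, but a sensor that fails *plausibly* slips right past them, which is exactly why humans stayed on watch. A toy sketch with invented limits:

```python
# Sketch of the sensor-validation problem described above. Range and
# rate-of-change checks catch obvious failures, but a sensor that drifts
# within plausible limits slips past both. All limits here are invented.

def check_sensor(readings, lo, hi, max_step):
    """Return a list of alarm strings for one sensor's recent readings."""
    alarms = []
    for i, value in enumerate(readings):
        if not lo <= value <= hi:
            alarms.append(f"sample {i}: out of range ({value})")
        elif i > 0 and abs(value - readings[i - 1]) > max_step:
            alarms.append(f"sample {i}: jumped {abs(value - readings[i - 1])}")
    return alarms

# An obviously broken pressure reading trips two alarms:
print(check_sensor([50, 51, 52, 120, 53], lo=0, hi=100, max_step=10))
# A slow in-range drift produces no alarms at all -- the hard failure mode:
print(check_sensor([50, 52, 54, 56, 58], lo=0, hi=100, max_step=10))  # -> []
```

The second case is the one that "may fail in thousands of ways": every check you add only covers the failures you thought of.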
Fascinating interview. Thank you, Petter.
There's also an element of trust: the majority of people wouldn't trust an AI flying a plane they're on nearly as much as a human pilot, no matter the depth of the autopilot assistance.
That'll be true until AI earns people's trust. But people will also need to learn that each individual AI, even one arising from the same source code, will be as individual as humans are. The only time one AI will be the same as another is when the trained model is copied; otherwise it is likely that every trained AI will be completely unique, and can no more be counted on to be like any other than you can be counted on to be like anyone else who has ever existed.
@@tonysu8860 I don't see how that's a good thing. If we're going to entrust hundreds or thousands of lives to an AI, I'd kinda hope it would be well documented and reliable. I for one will never be happy in an AI-piloted plane. My main argument is that beyond a certain program size, literally nobody knows everything it does and how, and bugs, exploits, and issues lurk all over the program. And that's not even counting self-learning AIs.
One thing that people need to understand about Sully is that he was not only a very capable pilot with a very calm and grounded personality, he was also good at playing "what if" games in his head.
So, I suspect that he had already contemplated this scenario and came up with some alternatives.
I trust "an expert" who says 'I don't know for sure' more than one who bluntly answers with a binary 'yes' or 'no'.
My wife is somewhat slowly learning how to drive; she has problems with perceiving distance, speed, and direction (if she's supposed to turn left on a new trip, she may well turn right), as well as with mental mapping. She can drive the car fine so long as I'm co-piloting.
Anyway, she often asks me at stop signs and unprotected left turns, "Can I go?" - meaning, is there enough space before the oncoming car. Usually when she asks me this, if I were driving, the answer would be yes. But what I always tell her is: if you aren't sure, the answer is no. Once she has gone, I will tell her that I would have gone back when she asked, but she has to judge for herself. I also firmly believe that I have the ability to tell her not to go, but that it is dangerous, sitting where I am on the opposite side of the car from usual, to make the positive decisions for her.
These are also dangers of AI.
Very, very good talk!!!
You've neglected two entire new areas:
1. Military unmanned companion fighter aircraft, assisting a (currently) piloted fighter.
2. Totally automatic (but supervised) eVTOL air taxis, along with auto-ATC.
What an excellent discussion. Much appreciated.
Such a clear view of what AI actually is, after all the nonsense in the newspapers. Thanks.
This conversation was amazing! It's the clearest explanation of what AI really is that I've heard up to now. I'm going to share this video with a lot of people! Petter, I'm a huge fan of your YouTube channels! You too have a gift for laying complicated topics out in clear, digestible terms! Thank you!!👏🏼👏🏼🙏🏼
I think AI could have a role in training situations. Imagine sitting down at your computer and having a discussion with a brilliant AI on any topic. Airlines could have AI instructors that help staff learn different things. I know I learn better by hearing things explained, but it's not always convenient to go to a scheduled class. If a class had an AI instructor, I could take that course any time, anywhere.
I have seen simulations showing that Sully could have made it back in time if he had turned right away. This isn't a knock on him - he followed the procedures laid out and did a fantastic job. The goal for AI should be the ability to diagnose the problem right away and give guidance to help the pilots out.
It would have been better if it was a Boeing...
Petter, this was one of the most interesting vids you've done! You're one of the good folks. Your intellectual curiosity, along with your commitment to truth, is what this world needs.
It appears Marco is another such fellow. AI is a potentially alarming technology, but it's reassuring to understand its limitations. 👍
This is the best explanation I ever had about AI, what it can do and cannot. You guys did an amazing job here.
I tried Marco's little experiment, but instead of asking for an airplane without front windows, I asked for an airplane without a windshield, and got 3 out of 7 pictures that fulfilled that criterion.
@CaptHollister Could you share a link to the generated image? I tried 10 pictures with your prompt and still was unable to get the intended result. Having said that, and as stated in the video, I fully expect it to soon be able to generate the expected result as more data is fed to it.
In the 1960s my mother was a ward clerk in a hospital. It was her job to enter notes into patient records and to take those records to where they were stored.
Hospitals no longer employ such people; it's all done on computers now.
In fact, before the 1960s, "computer" was a job, not a machine.
I work in AI and can confirm that Marco really knows what he's talking about. Still, I feel a few possibilities were missed. One of the good qualities of computers and AI is that they don't get bored, or nervous. What about getting AI to perform the "pilot monitoring" function? We wouldn't want it to make decisions but it certainly could watch everything the pilot and plane are doing and make comments. What about using it in a pilot's walkaround? Unlike a pilot who is perhaps inattentive after making hundreds of uneventful walkarounds, the AI would catch anything out of place.
Finally, AI won't take many jobs any time soon but elevator operator is a really bad example. It is highly unlikely that many operators were retrained as elevator maintenance people. Even if they were, a couple of elevator maintenance people can service hundreds of elevators. Any time technology makes a lot of changes to the world, some people will lose their jobs but new jobs will be created. The emphasis should be on learning new things and not counting on keeping the same job for life.
Yes, that's true, they don't get bored or nervous, but they also don't care whether they or the passengers survive an emergency. I think AI would be a useful helping tool for complicated stuff, including aviation. It has, in a way, been used in many fields in the past with good results, aviation included, so I'd guess it will keep being used in the same manner as the tech improves. But no, it won't ever fly planes on its own. It might take over simplistic or procedural jobs; piloting a plane is neither of those. Procedures were put in place to make complicated stuff easier and safer to manage; the hard part is when you find yourself outside those procedures or predefined parameters. But yeah, a clerk's job? Taken. Police? Taken. Military? Taken. Any job that requires no thinking, just simplistically following procedures, is going to be taken by AI, just like those no-thinking, brainless factory jobs were taken by industrial automation. Scientists are safe, pilots are safe, teachers are safe, and despite the advances in AI art creation, artists are safe as well. AI could help as a tool in all those fields, but it cannot take them over because it's garbage at all of them. And yes, I agree with you that it will help people learn new things, have more time to spare, etc. Although the power structures of the world will resist, since their power depends on others blindly obeying. So whether we'll get AI to help us improve our lives, or whether they first use it against us, remains to be seen.
Perhaps AI could act as a check pilot. How many investigations have you shown where the crew was disoriented, or things changed abruptly and the crew was inputting conflicting controls? Then someone would say a cryptic word instead of “hey, you guys need to put the nose down or the plane is going to stall”. AI can bypass emotions, fear of being wrong, or embarrassment about saying what you mean because the captain is so experienced. Perhaps an AI that has been watching the flight could verbally suggest solutions. Today you rely on buzzers or a light that says one pilot is pulling up while the other is pushing down. The check-pilot AI could step in when the human is frozen due to information bias or tunnel vision. We all understand it happens, but hindsight is 20/20, so let’s get a second set of eyes to watch for a potentially dangerous situation and speak up. This is one application that could help, in my opinion. Perhaps it could even help with traffic control.
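The "plain-language callout instead of a cryptic light" idea above doesn't even need AI; it can be sketched as a simple rule. Here is a minimal illustrative sketch in Python, where the signal names, the normalized stick scale, and the threshold value are all assumptions invented for the example, not real avionics parameters:

```python
# Hypothetical "check pilot" rule: detect opposing pitch inputs from the two
# pilots and produce a plain-language callout rather than a cryptic warning.
# Stick deflection is assumed normalized to -1.0 (full down) .. +1.0 (full up);
# the threshold is an invented value for illustration only.
from typing import Optional

CONFLICT_THRESHOLD = 0.3  # assumed minimum deflection to count as a real input

def check_dual_input(capt_pitch: float, fo_pitch: float) -> Optional[str]:
    """Return a spoken-style callout if the pilots' pitch inputs oppose each
    other beyond the threshold, otherwise None."""
    if capt_pitch > CONFLICT_THRESHOLD and fo_pitch < -CONFLICT_THRESHOLD:
        return "DUAL INPUT: Captain is pulling up while First Officer is pushing down."
    if capt_pitch < -CONFLICT_THRESHOLD and fo_pitch > CONFLICT_THRESHOLD:
        return "DUAL INPUT: Captain is pushing down while First Officer is pulling up."
    return None

print(check_dual_input(0.8, -0.5))  # opposing inputs -> callout
print(check_dual_input(0.1, 0.2))   # inputs agree -> no callout
```

The point of the sketch is that the detection itself is hard-coded logic; where AI might add value is in choosing when and how to phrase the callout so a startled crew actually hears it.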
I think this is finally a realistic view of AI. Why do all the other 'AI experts' and their millionaire/billionaire investors simply go mad and illogical when discussing AI... all hand-waving and hype...
Great video!
Whilst, it is suggested, AI cannot fly the aircraft, there still seems to be a role for an overarching AI to monitor the flight and flag to the pilots (and to the ground) that there is something “weird” about the flight, i.e. that it is operating outside the expected parameters. There have been a number of accidents where pilots did not pick up that they or their aircraft were not doing what would be normal.
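The monitoring idea above can be illustrated with a tiny rule-based sketch: compare live readings against an expected envelope for the current flight phase and flag anything outside it. The phase names, parameter names, and limit values below are all made-up assumptions for the example, not real operational numbers:

```python
# Illustrative "is this flight normal?" monitor. Expected envelopes per flight
# phase are assumed values chosen only to make the example concrete.
EXPECTED = {
    "cruise":   {"altitude_ft": (30000, 41000), "airspeed_kt": (230, 290)},
    "approach": {"altitude_ft": (0, 10000),     "airspeed_kt": (120, 200)},
}

def flag_weirdness(phase: str, readings: dict) -> list:
    """Return a human-readable warning for every parameter that falls
    outside the expected envelope for the given flight phase."""
    warnings = []
    for name, (lo, hi) in EXPECTED[phase].items():
        value = readings[name]
        if not lo <= value <= hi:
            warnings.append(f"{name}={value} outside expected {lo}-{hi} for {phase}")
    return warnings

# e.g. dangerously slow while in cruise:
print(flag_weirdness("cruise", {"altitude_ft": 35000, "airspeed_kt": 180}))
```

A real system would of course derive the envelopes from recorded flight data rather than a hand-written table; the sketch only shows the flag-and-report shape of the idea.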