Aircraft Cockpit Automation - Good or Bad

somashekar

Staff member
Super Moderator
#11
Aircraft Cockpit Automation is Good.
But where do you draw the line between pilot and autopilot?
The recent incidents seem like a literal fistfight between pilot and autopilot. That was never the intention of cockpit automation.
Compare cruise control in a car to its driver: the driver gets back control the moment he operates any of the controls at his command, and it is the driver who decides when to engage cruise control in the first place.
Likewise, the pilot has to get back control of the aircraft the moment he operates any of the controls at his command.
If any of the inputs to the autopilot is missing or malfunctioning, an alarm should certainly indicate this and hand complete control back to the pilot. Much of a pilot's training is not about looking out of the glass but about reading the instrument panel. Yet he still looks out and around to confirm that he is on the right course. If he cannot see under night conditions, he still has several indicators that can help him maneuver and take a safe course. This becomes even more essential in abnormal conditions.
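To make the handover rule concrete, here is a minimal, purely illustrative sketch (the names, sensors and thresholds are hypothetical, not any real autopilot's logic) of the two behaviors above: any manual input immediately disengages the autopilot, and any missing input raises an alarm and returns control to the pilot.

```python
# Illustrative sketch only. Rule 1: any manual input hands control back to
# the pilot, just as touching the brake cancels cruise control in a car.
# Rule 2: a missing sensor input raises an alarm and disengages, rather
# than letting the autopilot act on bad data. All names are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorInputs:
    airspeed: Optional[float]          # knots; None if not reporting
    altitude: Optional[float]          # feet
    angle_of_attack: Optional[float]   # degrees

def autopilot_step(sensors: SensorInputs, pilot_touched_controls: bool,
                   autopilot_engaged: bool) -> tuple[bool, list[str]]:
    """Return (autopilot_engaged, alarms) for this control cycle."""
    alarms: list[str] = []

    # Rule 1: manual input wins immediately.
    if pilot_touched_controls:
        return False, alarms

    # Rule 2: missing input -> alarm, then give the pilot full control.
    for name, value in (("airspeed", sensors.airspeed),
                        ("altitude", sensors.altitude),
                        ("angle_of_attack", sensors.angle_of_attack)):
        if value is None:
            alarms.append(f"{name.upper()} INPUT LOST - AUTOPILOT DISENGAGED")
            return False, alarms

    return autopilot_engaged, alarms
```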
If you remove the man from the machine under almost all conditions, why have him as the pilot in the first place?
 

Marc

Captain Nice
Staff member
Admin
#12
If you remove the man from the machine under almost all conditions, why have him as the pilot in the first place?
Pilots will probably be essentially eliminated eventually, "replaced" by "technicians" who monitor the systems.

As with the evolution of what will eventually be fully autonomous cars, it will take time - but some day the systems will be reliable enough. For example: GM's driverless car petition to NHTSA enters public comment phase

Truck drivers like me will soon be replaced by automation. You're next | Finn Murphy
 

Ronen E

Problem Solver
Staff member
Super Moderator
#13
Driverless trucks are already a reality in some Australian mine sites.

Besides, it's not an issue of absolute safety vs. some low failure rate (the former being an unrealistic aspiration, IMO). The right perspective is overall safety with automation vs. without. Statistically, aviation's harm rate is extremely low and keeps declining, and I believe the harm rate with autonomous cars will quite quickly become much lower than it is today.
 

Randy

Super Moderator
#14
From the movie "Fail Safe" (1964):
[On the reliability of computers.]
General Bogan: Mr. Knapp here knows as much about electronic gear as anyone. He'd like to say something.
Gordon Knapp: The more complex an electronic system gets, the more accident prone it is. Sooner or later it breaks down.
Secretary Swenson: What breaks down?
Gordon Knapp: A transistor blows . . . a condenser burns out . . . sometimes they just get tired--like people.
Professor Groeteschele: Mr. Knapp overlooks one factor, the machines are supervised by humans. Even if the machine fails a human can always correct the mistake.
Gordon Knapp: I wish you were right. The fact is, the machines work so fast . . . they are so intricate . . . the mistakes they make are so subtle . . . that very often a human being just can't know whether a machine is lying or telling the truth.

Fail-Safe (1964) - IMDb
 

optomist1

A Sea of Statistics
Trusted
#15
I agree with most of this, yet over-reliance on microprocessors and software can lead to decreased proficiency in basic flight skills... and now, as is being revealed (at least according to reports), the FAA ceded review approval for MCAS to the airframe manufacturer?
 

Ronen E

Problem Solver
Staff member
Super Moderator
#16
I know quite little about flying airplanes and pilots' competency or training, so please don't take what I'm about to say too specifically. I do still think, though, that the following applies in general.

I think the approach of "technology will always fail at some rate - albeit negligible by any standard - therefore we should post a human guard to oversee it" is outdated, impractical, and might actually result in higher overall risk. Randy's post is a nice illustration, and it's quite amazing someone understood this so long ago... IMO the way to deal with tech risks is more risk management, more hypothesizing and simulating (which becomes so much more powerful now with the advent of AI), more redundancy, more testing/verification/validation, and so on.

I think that building an autonomous car and then expecting an idle driver to stay alert and in shape to identify and properly react to emerging risks and failures is unrealistic and self-defeating. If I get into an autonomous car, I want to be able to tell it where I want to go, then sleep or read a book until I get there. I don't want to nanny it.
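One common form of that redundancy is triplex sensing with voting: three independent sensors measure the same quantity, and the system uses the median, so no single wild reading can drive the output. A minimal, purely illustrative sketch (the names and threshold are made up):

```python
# Illustrative sketch of triplex sensor redundancy with median voting.
# The median is immune to any single faulty reading; a wide spread between
# the three readings means at least one sensor is suspect and a crew alert
# should be raised. The disagree threshold is hypothetical.

import statistics

def vote(readings: list[float], disagree_limit: float = 5.0):
    """Return (voted_value, healthy)."""
    voted = statistics.median(readings)
    healthy = (max(readings) - min(readings)) <= disagree_limit
    return voted, healthy

# One failed sensor (the wild 350.0 reading) is simply outvoted:
value, healthy = vote([152.1, 151.8, 350.0])
print(value, healthy)   # 152.1 False -> keep flying on 152.1, alert the crew
```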

The really disturbing question is how we make sure the algorithms are right - the car will need to make life-and-death decisions at some point, sometimes choosing between two "bad" options, even in the absence of any internal failure. These may involve moral issues rather than technical ones.
 

optomist1

A Sea of Statistics
Trusted
#17
Great points... in both cases (less so in the aviation arena) we are cutting new paths, especially in the past 7-10 years. A colleague of mine who served in the military claims that in many situations when pseudo-autonomous systems go south or glitch, "battle override, manual intervention" is an option, albeit a last resort. Your thoughts?
 

Marc

Captain Nice
Staff member
Admin
#18
The problem is recognizing when a pseudo-autonomous system "goes south or glitches". And often a symptom has multiple potential causes.

In the recent crashes the pilots apparently did not understand what was happening. Another example is Air France Flight 447 - Wikipedia, where the pitot tubes froze up.

When I learned to fly years ago, the mantra was "always trust your instruments". In general that is sound advice. The problem is: what if one or more sensors fail, especially in today's highly technological environment? Then, of course, comes the question - is this a sensor failure confusing the computer, or is the computer itself faulting for some reason? Things can start going wrong so fast that it can be difficult to diagnose what is happening.
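One way engineers try to separate those two cases is a command/monitor pair: two computers independently compute the same output from the same inputs, so sensor-vs-sensor disagreement points at a sensor, while computer-vs-computer disagreement points at a computer. A purely illustrative sketch (all names and tolerances are made up, not any real avionics architecture):

```python
# Illustrative sketch only (hypothetical, greatly simplified).
# Sensor disagree  -> suspect a sensor fault.
# Channel disagree (same inputs, different outputs) -> suspect a computer.
# Either way the automation drops out and the pilot gets an alarm.

def diagnose(sensor_a: float, sensor_b: float,
             cmd_output: float, mon_output: float,
             sensor_tol: float = 5.0, channel_tol: float = 0.5) -> str:
    if abs(sensor_a - sensor_b) > sensor_tol:
        return "SENSOR DISAGREE - disengage, alert crew"
    if abs(cmd_output - mon_output) > channel_tol:
        return "CHANNEL DISAGREE (computer fault?) - disengage, alert crew"
    return "OK"

print(diagnose(151.8, 152.1, 10.2, 10.3))   # OK
print(diagnose(151.8, 251.8, 10.2, 10.3))   # SENSOR DISAGREE ...
```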

But to your post - a manual override is essential. I was watching a report on the news this morning. A pilot was in a simulator and pointed to two switches. He said: "Had the pilots understood the aircraft systems, and had the pilots understood what was happening, by throwing these two easy-to-reach switches they would have disengaged the autopilot and regained full manual control of the airplane." Pilots, on the other hand, are complaining that Boeing did not tell them about the system details.

As a pilot I can tell you that it is one thing to be in a simulator - it is quite another to experience events happening in real time. There can be confusion and panic, no matter how experienced you are. There are times when someone can say, "Well, if they had just...". It all sounds so simple until you experience a failure, or multiple failures, in flight. Another aspect, as I said above, is that a symptom often has multiple potential causes (thus all the checklists).
 

optomist1

A Sea of Statistics
Trusted
#19
Great post. Although I likely have much less left-seat time than you folks, I got pretty much the same advice from my instructor, a former P-51/F-4 pilot: where instruments are present, one can always use the others to cross-check. Yet, as someone once removed, re: AF 447 - how does one design and certify a $150M+ aircraft and NOT require multiple heated pitot tubes? As I recall they were not heated, yet icing conditions and their impact have been common knowledge, right? Re: fuel - GE, Pratt, Rolls-Royce and the airframers must pass pretty intense icing-at-altitude tests. Set me straight here if I'm off base... a pitot-tube airspeed system is, say, $30K or more; how much more would heating it cost?
 

Miner

Forum Moderator
Staff member
Admin
#20
Despite the saying that nothing is certain but death and taxes, I would add that equipment failure and software bugs are also certain. It's only the timing that is uncertain. No matter how good the software is, it is only as good as how well the programmer foresaw all possible eventualities, and no one is good enough to foresee them all.

As long as driverless cars share the roads with cars with drivers, there will be problems. A coworker was bragging about his new car that automatically applied the brakes if the car ahead slowed down. A week later, he was complaining about the same feature because another driver cut across three lanes of traffic (Chicago) and caused his car to brake suddenly and hard. A human driver would have recognized that type of behavior and either let up on the gas or braked gently until the car passed across the lanes. And even if 100% of cars are driverless, how well will they handle black ice, wind shear, a flooded road, etc.?
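To illustrate why the feature over-reacted: a naive rule brakes on closing distance alone, while a rule that also looks at the intruder's lateral motion - closer to what the human driver in the story did - recognizes that the car will clear the lane. A purely illustrative sketch with made-up numbers and thresholds:

```python
# Illustrative sketch only: why a naive automatic-braking rule slams the
# brakes on a car cutting across the lanes, while considering the
# intruder's sideways motion avoids the hard stop. Numbers are made up.

def naive_brake(gap_m: float, closing_mps: float) -> bool:
    # Brake hard whenever time-to-collision drops below 2 seconds.
    return closing_mps > 0 and gap_m / closing_mps < 2.0

def lateral_aware_brake(gap_m: float, closing_mps: float,
                        lateral_offset_m: float, lateral_mps: float) -> bool:
    if not naive_brake(gap_m, closing_mps):
        return False
    # If the intruder is sliding sideways fast enough to be clear of our
    # lane (assume ~2 m half-width) before we reach it, ease off instead.
    time_to_reach = gap_m / closing_mps
    will_clear_lane = abs(lateral_offset_m + lateral_mps * time_to_reach) > 2.0
    return not will_clear_lane

# A car cuts across the lanes: 20 m ahead, closing at 15 m/s,
# but moving sideways at 5 m/s.
print(naive_brake(20, 15))                     # True  - sudden hard braking
print(lateral_aware_brake(20, 15, 0.0, 5.0))   # False - it will clear the lane
```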

The same thing applies to planes. Software is great for routine, repetitive situations, but situations will always arise that are beyond the software's programmed limits. AI isn't here yet.
 
