Boeing new issue with 777X engine support

Sidney Vianna

Post Responsibly
Leader
Admin
So the idea that a cracked part is not a failure, or at least a defect, is as erroneous as it is dangerous. Our words matter. The parts are not intended to crack. This is exactly like the Challenger: the clevis and tang were not intended to separate (as evidenced by soot on the O-rings); the fan disk on flight UA 232 that disintegrated and tore through the one point in the plane where all three hydraulic lines came together wasn't supposed to be cracked. A crack presents a weaker system. A crack will propagate because the part isn't going to get stronger and the stresses aren't going to get weaker. This is a classic stress-strength interaction: either the part was too weak or the stress was too great. Since stress is not a very controllable condition, the path to a solution most likely lies in the strength of the part. This is a fairly straightforward causal-mechanism search: how did the part crack, and how do you prevent cracks in the future? I've led, coached, and trained literally thousands of these in my career in automotive, aerospace, and medical devices. What Boeing will do is something I cannot predict.
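The stress-strength interaction described above can be sketched with a quick Monte Carlo check. All figures here are invented for illustration (they are not Boeing data): both the applied stress and the part strength are treated as normally distributed, and a "failure" is any draw where stress exceeds strength.

```python
import random

random.seed(42)

# Illustrative stress-strength interference model.
# All distribution parameters below are made-up numbers.
N = 100_000
failures = 0
for _ in range(N):
    stress = random.gauss(500, 60)    # applied stress, MPa (hypothetical)
    strength = random.gauss(700, 50)  # part strength, MPa (hypothetical)
    if stress > strength:
        failures += 1

p_fail = failures / N
print(f"Estimated interference probability: {p_fail:.4f}")
```

Note how the overlap of the two distribution tails, not the gap between the means, drives the failure probability; that is why "either the part was too weak or the stress was too great" can both be true at once.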
Yes, obviously a cracked part is a failure. Causes can be multiple, including design, material, production, inspection, excessive vibration, excessive stress, excessive loading, etc. The detection of early, unexpected failures is one of the objectives of a type certification program, exactly what this fleet of 3 aircraft is undergoing at this moment.

Finding these failures prior to regulatory approval and avoiding similar issues during the commercial cycle saves their customers, the airlines, a huge amount of money, disruption and loss of end consumers' goodwill.

The 777X is one of the most complex and sophisticated commercial aircraft types out there. If it were not for the 737 MAX program's catastrophic rollout, the 777X would be flying commercially by now.
 

Tidge

Trusted Information Resource
What process do you folks think Boeing should follow in the root cause investigation?

I can't speak to Boeing, but I would suggest two parallel investigations:

1) The "finger-pointing team" to focus on the faulty part in isolation, to understand if the physical parts meet their specification.

2) The "engineering team" to investigate what the specification of the part should be, given the system and its intended use.

I've worked with too many leaders who prefer to point fingers; only occasionally has the fault been a supplier/inspection error. The finger-pointing team can include the original engineers responsible for the original specification, and also serves the purpose of getting them out of the way of the second team.
 

Wes Bucey

Prophet of Profit
Ronen, Bev, Sidney, and Tidge make valid points. I was imprecise in my use of the sole word "failure" when I was thinking "failure in function," not "discovered flaw in the part which could/would ultimately fail in function."

I do want to call Sidney's attention to the fact Boeing has produced 20 more 777s to this design, and such hubris BEFORE FAA approval could mean extensive and expensive redesign and rework if the root cause is determined to be a design fault.
 


Sidney Vianna

Post Responsibly
Leader
Admin
I do want to call Sidney's attention to the fact Boeing has produced 20 more 777s to this design, and such hubris BEFORE FAA approval could mean extensive and expensive redesign and rework if the root cause is determined to be a design fault.
Whatever the costs are for retrofitting all 20 aircraft with a sound part, it is minuscule compared to the total costs of doing the same years later with a huge fleet already in service.

I will continue to disagree with the underlying criticism, subject of this thread. BCA is being transparent and safety-centric. As I said, this design is hugely complex and anyone who expects this validation cycle of the aircraft without some hiccups is being naive. Let’s not forget that during all the testing of a commercial aircraft type, the hardware goes through extreme conditions. Fully loaded aborted takeoff, for example. So all in all, for the 777X, the main issue is the “snail pace” of the certification schedule, which, in my estimation and as already mentioned, was probably highly impacted by the 737 Max debacle and the strained relationship with the FAA, themselves under huge scrutiny by politicians and the public at large.
 

Wes Bucey

Prophet of Profit
Whatever the costs are for retrofitting all 20 aircraft with a sound part, it is minuscule compared to the total costs of doing the same years later with a huge fleet already in service.

I will continue to disagree with the underlying criticism, subject of this thread. BCA is being transparent and safety-centric. As I said, this design is hugely complex and anyone who expects this validation cycle of the aircraft without some hiccups is being naive. Let’s not forget that during all the testing of a commercial aircraft type, the hardware goes through extreme conditions. Fully loaded aborted takeoff, for example. So all in all, for the 777X, the main issue is the “snail pace” of the certification schedule, which, in my estimation and as already mentioned, was probably highly impacted by the 737 Max debacle and the strained relationship with the FAA, themselves under huge scrutiny by politicians and the public at large.
My comment was essentially what Quality has meant since Deming and Juran: don't make bad products in the first place. YOU CAN'T INSPECT IN QUALITY by culling and replacing parts; that's Band-Aid-and-duct-tape alley-mechanic stuff. My British friends still use the term "penny wise and pound foolish."

My comment was directed at the FMEA PROCESS. IF, and I stress IF, the design of the support system is at fault, the cost will be a lot more for 20 planes already built, impacting supply lines and manufacturing facilities, not to forget the impact on stock investors.
 

Ronen E

Problem Solver
Moderator
I can't speak to Boeing, but I would suggest two parallel investigations:

1) The "finger-pointing team" to focus on the faulty part in isolation, to understand if the physical parts meet their specification.

2) The "engineering team" to investigate what the specification of the part should be, given the system and its intended use.

I've worked with too many leaders who prefer to point fingers; only occasionally has the fault been a supplier/inspection error. The finger-pointing team can include the original engineers responsible for the original specification, and also serves the purpose of getting them out of the way of the second team.
According to this logic, every time a technical failure occurs during validation the engineering team needs to be replaced...? This is not how product development works (or should work, IMO). Peer review, design review, alternative calculations (during verification) and so on are all standard practice in modern development methodology, and are all targeted at getting second opinions, a "fresh pair of eyes", critical thinking, hubris-proofing or whatever you want to call it, as an integral part of the process, not as after-the-fact damage control.

Finding out if a failed part was within spec is a fairly straightforward task (especially in such a well documented and well equipped environment). It's a task for lab technicians and doesn't require a lot of engineering insight (though I agree it's good to have the original designers involved).

To me the tough question is not only whether the spec figures (or 3D shape) were adequate, but more so whether all the critical parameters (including aspects of raw material and fine details of processing, all the way to the finished part) have been called out, and whether the correct in-process tests/inspections have been specified at the right points along the process. In such critical and extremely loaded parts, every minute detail can be the one that makes the difference between critical failure and making it safe to the next preventive maintenance replacement.
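The "was the failed part within spec" check described above can be sketched as a simple conformance comparison. The characteristics, nominals, tolerances and measured values below are all hypothetical, chosen only to show the mechanics:

```python
# Hypothetical spec-conformance check: compare measured characteristics
# of a failed part against its nominal values and symmetric tolerances.
spec = {
    # characteristic: (nominal, tolerance)
    "bore_diameter_mm": (25.00, 0.05),
    "flange_thickness_mm": (8.00, 0.10),
    "hardness_HRC": (38.0, 2.0),
}

measured = {
    "bore_diameter_mm": 25.03,
    "flange_thickness_mm": 8.14,  # outside nominal +/- tolerance
    "hardness_HRC": 37.1,
}

def out_of_spec(spec, measured):
    """Return the characteristics whose measured value falls outside nominal +/- tol."""
    return [
        name for name, (nominal, tol) in spec.items()
        if abs(measured[name] - nominal) > tol
    ]

print(out_of_spec(spec, measured))  # -> ['flange_thickness_mm']
```

Of course, as the post notes, the hard part is not this comparison; it is whether the spec called out all the critical parameters in the first place. A part can pass every check in `spec` and still fail if the characteristic that mattered was never specified.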
 

Ronen E

Problem Solver
Moderator
Causes can be multiple, including design, material, production, inspection, excessive vibration, excessive stress, excessive loading, etc.
Whilst I don't exactly disagree, I think this list is too long. It all comes down to 2 elements: Design and Inspection (I prefer "verification" over "inspection" in this context; or more broadly, just letting true QA do its assigned job).

A failed part (any part, any sort of failure) is a testament that the design was inadequate. How or why it was inadequate are different questions, and the answers don't change the fundamental fact that the design was inadequate. All of this is based on the premise that a part should not go into service if it doesn't meet its specified design (if the design is inadequately specified, or under-defined, that's another issue). This is where the "inspection" (or verification) part comes in: making sure that the part put into service (or testing, for that matter) indeed meets the specified design. So, if that part failed, either the design was inadequate (including under-defined), or the process failed to spot that the part passed through even though it didn't meet the spec. All the rest are derivatives/subsets of these two.

Material specification is part of the design, and process QA should have multiple elements in place along the way, to ensure that all the material characteristics that matter end up within the specified design.

Production in general should be controlled and QAed ("inspected"/verified). Free-range, loose-ended production can't seriously be assigned the title of "the cause". Not in something like building airplanes.

Excessive vibration, stress, loading, <you name it> etc. just means the design job was not done properly. It means that the figures the designers used were not realistic, so when the part eventually encountered the real ones it was unable to cope (again, under the premise that it was manufactured to spec). I blame the design process here because there are two distinct activities/phases in the design process that are supposed to ensure "excessive" is not going to surprise us.

The first is Design Input. This is where the designers come up with, justify, refine and defend the figures they design against (this is intended to be an iterative and ongoing process, throughout development). If the real figures are "excessive", it means the developers under-specified the functional and performance requirements.

The other is Risk Management. Engineers/designers/developers are only human and not all-knowing. It's highly likely that in the design-input refinement process they overlooked something, under-estimated, and so on. That's what risk management is for, and that's where redundancy comes into play (among other methods): so that even if we mis-estimate or mis-assess, when "excessive" hits there is still some headroom to accommodate it. In short, to me the term "excessive" is synonymous with "we didn't conduct our design process the way we should have".
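The headroom argument above can be illustrated with a toy margin calculation (every number here is invented): if the design-input load is under-estimated, the apparent safety factor quietly shrinks when the real load shows up.

```python
# Toy design-margin check with hypothetical figures.
design_load = 100.0     # kN, the load the designers designed against
safety_factor = 1.5     # margin applied to the design-input load
rated_strength = design_load * safety_factor  # 150 kN

# The "excessive" load actually encountered in service:
real_load = 130.0       # kN

# The margin that actually exists, versus the one on paper:
real_margin = rated_strength / real_load

print(f"Design margin: {safety_factor:.2f}, real margin: {real_margin:.2f}")
```

With these made-up figures the paper margin of 1.5 collapses to roughly 1.15 in service; risk management (redundancy, conservatism in the design inputs) is what keeps that residual margin from reaching 1.0.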
 
Last edited:

Tidge

Trusted Information Resource
According to this logic, every time a technical failure occurs during validation the engineering team needs to be replaced...?
I didn't write "replaced", I wrote "get them out of the way" by redirecting them towards a different area of investigation (supply, inspection, manufacturing) than specification development. In my experience, immature organizations (in the CMMI sense) often struggle with such foundational practices.
Peer review, design review, alternative calculations (during verification) and so on are all standard practice in modern development methodology, and are all targeted at getting second opinions, "fresh pair of eyes", critical thinking, hubris-proofing or whatever you want to call it - as an integral part of the process; not as an aftermath damage control.
I suppose the choice is "review the review" or "review the output"?
 

Ronen E

Problem Solver
Moderator
I didn't write "replaced", I wrote "get them out of the way" by redirecting them towards a different area of investigation (supply, inspection, manufacturing) than specification development. In my experience, immature organizations (in the CMMI sense) often struggle with such foundational practices.
To me this is hair splitting / word laundering. If you recommend taking the designers off the design-improvement job (that they were actually on), to do <whatever> (which is not the original design improvement), and you nominate others ("the engineering team" as you called it) to do it, you are practically replacing them.
I suppose the choice is "review the review" or "review the output"?
In the case of peer review, design review etc. - it's always "review the output". At least that's the intent.
"Review the review" belongs in the realm of internal audit.
Not to be mixed.
 

Tidge

Trusted Information Resource
Presumably... this part already went through the review as described by @Ronen E ... and since the question was "how to investigate", not "how to implement design controls", I don't think my response is "hair-splitting".

I'm not going to apologize for suggesting that the design team responsible for a design which has failed be directed towards investigating the non-design issues (e.g. supplier controls) while a fresh pair of eyes is directed towards the design output. It's not as if stresses in metal parts are black magic. Perhaps others' experiences are different, but my experience with many engineers (and business unit managers) is that the default opinion is "someone else did something to my perfectly specified part that caused it to fail", so direct those people in a manner aligned with their instincts.
 