Product Development Systems & Solutions Inc.
News from PDSS Inc.
"Leading the Future in Product Development" 
September 2015 - Vol. 8, Issue 9
In This Issue
Data Integrity: Adjustable Prototypes (3 of 3)
More About Richard Feynman
The summer series on data integrity in technology and product development concludes with a discussion of adjustable prototypes. Skip cites one of his favorite physicists, Dr. Richard Feynman!
-Carol
Data Integrity: Adjustable Prototypes (3 of 3)
This series covers three interrelated topics as they affect the integrity of functional response data during development of technology and product performance capability. They are:
  1. Measurement Systems (July newsletter)
  2. Sample Size (August newsletter)
  3. Adjustable Prototypes
Our data integrity series concludes with Adjustable Prototypes. The best way we have to validate our ideas, concepts, and analytical math models (our best guess about the expected behavior of the future product) is to construct physical versions of them. To set the stage for how to build trust during the development of knowledge, we can do no better than to refer to the Scientific Method as explained by the late, great Dr. Richard Feynman.
 
Feynman says, "In general, we look for a new law by the following process. First, we guess it (audience laughter), no, don't laugh, that's really true. Then we compute the consequences of the guess, to see what, if this is right, if this law we guess is right, to see what it would imply and then we compare the computation results to nature, or we say compare to experiment or experience, compare it directly with observations to see if it works. If it disagrees with the experiment, it's wrong. In that simple statement is the key to science. It doesn't make any difference how beautiful your guess is, it doesn't matter how smart you are who made the guess, or what his name is... If it disagrees with experiment, it's wrong. That's all there is to it." (This is from a video of Dr. Feynman giving a lecture at Cornell; see the YouTube video of the lecture.)
 
At the beginning of product development we generate a number of ideas and convert them into better, clearer expressions of reality, which we call concepts. Next we put them through the Pugh Concept Evaluation and Selection process to search for evidence of feasibility and superiority (without actually building and running them). We down-select the best hybrid concept that meets our selection criteria against the datum concept we are trying to beat. This concept is what Feynman calls a "guess": our attempt to hypothesize how best to make functions happen in light of a set of requirements. Then we parametrize the selected concept, getting very specific about the mechanisms and components available, so that we can analytically represent how our highly feasible concept functionally works. Summarizing: ideas are converted into detailed expressions we call concepts, and concepts are then converted into math models that attempt to explain how they function to produce a desired, trusted, repeatable future result: Y = f(Xs). This is what Feynman calls a "computation result". Next we must generate a real version of the concept to explore its nature: its physical, functional behavior. This, Feynman calls "experiment, experience, observation"! In Feynman's view, if our math disagrees with our physical results, the math is to be considered "wrong".
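Feynman's guess-compute-compare loop can be sketched in a few lines of Python. Everything here is a hypothetical placeholder (the linear "law," the hypothetical spring data, and the tolerance), not anything from a real development program:

```python
# Guess -> compute -> compare, in the spirit of Feynman's description.
# Hypothetical "guess": a linear law Y = k * X for spring force, with k = 2.0 N/mm.
def model(x, k=2.0):
    """The 'computation of consequences' of our guessed law."""
    return k * x

# Hypothetical (X, measured Y) observations from a physical prototype.
observed = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
tolerance = 0.5  # assumed limit of the measurement system

# Compare computation to observation: any disagreement means the guess is wrong.
disagrees = any(abs(model(x) - y) > tolerance for x, y in observed)
verdict = "guess is wrong" if disagrees else "guess survives, for now"
print(verdict)
```

In a real program the comparison would use a proper statistical test against the measurement system's known capability (the subject of the July newsletter), not a fixed tolerance.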
With all that said, our focus in this final part of this three-part series is the linkage and synchronicity among a physically adjustable, testable prototype, the sample size, and the measurement system used to generate the data.
Here is one way to design and build physical prototypes:
  1. Build a static, unchangeable prototype that is the best rendition of your preferred idea/concept/math model progression. Test it against your requirements to see if it meets them. This can be done with a simple, inexpensive one-sample t-Test of the sample mean vs. your target value. If the test is successful, you are done and you can ship the product. If not, proceed to step 2.
  2. The previous test failed to meet the requirement. Revisit the design by walking back through the idea/concept/math model representation stream to identify candidate flaws or weaknesses. Consider and agree upon alterations by conducting a design corrective action review, then produce a new, improved static version of the prototype. Now a new kind of test has to be run: a two-sample t-Test in which the means of the first and second static prototypes are compared to see if the new one is better than the old one.
  3. If the first two static prototypes still fall short of the required performance, continue the "build-test-fix" iterations until the design finally meets the requirements.
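The two tests named in steps 1 and 2 can be sketched with nothing but the Python standard library. The prototype data, the 50 N target, and the critical value are all hypothetical illustrations, not figures from the article:

```python
import math
import statistics as st

def one_sample_t(sample, target):
    """t statistic for H0: sample mean equals the target (step 1)."""
    n = len(sample)
    return (st.mean(sample) - target) / (st.stdev(sample) / math.sqrt(n))

def two_sample_t(a, b):
    """Welch t statistic for H0: the two prototype means are equal (step 2)."""
    se = math.sqrt(st.variance(a) / len(a) + st.variance(b) / len(b))
    return (st.mean(b) - st.mean(a)) / se

# Hypothetical output force (N) from ten runs of each static prototype.
proto1 = [48.1, 47.6, 48.9, 47.8, 48.3, 47.5, 48.0, 48.4, 47.9, 48.2]
proto2 = [49.6, 50.1, 49.8, 50.3, 49.5, 50.0, 49.9, 50.2, 49.7, 50.4]
target = 50.0

t1 = one_sample_t(proto1, target)   # step 1: does prototype 1 hit the target?
t2 = two_sample_t(proto1, proto2)   # step 2: is prototype 2 better than 1?
# |t| beyond roughly 2.26 (two-tailed, alpha = 0.05, df = 9) signals a real difference.
```

With these numbers, t1 is large and negative (prototype 1 misses the target, so you proceed to step 2) and t2 is large and positive (prototype 2 is a genuine improvement). A statistics package would also report p-values; the t statistics alone make the logic of the two steps visible.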
This is an unfortunately common design iteration called the build-test-fix cycle. The good news is that it generally works, eventually. It takes a long time, ever-deepening experience, strong technical judgment, and a bit of luck. It is often practiced on designs that are run on lab benches as isolated subassemblies and subsystems under nominal conditions. When these building blocks are integrated as static, bolt-together elements, even more build-test-fix iterations are required to find and fix the weaknesses, sensitivities, unintended consequences, and anomalies hidden inside and across the interface relationships the integrated system contains. The build-test-fix method searches and repairs under the banner of "rapid" corrective action. What is predictable, from a project management view, is a broad range of iterative cycles that are very hard to quantify in any formal schedule. All you can really prepare for is a lot of sequential corrective actions. Stress testing is generally limited to system reliability tests that contain limited amounts of accelerated or stress conditions, used to provoke additional rounds of corrective action. This is why there are so many product recalls.
The Value of Adjustable, Modular Prototypes
Constructing functional flow diagrams, parameter and noise diagrams, Design FMEAs and, for complex systems, Fault Tree Analyses, all prior to the final design of subassembly and subsystem prototypes, is the basis for preventing the problems that cause serial rounds of build-test-fix cycles and long development cycle times.
When prototype architectures include physically adjustable parameters and modular design elements that are easily replaced during the runs of Designed Experiments, carefully thought-out experimental learning paths can be exploited to shorten the time it takes to learn how to control the design elements (subassemblies and subsystems). Learning efficiency is greatly enhanced by a sequential flow of carefully planned, designed experiments that are first run under nominal conditions.
Once the adjustable/modular prototypes yield set points that meet nominal requirements, a new round of experiments is designed to explore additional set point configurations that leave the design's functions insensitive to stressful sources of variation we call "noise". This learning process is called Robust Design. Robustness DOEs are first done on subassemblies and subsystems. Once we have learned how to make them robust to noise, they are integrated as adjustable prototypes into a functional system. A final round of robustness experimentation can then explore system-level balancing set points that use the adjustability of the sub-level designs to make compromises for the best overall system performance.
This enables reliability development while concurrently running Duane Reliability Growth experiments on the maturing, integrated system, all prior to final reliability testing, which assesses the reliability developed using the adjustable prototypes. Once the design set points are frozen, normal and accelerated reliability tests can be run to verify that the system's reliability meets expectations in the final phase of product development.
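The core of a robustness DOE on an adjustable prototype can be sketched as a small inner (control) array crossed with an outer (noise) array. The response function, factor levels, and robustness score below are all hypothetical, chosen so that one control factor damps the noise sensitivity:

```python
import itertools
import statistics as st

# Hypothetical prototype response: two adjustable control factors (a, b)
# and one noise factor n. Raising 'a' reduces sensitivity to n.
def response(a, b, n):
    return 10 + 2 * a + 1.5 * b + (0.8 - 0.3 * a) * n

controls = list(itertools.product([0, 1, 2], [0, 1, 2]))  # inner array: set points
noises = [-1, 0, 1]                                       # outer array: noise levels

def score(set_point):
    """Run the full outer noise array at one set point; smaller spread = more robust."""
    a, b = set_point
    ys = [response(a, b, n) for n in noises]
    return st.stdev(ys)

# Choose the adjustable set point whose function is least sensitive to noise.
best = min(controls, key=score)
```

In practice the inner array would be a designed fraction rather than a full factorial, and the robustness metric would also weigh the mean against its target; the sketch only shows the inner-times-outer structure that adjustable prototypes make cheap to explore.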
Without capable measurement systems, proper sample sizes, and adjustable/modular prototypes, development will take too long, yield anemic, under-developed learning, and almost certainly lead to costly ongoing development cycles long after the product has been claimed ready for launch. This is why post-launch design re-work, and the tying up of resources it requires, is so common in many forms of the build-test-fix strategy.
If you want to go reasonably fast during product development, use the methods of Critical Parameter Development we are suggesting here. Do things right the first time and you will cycle through the development work flow as quickly as the laws of physics will support!
 
More About Richard Feynman
Here's one of my favorite explanations by Richard Feynman about a very popular "guess" people like to pursue: his telling of his response to a person he met who believed that UFO sightings were real evidence of extraterrestrial life visiting Earth.
"So I said I don't believe in flying saucers. My antagonist says 'Is it impossible that there are flying saucers? Can you prove it's impossible?' I said no, I can't prove it's impossible. It's just very unlikely!" Feynman goes on to say that it is scientific to say what is more likely and what is less likely. Feynman summed up his reply to the man who believed in flying saucers like this:
"It is much more likely that the reports on flying saucers are the result of the known irrational characteristics of terrestrial intelligence, rather than the unknown rational efforts of extraterrestrial intelligence. It's just more likely, that's all. And it's a good guess. We always try to guess the most likely explanation, keeping in the back of our mind that if it doesn't work, then we must discuss the other possibilities."

This is from Feynman's book, The Character of Physical Law. Other books by Feynman I recommend are:

The Feynman Lectures on Physics,

Surely You're Joking, Mr. Feynman and

What Do You Care What Other People Think?

Is there a topic you'd like us to write about? Have a question? We appreciate your feedback and suggestions! Simply "reply-to" this email. Thank you!
  
Sincerely,
Carol Biesemeyer
Business Manager and Newsletter Editor
Product Development Systems & Solutions Inc.
About PDSS Inc.
Product Development Systems & Solutions (PDSS) Inc. is a professional services firm dedicated to assisting companies that design and manufacture complex products. We help our clients accelerate their organic growth and achieve sustainable competitive advantage through functional excellence in product development and product line management.
  
Copyright 2015, PDSS Inc.
Join Our Mailing List!
 
See PDSS Inc.'s Archived E-Newsletters