Product Development Systems & Solutions Inc.
News from PDSS Inc.
"Leading the Future in Product Development" 
May 2015 - Vol. 8, Issue 5
In This Issue
The Remedy for Rapid Build-Test-Fix Problems
This month we continue exploring the shortcomings of the rapid Build-Test-Fix product development method. The remedy for the problems inherent in Build-Test-Fix is, of course, the deliberate practice of Critical Parameter Development and Management. The first part of this article appeared in the April 2015 issue.
-Carol
The Remedy for Rapid Build-Test-Fix Problems

Last month, we began a discussion of a product development process (PDP) known as rapid Build-Test-Fix. It is characterized by rapid cycles of building and testing a prototype, then determining and assigning corrective actions--plus new development--for the next iteration of Build-Test-Fix, moving the product toward its pre-determined launch date. The project is run by reacting to discoveries in each cycle rather than by following a more comprehensive, planned approach. It can quickly be overcome by unfinished tasks deferred until after the product is launched and in production--a costly plan!


Intentional activities in the rapid Build-Test-Fix model include:

  1. Prototype builds, plus the prints and specification documents that enable the builds.
  2. Defining test protocols and conducting the tests on the prototype.
  3. Identifying shortfalls, failures, and defective prototype parts/materials; counting defect-based performance failures using quality attribute data that is easily gathered.
  4. Reactive Iteration (although the number of cycles is largely unknown until the short-falls are encountered).
  5. Low-cost supplier selection and use.
  6. Major milestone and launch dates. Deterministic, date-based scheduling of Milestone Reviews.
  7. Matrixed use of available resources, with reasonable amounts and appropriate types of experience.
  8. Documenting that the product performance is within specification limits (targets and ranges of acceptance).

In rapid Build-Test-Fix, the following activities are left to chance, assumptions, undirected work or are simply not done:

  1. The team's proficiency in engineering, marketing, manufacturing and supply chain management skills.
  2. Training and coaching the teams in specific development tools, methods, and best practices (both competence and use).
  3. Understanding of parametric design input-output relationships (physics-based cause and effect) relative to functional performance requirements (includes Critical Parameter identification, development and management).
  4. Preventive, planned workflow design and task execution.
  5. Designed contingency plans, including trigger points defined by measuring and tracking leading indicators of schedule slippage and changes in the rate of learning about product- and sub-level design parametric relationships.
  6. Use of continuous variable measures of physical functions down through the system's hierarchy of inputs and outputs. Dependence on measurement systems and data acquisition skills using scalar and vector-based variables that directly sense the quantitative effects of intentional changes in design control variables.
  7. Optimization of robust and tunable sub-level and integrated system functions under realistically stressful production, shipping and use conditions that drive measurable reliability growth.
  8. Design and optimization of production tolerances to support functional reliability and product robustness in light of statistical methods from modelling and simulation, as well as adjustable development prototypes.
  9. Identification and management of Critical Parameters in the life-cycle management process after launch.
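
To make item 6 above concrete: a small designed experiment on a continuous output response quantifies input-output relationships directly, in a way that counting defects cannot. The sketch below uses purely illustrative factor codings and response numbers (not data from any real project) to compute main effects and the interaction from a two-factor, two-level experiment.

```python
# Illustrative two-factor, two-level designed experiment (hypothetical data).
# Coded factor levels (-1 = low, +1 = high) for design factors A and B,
# with a continuous output response measured at each run.
runs = [
    # (A,  B,  response)
    (-1, -1, 12.1),
    (+1, -1, 15.8),
    (-1, +1, 11.9),
    (+1, +1, 19.6),
]

def main_effect(runs, factor_index):
    """Average response at the factor's high level minus its low level."""
    high = [r[2] for r in runs if r[factor_index] == +1]
    low = [r[2] for r in runs if r[factor_index] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

effect_a = main_effect(runs, 0)  # how much factor A shifts the output
effect_b = main_effect(runs, 1)  # how much factor B shifts the output

# Interaction effect: does A's effect depend on B's level?
interaction = (runs[0][2] + runs[3][2] - runs[1][2] - runs[2][2]) / 2
```

With only four runs of continuous data, the team learns the direction and magnitude of each design factor's effect, plus whether the factors interact--exactly the parametric cause-and-effect knowledge that attribute counting leaves to chance.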

The combination of what is planned and what is left to chance in the rapid Build-Test-Fix model leads to the anemic development practices listed below:

  1. System integration occurs before subtle sub-level design hyper-sensitivities and anti-synergistic interactions are discovered. Sometimes the product is launched into the marketplace before these are discovered.
  2. The path of nominal condition design modeling, simulation, and adjustable prototype construction for application of Designed Experiments is not followed.
  3. Robustness stress experimentation using DOE methods is bypassed both at the sub-level and system level.
  4. Metrics tracked during development do not measure how the various design elements work and interrelate to fulfill system functional performance. Instead, they measure quality attributes and failure modes.
  5. Appropriately designed data acquisition systems are forgone for the ease and convenience of counting failures and quality attributes. Because copious attribute data is needed for any kind of statistical decision-making (inference), establishing direct links between failure modes and root causes is extremely difficult.
  6. Design Failure Modes and Effects Analysis (DFMEA) is used to identify and fix failure modes. This lacks the foresight of measuring impending failures (rates of changes in parametric relationships that ultimately lead to failure) and preventing them. DFMEA should be used for developing Designed Experiments to understand the root causes of functional parameter variable behaviors. These are measured using the mean and standard deviation of continuous variable output responses. DFMEAs should lead to modeling, simulation and designed experimentation.
  7. Tolerances are frequently leveraged from legacy designs and reference standards as opposed to analytical and experimental exploration for optimum nominal and range set points in support of functional capability.
  8. Critical parameters, both in the product design and manufacturing and assembly processes, are not defined, understood, or managed for optimized capability (Cp/Cpk).
  9. Applied statistical methods for product and process development are under-utilized if not completely absent.
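
On item 8, capability indices are straightforward to compute once a critical parameter is measured as a continuous variable. The sketch below applies the standard Cp/Cpk formulas to hypothetical measurements and specification limits (illustrative numbers only, not from any client project).

```python
import statistics

def capability(samples, lsl, usl):
    """Standard capability indices for a continuous critical parameter.

    Cp  = (USL - LSL) / (6 * sigma)                  -- potential capability
    Cpk = min(USL - mean, mean - LSL) / (3 * sigma)  -- accounts for centering
    """
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical measurements of a critical parameter, spec limits 9.0-11.0.
data = [10.1, 10.3, 9.9, 10.2, 10.4, 10.0, 10.3, 10.1, 10.2, 10.5]
cp, cpk = capability(data, lsl=9.0, usl=11.0)
```

A well-centered, capable process shows Cpk close to Cp; a process running off-target shows Cpk noticeably below Cp--a signal that is invisible if only pass/fail counts are recorded.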

A great deal of what could and should be done to understand a product and its production systems is not done during development. The product is rushed past tasks, tools, methods and best practices for the sake of a pre-determined launch date.

The Build-Test-Fix development strategy frequently carries over into production, continuing the reactive, corrective-action cycles during Operations. There is no Design Guide to hand off to the production, service, and supply chain support organizations. The costs of fixing the product land at their most expensive point--after launch--on the cost control curve for the product's life cycle.

Incomplete, rushed development almost always ensures that facts are missing--facts that should have been proactively and purposely learned for the sake of the customer as well as the company. All of the negative results of practicing the rapid Build-Test-Fix development method can be avoided by practicing Critical Parameter Development and Management!

 
Is there a topic you'd like us to write about? Have a question? We appreciate your feedback and suggestions! Simply "reply-to" this email. Thank you!
  
Sincerely,
Carol Biesemeyer
Business Manager and Newsletter Editor
Product Development Systems & Solutions Inc.
About PDSS Inc.
Product Development Systems & Solutions (PDSS) Inc. is a professional services firm dedicated to assisting companies that design and manufacture complex products. We help our clients accelerate their organic growth and achieve sustainable competitive advantage through functional excellence in product development and product line management.
  
Copyright 2015, PDSS Inc.
Join Our Mailing List!
 
See PDSS Inc.'s Archived E-Newsletters