This series covers three interrelated topics as they affect the integrity of functional response data during the development of technology and product performance capability:
- Measurement Systems
- Sample Size
- Adjustable Prototypes
Last month's article addressed measurement systems. In this article, we address sample size: collecting the appropriate amount of data to learn about the parametric relationships that control the baseline and robust (stressed) functional performance of our systems, subsystems and subassemblies. Sample size is important because:
- It establishes a context for stating the risk associated with an assertion (hypothesis) we are testing.
- It commits material and human resource costs to the project for running Characterization Studies, Designed Experiments or t-Tests.
- It commits time to the project schedule for conducting the aforementioned tasks.
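To make the risk-versus-resources trade concrete, here is a minimal planning sketch (not from the article; the function name and default values are illustrative) that estimates the per-group sample size needed to detect a shift of `delta_sd` standard deviations between two set points, using the standard normal-approximation formula for a two-sided, two-sample comparison:

```python
import math
from statistics import NormalDist

def samples_per_group(delta_sd, alpha=0.05, power=0.80):
    """Approximate n per group to detect a shift of delta_sd standard
    deviations, via n = 2 * ((z_{1-alpha/2} + z_{power}) / delta_sd)^2."""
    z = NormalDist().inv_cdf
    n = 2.0 * ((z(1 - alpha / 2) + z(power)) / delta_sd) ** 2
    return math.ceil(n)

print(samples_per_group(0.5))  # medium (0.5 sd) shift: 63 per group
print(samples_per_group(1.0))  # large (1.0 sd) shift: 16 per group
```

Note how the required n grows roughly with the inverse square of the effect size: halving the detectable shift quadruples the runs, which is exactly the cost/time tension the bullets above describe.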
We begin with modeling and simulation (M&S) to tell us which parameters are important and well behaved (we call them ECO: Easy, Common & Old) and which parameters are candidates for critical status because they are not well understood (we call them NUD: New, Unique & Difficult). Physical prototypes come into play after initial math modeling and Monte Carlo simulation have produced reasonable estimates of how the design parameters work together to control the NUD "Y as a function of X" relationships. From there, we must run iterative cycles of M&S and physical experimentation, verifying that the analytical and empirical models converge toward agreement.
Sample sizes are easy to select during M&S, when Monte Carlo simulations are run on math models: many thousands of cycles can be executed cost-effectively by randomly iterating the probabilistically exercised "Y as a function of X" relationships, which are grounded entirely in the laws of physics, chemistry and so on. When we move on to designing, building, instrumenting and using adjustable, modular prototypes, selecting the number of sample "runs" becomes more challenging. It is here that risk, cost and time are negotiated between the technical team and the project manager.
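The Monte Carlo step described above can be sketched in a few lines. The transfer function and parameter values below are invented for illustration (two resistances in parallel, each with manufacturing variation), not taken from the article; the point is that 10,000 cycles of a "Y as a function of X" model cost almost nothing:

```python
import random
import statistics

def f(x1, x2):
    """Illustrative transfer function: two resistances in parallel."""
    return x1 * x2 / (x1 + x2)

random.seed(1)  # fixed seed so the run is repeatable

# Propagate variation in X1 ~ N(100, 2) and X2 ~ N(220, 4) through Y = f(X1, X2)
ys = [f(random.gauss(100.0, 2.0), random.gauss(220.0, 4.0))
      for _ in range(10_000)]

print(f"mean Y = {statistics.mean(ys):.2f}, sd Y = {statistics.stdev(ys):.2f}")
```

Running the same study on physical prototypes would mean 10,000 builds or runs, which is why the sample-size negotiation only begins once hardware enters the picture.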
Larger sample sizes increase our resolving power to detect engineering-significant differences in experimental results. Sample sizes that are too small risk failing to detect a meaningful change when you need to see one; you could miss the truth about a set-point change that matters to your design. This is particularly true when the differences are subtle. Your understanding of adjustability, hypersensitivity and X-variable contributions to Y-variable behavior, in light of independence and interactivity, may be compromised.
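The "resolving power" point can be demonstrated by simulation. The sketch below (all numbers invented for illustration) estimates how often a true 0.5-standard-deviation shift in a set point would actually be detected at various sample sizes, using a simple two-sample z-style test with a 1.96 critical value:

```python
import random
from statistics import NormalDist, mean, stdev

def detection_rate(n, shift_sd=0.5, trials=2000, seed=7):
    """Fraction of simulated experiments that detect a true shift of
    shift_sd standard deviations with n runs per condition."""
    rng = random.Random(seed)
    crit = NormalDist().inv_cdf(0.975)  # two-sided 5% critical value, ~1.96
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]       # baseline condition
        b = [rng.gauss(shift_sd, 1.0) for _ in range(n)]  # shifted condition
        se = ((stdev(a) ** 2 + stdev(b) ** 2) / n) ** 0.5
        if abs(mean(b) - mean(a)) / se > crit:
            hits += 1
    return hits / trials

for n in (5, 20, 64):
    print(f"n = {n:3d}  detection rate = {detection_rate(n):.2f}")
```

With only a handful of runs per condition, the subtle-but-real shift is missed most of the time; at several dozen runs, it is detected far more reliably. That is the resolving power the paragraph above describes.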
In summary, by settling for sample sizes that are too small, you can end up misleading yourself, your technical team, the project manager and the upper-management team responsible for your business. A lack of technical understanding of potential critical parameters can significantly affect reliability, safety and customer satisfaction. In short, you miss the chance to prevent a real problem that will hide until later.
Cost, Project Cycle Time & Sample Sizing
The issue of sample sizes and their effect on prototype costs, experimental time durations, and resource costs can be contentious. Studying design capability performance and identifying critical parameters without using appropriate sample sizes is a setup for design scrap and rework cycles. If you invest in the proper definition of sample sizes in the context of understanding the truth about your null and alternative hypotheses based upon 95% confidence intervals (see tutorial, below, for an explanation of these concepts), you are proactively preventing downstream development rework. If you restrict sample sizes and force developmental learning shortfalls, your schedule will likely slip later because of the lack of learning at the right stage of the development process. You will force the technical team to revisit the design's surprise difficulties that have accumulated and revealed themselves late in the development cycle. The only thing more costly and damaging to your business is releasing the under-developed product to unsuspecting customers.
Doing things right the first time in technology and product development heavily depends on the proper determination of sample sizes based upon reasonable confidence intervals (typically 90%, 95% or 99%). Ask yourself: can I stand behind my statements in a design review, or am I pretending to know something when, in fact, I have simply rushed past the realistic sample sizes that properly forecast future functional behavior?
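To see how the choice among 90%, 95% and 99% plays out, here is a short sketch (the readings are hypothetical, and the normal approximation is used for simplicity) computing a confidence interval for a mean at each level:

```python
from statistics import NormalDist, mean, stdev

# Hypothetical functional-response readings from n = 8 prototype runs
data = [9.8, 10.2, 10.1, 9.9, 10.4, 10.0, 9.7, 10.3]
m, s, n = mean(data), stdev(data), len(data)

for conf in (0.90, 0.95, 0.99):
    z = NormalDist().inv_cdf(0.5 + conf / 2)  # two-sided z critical value
    half = z * s / n ** 0.5                   # half-width: z * s / sqrt(n)
    print(f"{conf:.0%} CI: {m:.2f} +/- {half:.2f}")
```

The interval widens as the confidence level rises: the more certain you want to be in the design review, the more the same data hedge your statement, and only a larger n can tighten the interval back down.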