In product development there is a widespread evaluation scenario: conducting tests to prove a product is ready for launch. The tests are constructed so that, when passed, the product has earned the right to exit development and go into production. There is nothing wrong with establishing tests and doing your best to pass them.
Tests are a good way to compare performance to requirements. They provide a snapshot of how the embodiment of your product design is performing at any given time. Testing can reveal both positive and negative findings relative to standards, benchmarks, and requirements. If you have a comprehensive set of requirements, then what you learn from tests can be equally comprehensive; in fact, the mapping is one-to-one. If you can fulfill each requirement by passing its corresponding test, then you have direct evidence that the product meets what is required. But what do we really know about the subtleties of our designs from the running and passing of tests?
How do we actually learn about our designs? I believe development should be viewed as the path to successful learning, which in turn leads to successful testing. If just enough "development" is done to pass a test, most likely in a hurried fashion, what does it mean to "pass"? What if you pass most tests, fail a few, make some adjustments, and then pass? What do you now know about your technical design?
When I was a professor, certain students were passing the tests but gave me the uneasy impression that they were not learning the material well enough to serve them in the long run. I started asking questions and initiating discussions, and it did not take long to discover that students could pass tests and know very little engineering. With a mix of partial competence and luck, one can do just enough to pass a test. The same pattern is readily found in product development organizations. A development team can do just enough analysis, very limited prototype testing, and a few constrained iterations to get past the validation tests. With a little luck, the statistics fall just far enough in their favor to get by. While some teams get caught with their hand in the cookie jar, many will succeed in passing the tests, and their products go to market. One visible result of this practice is the high occurrence of recalls in the automotive industry, and automakers are not alone in this partial-development scenario, where test results are over-valued and product development teams "whistle past the graveyard" on the way to launch.
I would like to call us all to renewed attention to developmental learning and the mechanisms that enable it. These mechanisms are not new: 1) analytical model building and 2) empirical model building, both performed before any tests. Math modeling, which underlies analytical model building, is taught extensively in engineering schools across the world. Designed experimentation, which underlies empirical model building, is not so widely taught. DOE (Design of Experiments) is a family of configurable methods for planning, conducting, and analyzing experiments on prototype designs: prototypes built to be changed, so that we can learn from the purposeful changes. This has nothing to do with tests or the passing thereof; it is about really understanding your developing product!
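To make "empirical model building" concrete, here is a minimal sketch of a designed experiment: a 2^2 full factorial on a hypothetical two-factor prototype, with a first-order model fitted to the results. The factor layout, response values, and variable names are illustrative assumptions, not data from any real study.

```python
import numpy as np

# A 2^2 full factorial: every combination of two factors at two levels,
# coded -1 (low) and +1 (high). Each row is one purposeful change.
design = np.array([
    [-1, -1],
    [+1, -1],
    [-1, +1],
    [+1, +1],
])

# Hypothetical measured responses at the four design points.
y = np.array([12.1, 15.8, 13.0, 20.4])

# Fit a first-order model with interaction:
#   y = b0 + b1*x1 + b2*x2 + b12*x1*x2
X = np.column_stack([
    np.ones(len(design)),         # intercept
    design[:, 0],                 # main effect of factor 1
    design[:, 1],                 # main effect of factor 2
    design[:, 0] * design[:, 1],  # interaction effect
])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["b0", "b1", "b2", "b12"], coeffs.round(3))))
```

The fitted coefficients separate the contribution of each factor and their interaction, which is exactly the kind of developmental learning a pass/fail test never surfaces.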
Experimentation takes time and requires resources well beyond those required for tests. Experimentation will slow you down, make you think, and force you to learn. Experiments provide a trail of insight and build judgment. They enable you to answer how and why the design performs as it does under nominal and stressful conditions. With this kind of knowledge, your design can easily pass tests if the physics is capable.
I want to encourage you all to step back and weigh the value of developmental experimentation against the passing of tests. Tests serve a purpose, but they do not provide the insights that experimentation does. Spend time with your leaders and peers and enjoy a healthy discussion over the virtues of designed experimentation, the kind that uses orthogonal arrays to prevent multicollinearity in your data sets. If you run experiments without the disciplined pattern of an orthogonal array, your data can produce misleading results. Study the implications of multicollinearity in your data. Once you understand what ad-hoc data gathering by undisciplined, under-designed experimentation can do to your credibility, you will embrace the beauty and value of DOEs. Enjoy taking the time to truly conduct developmental learning, then use it to go pass those tests and build better, more reliable products, again and again!
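The orthogonality point is easy to see numerically. Below is a minimal sketch with entirely hypothetical factor settings: in a half-fraction factorial (an orthogonal array), every pair of factor columns is uncorrelated, so each effect can be estimated independently; in an ad-hoc data set where two factors happen to move together, their effects are confounded and no analysis can untangle them.

```python
import numpy as np

# A 2^(3-1) fractional factorial (x3 = x1 * x2): an orthogonal array.
orthogonal = np.array([
    [-1, -1, +1],
    [+1, -1, -1],
    [-1, +1, -1],
    [+1, +1, +1],
])
print(np.corrcoef(orthogonal, rowvar=False).round(2))
# Identity matrix: every off-diagonal correlation is 0, so the three
# factor effects can be estimated independently of one another.

# Ad-hoc data gathering: the first two factors were always changed
# together, so their columns are identical.
ad_hoc = np.array([
    [-1, -1, -1],
    [+1, +1, -1],
    [+1, +1, +1],
    [-1, -1, +1],
])
print(np.corrcoef(ad_hoc, rowvar=False).round(2))
# The first two factors correlate at 1.0: perfect multicollinearity.
# Any effect attributed to one could belong to the other, so a model
# fitted to this data will be misleading no matter how it is analyzed.
```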