“The only constant in life is change”
Despite the cliché, this quote from antiquity is truer today than it was then. And its meaning applies to more than just philosophy or relationships.
As the next part of my series of posts on the evolution of the manufacturing industry, I want to explain an idea that I’ve been developing for the past few years: change understanding.
The second industrial wave was ushered in by the Ford Motor Company and its assembly line. Ever since then, manufacturing engineers have worked hard to continuously tighten processes and make them more predictable — all in the name of making them more efficient and reliable.
Using methods like statistical process control (SPC) and model-based systems engineering, and establishing frameworks like Six Sigma, manufacturing engineers have spent decades structuring processes and making them rigid and repeatable.
In the 90s, robotics and other forms of automation, such as automated optical inspection (AOI), brought even more control over manufacturing processes. However, many of these systems take an immense amount of programming and setup to mimic (and often exceed) human precision. Behind these precise systems are still humans writing G-code, manually running big simulation jobs, and exporting the results to machines.
With all of this tightening and focus on control, the humans in the process — design engineers, technicians, manufacturing engineers, inventory managers — all fall into the grips of the “controlling system,” with the goal of achieving high predictability and, hopefully, quality. Do you want it to work? Well, then you had better fill out this form exactly right and send an email to the configuration management specialist using the subject line format shown in Section 2.1a of the factory manual ([CONFIGURATION CHANGE] Request to change PN 123–001 to PN 123–002).
Fortunately, the fourth industrial revolution is here, and the world is moving faster than ever. Unfortunately, however, manufacturing is still using many of the same tools and paradigms as in previous decades. One of the cornerstones of these methods, as alluded to previously, is change control. If anything is to change, it has to go through some central system. With more software delivering value on the factory floor, those changes have to be updated in multiple locations. This is crippling and often leads to chicken-and-egg problems.
Let’s say an engineer receives data from the field about increased vibration loads on a part. The company’s policy is that specifications must be updated in the PLM system, which then provides a new part number. However, there are already parts out on the manufacturing floor with the old part number. What needs to happen, and in what order?
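To make the ordering ambiguity concrete, here is a minimal Python sketch of one common PLM workaround, effectivity dates: each revision records when it cuts in, so in-process work keeps a consistent answer to "which part number applies?" even while both exist on the floor. The part numbers, dates, and the `effective_pn` helper are all hypothetical, not a real PLM API.

```python
from datetime import date

# Hypothetical revision history: instead of a single "current" part number,
# each revision carries the date it becomes effective (its cut-in date).
REVISIONS = [
    ("123-001", date(2023, 1, 1)),   # original spec
    ("123-002", date(2024, 6, 1)),   # updated for higher vibration loads
]

def effective_pn(order_start: date) -> str:
    """Return the part number in effect when a work order was started."""
    applicable = [pn for pn, eff in REVISIONS if eff <= order_start]
    return applicable[-1]  # revisions are listed in effectivity order

print(effective_pn(date(2024, 5, 15)))  # order started before cut-in -> 123-001
print(effective_pn(date(2024, 7, 1)))   # order started after cut-in  -> 123-002
```

Even with effectivity dates, the process question remains: someone still has to decide the cut-in date and what happens to old-part-number parts already in flight, which is exactly the coordination burden change control imposes.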
Chicken-and-egg cases like these, along with the increasing rate of change in hardware specifications for existing product lines, increased domestic and global competition, and a more dynamic supply chain, mean that companies are stuck between two competing priorities: optimizing for rigidity and control, while also optimizing for constant innovation and iteration of both design and processes.
This conundrum is exacerbated by having people in the loop: design engineers, quality engineers, and manufacturing engineers, each vying to exert their own influence on the company through design innovation, quality, and speed, respectively. At the end of the day, these are all valid demands from the customer. And unfortunately, driving three optimization parameters into change control systems designed for only one (or two, at most) breaks those systems, especially when they are legacy software with little incentive to innovate and help the company move into the 21st century.
A new way: Change understanding
Given the increasing variability, complexity, and interactions in environments like hardware manufacturing, there is a new way to deliver innovative hardware products quickly and reliably: throw change control out the window and replace it with change understanding.
Change understanding is the practice of taking in all of the available data in a system and analyzing it for clues that things are going wrong. In manufacturing, these data sources can be people, machines, materials, and business processes like purchase orders and customer deliveries. An effective change understanding system does not force its participants to shuttle all of their data into one central data store, nor does it gate transactions and interactions on its own monolithic rules engine to make sure each interaction is deemed “okay.” That’s too slow, too overbearing, and, with the amount of innovation you really want in your system, likely inaccurate.

Rather, a change understanding system should be architected so that providing data (or data exhaust) to the analysis node is incentivized and rewarded. For example, when Composites Workcenter 1 provides its composite ply cooling rates, the analysis node returns probabilities of composite debonding based on all of the information it has from Composites Workcenters 2, 3, and 4, along with an effective way for Workcenter 1’s supervisors to recall any potentially flawed product sent to the next workcenter in the last 24 hours.
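The incentive loop in the composites example (contribute process data, receive a pooled risk estimate in return) can be sketched in a few lines of Python. Everything here is an illustrative assumption: the `AnalysisNode` class, the window-based empirical estimate, and the cooling-rate numbers stand in for whatever real analytics a factory would run.

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisNode:
    # Pooled cooling-rate observations (deg C/min) from all workcenters,
    # each paired with whether that part later debonded.
    observations: list = field(default_factory=list)  # (rate, debonded)

    def contribute(self, workcenter: str, rate: float, debonded: bool) -> float:
        """Accept data from a workcenter; reward it with a pooled risk estimate."""
        self.observations.append((rate, debonded))
        return self.debond_probability(rate)

    def debond_probability(self, rate: float, window: float = 1.0) -> float:
        """Empirical debond rate among pooled observations near this cooling rate."""
        nearby = [d for r, d in self.observations if abs(r - rate) <= window]
        if not nearby:
            return 0.0
        return sum(nearby) / len(nearby)

node = AnalysisNode()
# Historical data contributed by Workcenters 2-4: fast cooling correlates
# with debonding in this made-up dataset.
for rate, debonded in [(2.0, False), (2.2, False), (5.0, True), (5.3, True), (5.1, False)]:
    node.contribute("WC-2/3/4", rate, debonded)

# Workcenter 1 reports a fast cooling run and immediately gets risk feedback.
risk = node.debond_probability(5.2)
print(f"Debond risk near 5.2 deg C/min: {risk:.2f}")
```

The point is the exchange, not the statistics: Workcenter 1 gets immediate, pooled feedback the moment it shares data, instead of filing a deviation report and waiting for a central system to approve the next step.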
We know that the proliferation of mobile phones has exploded the volume of interactions people generate in the workplace. We know that connected sensors and machines are producing more useful data than ever. We also know that new developments in materials (e.g. composites) and fabrication techniques (e.g. 3D printing and generative design) manage information in a more digital fashion (though speed alone doesn’t resolve the physical problems). Yet none of this increased data volume or probabilistic material innovation is useful if the data is gated and manually transacted through the same stop-and-go systems we used for the welding, forging, and riveting processes of the last century. Moreover, it’s not just that the systems are poorly architected. It’s that they assumed low connectivity across people, machines, and materials — an assumption that has since been flipped on its head.
To make that flip work, we need to ensure we are using that data to keep delivering high-quality products. Understanding and acting, rather than controlling and enforcing, is the only way we will be able to collectively deliver the meaningful transportation, exploration, and energy products we need to advance humankind in the 21st century. By accepting — not stifling — the increased volume of data, we will be able to do this faster and more reliably than our industrial predecessors could have dreamed of.