August 05, 2004

What Can the 9/11 Commission Learn from Toyota? Plenty

Toyota Disaster Intelligence

Slate.com ran a good article by Duncan Watts on what the 9/11 Commission could learn about intelligence gathering from how businesses respond in the wake of disaster, Toyota in particular.

It's largely a critique of the recommendation to centralize ultimate responsibility and decision-making in the hands of a Cabinet-level intelligence czar, an approach Watts argues is historically tempting but almost always wrong-headed:

    When organizations fail, our first reaction is typically to fall into "control mode": One person, or at most a small, coherent group of people, should decide what the current goals of the organization are, and everyone else should then efficiently and effectively execute those goals. Intuitively, control mode sounds like nothing so much as common sense. It fits perfectly with our deeply rooted notions of cause and effect ("I order, you deliver"), so it feels good philosophically. It also satisfies our desire to have someone made accountable for everything that happens, so it feels good morally as well.

    But when a failure is one of imagination, creativity, or coordination - all major shortcomings of the various intelligence branches in recent years - introducing additional control, whether by tightening protocols or adding new layers of oversight, can serve only to make the problem worse.

Watts then turns to the 1997 Toyota catastrophe, when the company's only factory making brake valve assemblies burned to the ground, threatening to shutter the company and bring its entire 15,000-car-per-day production screeching to a halt. "Clearly, then, Toyota, along with the more than 200 other companies that are members of the extended Toyota group, had ample incentives to find a solution."

    [T]hey succeeded, but not in the way one might have expected. Rather than relying on the guidance and coordination of an inspired leader (control mode), the response was a bewildering display of truly decentralized problem solving: More than 200 companies reorganized themselves and each other to develop at least six entirely different production processes, each using different tools, different engineering approaches, and different organizational arrangements. Virtually every aspect of the recovery effort had to be designed and executed on the fly, with engineers and managers sharing their successes and failures alike across departmental boundaries, and even between firms that in normal times would be direct competitors.

    Within three days, production of the critical valves was in full swing, and within a week, production levels had regained their pre-disaster levels. The kind of coordination this activity required had not been consciously designed, nor could it have been developed in the drastically short time frame required. The surprising fact was that it was already there, lying dormant in the network of informal relations that had been built up between the firms through years of cooperation and information sharing over routine problem-solving tasks. No one could have predicted precisely how this network would come in handy for this particular problem, but they didn't need to - by giving individual workers fast access to information and resources as they discovered their need for them, the network did its job anyway.

The central truth here is that it's far more important to build networks of informal, social relationships among people who can get things done in the event of disaster, because it's practically impossible to have a contingency plan for every surprise in business or in government. When surprises happen, what matters most to the survivability of the firm (and to a nation's security) is that its people come together to solve the problems created and get things back to normal as quickly as possible.

And no centralized intelligence director can force that to happen.

- Arik

Posted by Arik Johnson at August 5, 2004 09:50 AM