In his book, “Fooled by Randomness,” celebrated author Nassim Nicholas Taleb says, “This high-yield market resembles a nap on a railway track. One afternoon, the surprise train would run you over.” This is a dig at stock-market trading based on the calculation of “odds.” In other words, probability.

Allow me to elaborate.

The foundation of most statistical analysis, and hence of inference and prediction, is probability. By definition, probability is the likelihood that an event occurs, or that an experiment yields a certain outcome (an event). Applied to a roll of an unbiased, six-sided die, the theoretical probability of rolling a six (or any particular number) is 1/6.

The key word we conveniently gloss over is “theoretically.” Theoretical, because the observed frequency of an event only approaches the number suggested by the classical calculation of probability as we perform an infinite number of trials of the experiment, or runs of the process.

For example, it would be nice if a probability of 1/6 of rolling a six on a fair die meant that on every sixth roll, or on one in six rolls, I’ll get a six. We all know that is not how it works, and the definition does not claim this either. Even if, for the sake of argument, we could run “infinite” trials, the information still would not enable us to make a useful decision, because of possible event clustering.
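To make that concrete, here is a quick simulation sketch in Python (the trial counts and random seed are arbitrary choices of mine). It rolls a fair die repeatedly and reports the observed frequency of sixes alongside the longest drought without one: the frequency only crawls toward 1/6 as the trial count grows, and the droughts show the clustering that the single number 1/6 hides.

```python
import random

def roll_frequencies(trials: int, seed: int = 42) -> None:
    """Roll a fair six-sided die `trials` times and report how often a six appears."""
    rng = random.Random(seed)
    sixes = 0
    drought = longest_drought = 0
    for _ in range(trials):
        if rng.randint(1, 6) == 6:
            sixes += 1
            drought = 0
        else:
            drought += 1
            longest_drought = max(longest_drought, drought)
    print(f"{trials:>7} rolls: observed P(six) = {sixes / trials:.4f} "
          f"(theory ~0.1667), longest run without a six = {longest_drought}")

for n in (60, 600, 6_000, 600_000):
    roll_frequencies(n)
```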

Consider a process operating at Six Sigma levels, meaning it produces 3.4 defective products for every one million it makes. The probability of getting a defective item is roughly 1 in 300,000. The probability of, for instance, my wife and me both getting a defective item is infinitesimally small. But it happens pretty often. And on paper, we still have that spectacular probability number, since the machine autocorrected (after it produced 10 defective parts in a row) to produce the next 3 million parts correctly. In quality terms, this is called “special cause variation.” In my opinion, any phenomenon that has a name occurs fairly often. So, special causes aren’t really that special.
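The arithmetic behind those Six Sigma figures is trivial, which is part of the point. Here is a back-of-the-envelope sketch, assuming independence between items (an assumption the autocorrecting machine in the story plainly violates):

```python
# Long-run defect rate at "Six Sigma" quality: 3.4 defects per million opportunities.
p_defect = 3.4 / 1_000_000

# Roughly one chance in ~294,000 that any single item is defective.
print(f"P(one defective item)              = {p_defect:.2e}  (~1 in {1 / p_defect:,.0f})")

# If two purchases were truly independent, both being defective would be vanishingly rare...
print(f"P(two independent items defective) = {p_defect ** 2:.2e}")

# ...yet a burst of 10 consecutive defects barely moves the long-run average:
burst, total = 10, 3_000_000
print(f"Rate after a 10-defect burst within {total:,} parts = "
      f"{burst / total * 1_000_000:.2f} per million")
```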

This brings us to a concept I call “instantaneous probability,” which is an extension of the concept that the outcome of each trial of an experiment is independent of the outcomes of previous trials.

I define instantaneous probability (IP) as the probability of an event occurring, based on the possibility of its occurrence, i.e. if it is possible for an event to occur, it may occur, independent of the odds.

This translates into the following corollaries:

IP is zero in the case of mistake proofing to ensure the event cannot occur.

IP is one in the case of mistake proofing to ensure that the event occurs.

IP is 0.5 if the event is not mistake proofed one way or the other.
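If it helps to see those corollaries as something executable, here is a minimal sketch; the enum and function names are my own invention for illustration, not an established formalism.

```python
from enum import Enum

class MistakeProofing(Enum):
    PREVENTS_EVENT = "prevents"   # the event cannot occur
    FORCES_EVENT = "forces"       # the event must occur
    NONE = "none"                 # nothing constrains the outcome

def instantaneous_probability(proofing: MistakeProofing) -> float:
    """IP per the three corollaries: 0 if the event is designed out,
    1 if it is designed in, 0.5 if it is merely possible."""
    return {
        MistakeProofing.PREVENTS_EVENT: 0.0,
        MistakeProofing.FORCES_EVENT: 1.0,
        MistakeProofing.NONE: 0.5,
    }[proofing]

print(instantaneous_probability(MistakeProofing.NONE))  # 0.5
```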

To make sure we’re all on the same page here, let us consider a customer ordering a meal at a restaurant with no set menu. The probability that the chef receives an order he can prepare is infinitesimally small. The calculation is tedious, inaccurate and, worst of all, useless, since it doesn’t help ascertain, with any degree of accuracy, whether or not what the customer orders can be prepared. The simple representation of IP is 0.5, since the chef may or may not be able to prepare what the customer orders. There are two alternative paths to ensure that every order received can be prepared. The first is to equip the kitchen with every known ingredient, every piece of culinary equipment in existence and chefs who know every recipe ever conceived. Even then, the end goal may not be achieved, since some recipes require an extended preparation or cooking time.

Alternatively, the way the orders are accepted can be changed. It’s a no-brainer, in my opinion. So a menu is introduced, which is presented to the customer before they order. Now, while the probability of the customer ordering something the chef can prepare increases dramatically, the IP still stays at 0.5. This is because the customer might still order something that, in their opinion, fits the general theme of the menu. Based on the IP of 0.5, change is still required, so now the waiter is instructed to accept orders only from the menu. Voilà! The IP now catches up with the probability as they both reach the magical figure of one, and the chef will never have an order he can’t prepare. Just to state the obvious for the sake of completeness, the IP of the chef receiving an order he can’t prepare is now zero, and we can rest in peace on this front.
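The restaurant example can be expressed in the same terms. In the toy sketch below (menu items and function names are invented purely for illustration), the first function leaves the IP of an unpreparable order at 0.5 because anything can still reach the kitchen, while the second drives it to zero by rejecting off-menu requests at the point of entry.

```python
MENU = {"margherita pizza", "mushroom risotto", "tiramisu"}

def take_order_politely(requested: str) -> str:
    """Menu is shown, but any order is accepted: IP of an unpreparable order stays 0.5."""
    return requested  # the customer may still order something off-theme

def take_order_from_menu_only(requested: str) -> str:
    """Orders are accepted only from the menu: IP of an unpreparable order is 0."""
    if requested not in MENU:
        raise ValueError(f"'{requested}' is not on the menu; choose from {sorted(MENU)}")
    return requested

# The chef can receive something unpreparable here...
take_order_politely("beef wellington")
# ...but never here: the invalid request is rejected before it reaches the kitchen.
# take_order_from_menu_only("beef wellington")  # raises ValueError
```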

The intermediate values that classical probability takes tend to mislead the best of us into a false sense of security. The bottom line is that no matter how small the probability of an event, its non-occurrence is never guaranteed. Invariably, the unfavorable events with the smallest probabilities of occurrence have the most catastrophic effects. This view of the world is supported by chaos theory, the thermodynamic law of entropy, the more humorous Murphy’s Law, the hundreds of millions of insurance policies currently active, the thousands of ex-policyholders who wound up on the street in spite of insurance, and even a number of insurance companies that have gone out of business.

While we don’t have a choice in the matter in every scenario, it seems imperative that we do take action where possible. Probability oversimplifies our world view, but the alternative is even simpler—mistake proofing (Poka Yoke, as the Japanese call it), as explained in the example above.

The practical, day-to-day industry application of IP is simply to provide a zero-calculation measure of whether or not a process needs work. This enables us to move on to the more important task—preventing defects. The end goal must be to mistake proof everything that is within our control, to make the result independent of the individual (and dependent only on the process), and to mistake proof the outcome of the process, i.e. to enhance the process until the IP of the occurrence of a failure changes from 0.5 to zero.

At first glance, not much seems to be in our control. But opportunities present themselves if we look carefully. All that is needed is to identify failure points, figure out the root cause, and install a failsafe.

Let’s take one of our day-to-day processes as an example: reporting, or MI generation. It’s a common process in most industries and is fraught with issues. Vendor invoices have as many formats as there are vendors. Customers invariably mess up forms while filling them in. Employees will inevitably make mistakes in manual data entry. The list is endless, and well represented at every step of the process. To design this process for Poka Yoke, we could simply have the data made available in a database, accessible to anyone based on login credentials. Whoever needs a report would log in, specify the data and format requirements, and extract the report.

Unfortunately, most of us already have reporting processes running that are far from efficient and error-free. Once we begin by eliminating unnecessary reporting and fancy formatting, we can treat each remaining process as a separate entity to Poka Yoke, so that a couple of inefficient stages do not rob us of a good rolled throughput yield. For instance, we can take a hint from the restaurant above and standardize the inputs to our process, both in terms of data and report requests, through drop-down menus, radio buttons, other data validation, compulsory fields, etc., to ensure complete and valid requests. Eliminating all unnecessary steps, followed by applying “Jidoka,” or “autonomation,” to all subsequent steps, will ensure that we have an error-free and efficient process, all without calculating a single probability.
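To ground the “standardize the inputs” idea, here is one way a report request could be mistake-proofed in code. The field names and allowed formats are placeholders of my own, not a prescription: the enum plays the role of the drop-down menu, and fields without defaults play the role of compulsory fields.

```python
from dataclasses import dataclass
from enum import Enum

class OutputFormat(Enum):      # the "drop-down menu": only these values can exist
    CSV = "csv"
    XLSX = "xlsx"
    PDF = "pdf"

@dataclass(frozen=True)
class ReportRequest:
    dataset: str               # compulsory field: no default, so it cannot be omitted
    start_date: str            # ISO date, e.g. "2024-01-01"
    end_date: str
    output_format: OutputFormat

    def __post_init__(self):
        if not self.dataset.strip():
            raise ValueError("dataset must not be blank")
        if self.end_date < self.start_date:
            raise ValueError("end_date must not precede start_date")

# A malformed request fails at creation time, long before it can produce a bad report:
ok = ReportRequest("vendor_invoices", "2024-01-01", "2024-03-31", OutputFormat.CSV)
# bad = ReportRequest("", "2024-03-31", "2024-01-01", OutputFormat.PDF)  # raises ValueError
```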

Don’t reduce the probability of an error; make the IP zero. Instead of designing for Six Sigma, the focus must be on designing for Poka Yoke. Instead of trying to achieve functionality, we must work to ensure it.