Consider any process, whether it is machining a shaft diameter or answering the phone in a call centre: we know it will vary. If we collect data over time there will be an average diameter or answering time, and the individual values will be dispersed around that average.

If we know nothing about statistics and variation, we may try to adjust the machine, or do something about call answering times, every time we see a value either above or below the average. But what if the diameter is too low, we adjust it upward, and the next value would have been higher anyway? Adjusting the process in this way actually makes performance worse; this is technically known as over-adjustment.

If we plot the values on a chart, with the average value at the centre, what are the chances of an individual value being either above or below the average line? They are 50:50. What about two points in a row falling on one side of the line? 1 in 4. Three points? 1 in 8. As the number of points increases, it becomes less and less likely that they will all be on the same side of the line.

We can work out that 8 points in a row on one side of the average has only about a 4 in 1,000 chance of happening if the process is operating normally, and 9 points has less than a 2 in 1,000 chance. So if this does happen, chances are something is going on that we ought to investigate!

Statistical control means using probability theory to adjust the process only when it is unlikely that the variation is caused by normal process performance. A process whose only variation comes from normal process performance is said to be in control, or stable.
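In practice this is applied as run rules on a control chart: we react only when something improbable actually happens, such as 8 points in a row on one side of the centre line. The sketch below assumes that single rule and a made-up series of shaft diameters; it is illustrative only, not a complete set of control-chart rules.

    def eight_in_a_row(values, centre):
        # Return the index of the point that completes the first run of
        # 8 consecutive values on the same side of the centre line,
        # or None if no such run occurs.
        run_side, run_length = 0, 0
        for i, v in enumerate(values):
            side = 1 if v > centre else -1 if v < centre else 0
            if side != 0 and side == run_side:
                run_length += 1
            else:
                run_side = side
                run_length = 1 if side != 0 else 0
            if run_length >= 8:
                return i
        return None

    # Hypothetical shaft diameters (mm); the later readings drift above 25.00.
    diameters = [24.99, 25.01, 25.00, 24.98, 25.02, 25.01, 25.02,
                 25.01, 25.03, 25.02, 25.01, 25.02, 25.03]
    print(eight_in_a_row(diameters, centre=25.00))  # prints 11

Nothing is done about the individual ups and downs; the process is only investigated when the evidence (here the point at index 11, which completes the run of eight) makes normal variation an unlikely explanation.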

 
