Glossary & References

This section contains additional detail and external references for some of the models and techniques mentioned in the main sections.

Linkage of Process (LOP) Map

This is a high-level view of an organisation's main processes, the linkages between them and, where appropriate, the information that flows between them. As in the example below, it can be split into three sections, called Driver, Core and Support, as a means of separating the function of each process within the organisation. For simpler maps, this may not be necessary. An LOP is easy to create and can have a number of uses:

  • As an overall view of a complex organisation, where departments are often “siloed” and the complete end-to-end view is rarely seen. It can help identify potential areas for improvement.

  • As a simple method for prioritising improvement focus within an organisation's processes, via the application of a rating (condition and impact) to each process.

  • As insurance against creating unintended consequences in connected processes, when a particular process is being changed or improved. It’s easy to forget that a change to a particular process can affect others.
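The rating-based prioritisation in the second bullet can be sketched in a few lines of Python. The process names, scales and the idea of multiplying the two scores are illustrative assumptions, not a prescribed method:

```python
# Hypothetical sketch: rate each process on an LOP map for condition
# (1 = healthy .. 5 = broken) and impact (1 = low .. 5 = high), then rank
# by the product of the two scores to find improvement candidates.
processes = [
    {"name": "Order Intake",  "condition": 2, "impact": 5},
    {"name": "Invoicing",     "condition": 4, "impact": 4},
    {"name": "HR Onboarding", "condition": 3, "impact": 2},
]

for p in processes:
    p["priority"] = p["condition"] * p["impact"]

# Highest priority score first: the best candidates for improvement.
ranked = sorted(processes, key=lambda p: p["priority"], reverse=True)
for p in ranked:
    print(f'{p["name"]}: {p["priority"]}')
```

Any scoring scheme would work; the point is simply that a numeric rating turns the map into a ranked improvement backlog.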

 

SIPOC

This stands for Supplier > Input > Process > Output > Customer (the customer may be the next downstream process or the final external customer, depending on where the process sits). It's a table with those headings and can be modified to include other items if required, such as System and Operator. Like the LOP, it is easy to create and can have a number of uses:

  • As a complement to the LOP, where a lower level of detail is required for one or more processes. Useful for mapping the “current state” of an organisation.

  • As a tool for gathering requirements prior to designing a new process or modifying an existing one.

  • As a final detailed procedure, which may need to include task identifiers (P1, P2 etc), IT systems used and operator/agent identifiers (job roles, titles etc).
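As a sketch, a SIPOC with the optional System and Operator columns and task identifiers can be held as a simple table. All row content here is invented for illustration:

```python
# Hypothetical SIPOC table, including the optional System and Operator
# columns and P1/P2-style task identifiers mentioned above.
headers = ["Supplier", "Input", "Process", "Output", "Customer", "System", "Operator"]
rows = [
    ["Sales", "Signed order", "P1: Validate order", "Approved order",
     "Fulfilment", "CRM", "Order clerk"],
    ["Fulfilment", "Approved order", "P2: Pick and pack", "Shipped goods",
     "End customer", "WMS", "Warehouse agent"],
]

# Print the table with columns padded to the widest cell in each column.
widths = [max(len(str(x)) for x in col) for col in zip(headers, *rows)]
for row in [headers] + rows:
    print("  ".join(str(x).ljust(w) for x, w in zip(row, widths)))
```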

Approaches to Improvement

TQM (Total Quality Management)

Popular in the 1980s, its focus was on improving processes to improve customer satisfaction. Typical tools used were the cause-and-effect diagram, check sheet, control chart, histogram, Pareto chart, scatter diagram, stratification and the PDCA cycle.

Ref: https://en.wikipedia.org/wiki/Total_quality_management
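Of the tools listed, the Pareto chart is easy to illustrate in code: sort the causes by frequency and track the cumulative percentage to find the "vital few". The defect categories and counts below are invented:

```python
from itertools import accumulate

# Hypothetical defect counts by category. A Pareto analysis sorts them in
# descending order and tracks the cumulative percentage, highlighting the
# few causes that account for most of the defects.
defects = {"Scratch": 42, "Misalignment": 18, "Wrong colour": 7,
           "Dent": 30, "Other": 3}

ordered = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
total = sum(defects.values())
cum = list(accumulate(count for _, count in ordered))
for (cause, count), c in zip(ordered, cum):
    print(f"{cause:14} {count:3}  {100 * c / total:5.1f}%")
```

Here the top two causes (Scratch and Dent) account for 72% of all defects, which is where a TQM team would focus first.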

Six Sigma

Successor to TQM, the term was coined by engineer Bill Smith at Motorola in 1986 and refers to a defect rate of less than 3.4 defects per million opportunities (DPMO). Tools are similar to those used in TQM, plus a few more. Improvement projects follow a number of phases defined as DMAIC (Define, Measure, Analyse, Improve, Control). The focus is on variation and stability, which makes the approach well suited to high-volume transactional processes.

Ref: https://en.wikipedia.org/wiki/Six_Sigma

Lean

This is all about identifying and maximising what is of value to the customer and, in doing so, eliminating waste. It was pioneered by Toyota in their Toyota Production System (TPS).

Ref: https://en.wikipedia.org/wiki/Lean_manufacturing

Common Terms

Some Japanese terms, and one German term, seen in the area of waste reduction:

  • Andon: Stop signal. (when there is a quality problem)

  • Genchi Genbutsu: Go and see for yourself.

  • Hansei: Reflection. (thinking over – linked to Kaizen)

  • Heijunka: Level out the workload. (see Muda, Muri and Mura)

  • Jidoka: Stop when there is a quality problem.

  • Kaizen: Continuous Improvement.

  • Kanban: Visual signal. (to signal a “pull” request)

  • Muda: Waste

  • Mura: Unevenness

  • Muri: Overburden

  • Nemawashi: Decision by consensus, with all options considered, followed by rapid implementation.

  • One Piece Flow: Just-in-time put into practice, with minimum waste.

  • Poka-Yoke: Mistake Proofing

  • Takt Time (German): Rate of customer demand

Ref: https://en.wikipedia.org/wiki/Toyota_Production_System
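Takt time, the last term in the list above, is the one with a simple formula: available working time divided by customer demand over that period. The shift length and demand figures below are invented:

```python
# Takt time = available working time / customer demand over that period.
def takt_time(available_seconds, demand_units):
    return available_seconds / demand_units

# Hypothetical example: a 7.5-hour shift (27,000 seconds) must satisfy
# demand for 450 units, so one unit must be completed every 60 seconds.
print(takt_time(27_000, 450))  # 60.0 seconds per unit
```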

PDCA Cycle
A short iterative improvement loop: Plan > Do > Check > Act. It is listed among the TQM tools above and underpins most structured improvement work: plan a change, try it on a small scale, check the results and act on what was learned.

A3 Report

The example below illustrates the format, which is created on an A3 sheet of paper, hence the name. Even though it is designed to be a working summary, it still contains the essential detail to allow understanding, consensus and approval across all teams as the project progresses.

Control Charts

Control charts were invented by Walter A. Shewhart in 1924 and provide an excellent way to monitor a process over time. They can indicate whether a process is stable or unstable, they can detect various forms of special cause variation, and they can confirm the impact of process improvement activities. There are different types of chart for continuous data (height, weight, temperature, etc.) and attribute data (count, percentage, yes/no, etc.). Some of the most common chart types are shown below.

[Figure: ChartTypes.gif - common control chart types]

Subgroups

You can collect data for your charts either as individual data points or as a small number of data points collected at the same time, which are then averaged to become a single point on your chart. The latter is called subgrouping and has the ability to separate the sources of variation, as explained below.

There are two charts created for each of the continuous data types (XmR, XbarR and XbarS). The X chart monitors the data values and the R/S chart monitors the data ranges. With the exception of the XmR chart (an individuals chart, where the subgroup = 1 data point), they use the concept of stratifying data observations into rational subgroups, which allows the X chart to focus on the variation between subgroup averages and the R/S chart to focus on the variation within subgroups. A common method is to use time order for the stratification, which allows detection of the causes of variation that occur over time, although other factors of interest may be used, such as a particular supplier or machine. In general, you would form subgroups so that there is the minimum chance for variation to occur within each subgroup.
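The XbarR calculation can be sketched as follows. The subgroup measurements are invented; A2, D3 and D4 are the standard tabulated control chart constants for a subgroup size of 5:

```python
from statistics import mean

# Sketch of Xbar-R control limits for subgroups of size 5, using the
# standard tabulated constants A2 = 0.577, D3 = 0, D4 = 2.114 for n = 5.
# The measurement data below is invented for illustration.
subgroups = [
    [10.1, 9.9, 10.0, 10.2, 9.8],
    [10.0, 10.1, 9.9, 10.0, 10.0],
    [9.8, 10.2, 10.1, 9.9, 10.0],
]
A2, D3, D4 = 0.577, 0.0, 2.114

xbars = [mean(s) for s in subgroups]           # subgroup averages (X chart points)
ranges = [max(s) - min(s) for s in subgroups]  # subgroup ranges (R chart points)
xbarbar, rbar = mean(xbars), mean(ranges)      # chart centre lines

ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar
print(f"X chart: centre {xbarbar:.3f}, limits {lcl_x:.3f} .. {ucl_x:.3f}")
print(f"R chart: centre {rbar:.3f}, limits {lcl_r:.3f} .. {ucl_r:.3f}")
```

In practice you would feed in at least 25 subgroups (see the sample size note below); three are used here only to keep the sketch short.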

Sample Size

Control charts rely on normally distributed, time-ordered data for their ability to accurately detect special cause variation, although the normality limitation can be mostly overcome by using a suitable sample size. Most SPC software applications will suggest an optimum size, but a general rule would be a minimum of 25 consecutive subgroups or 100 consecutive data points, with charts for attribute data requiring more data points than charts for continuous data.

Interpreting the Charts

There are a number of patterns that can be used to indicate any special cause variation that may be present in your process. The patterns are usually defined in “sets”, which have been developed by different individuals and organisations over time. An example set, shown below, is taken from “Introduction to Statistical Process Control”, 4th edition, by Douglas C. Montgomery.

Control Chart Rules

[Figure: VariationRulesDrawing.png - special cause variation rule patterns]

Whilst these rules will give a good indication of potential instability, you still have to relate them to your particular process and its environment to discover root causes.
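As an illustration, two tests that appear in Montgomery's set and most others can be sketched in code: a point outside the 3-sigma limits, and a run of nine consecutive points on one side of the centre line (the exact run length varies between rule sets). The data series is invented:

```python
# Two common special cause tests, applied to a list of chart points.
def outside_limits(points, centre, sigma):
    """Indices of points beyond the 3-sigma control limits."""
    return [i for i, x in enumerate(points) if abs(x - centre) > 3 * sigma]

def long_runs(points, centre, run_length=9):
    """Indices ending a run of `run_length` points all on one side of centre."""
    hits, run, side = [], 0, 0
    for i, x in enumerate(points):
        s = 1 if x > centre else -1 if x < centre else 0
        run = run + 1 if s == side and s != 0 else 1  # extend or restart the run
        side = s
        if s != 0 and run >= run_length:
            hits.append(i)
    return hits

# Invented data: one 3-sigma breach, then a long run above the centre line.
data = [0.1, -0.2, 3.5, 0.0, 0.3, 0.2, 0.4, 0.1, 0.5, 0.2, 0.3, 0.1, 0.2]
print(outside_limits(data, centre=0.0, sigma=1.0))  # [2]
print(long_runs(data, centre=0.0))                  # [12]
```

A real SPC package applies the full rule set automatically; the value of a sketch like this is seeing that each rule is just a pattern match over the plotted points.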

This short introduction to control charts should give you an idea of what they are and how they can help you manage your processes effectively. They can supply far more information than their predecessor, the humble run chart, and are well worth exploring in more detail. There is a wealth of on-line information available on this topic.

Statistical Terms and Tests

Many of the tests used in process analysis can be performed in a spreadsheet application such as Microsoft Excel, using its Analysis ToolPak add-in and other built-in functions, although, as with control charts, a dedicated SPC application will make life much easier. What follows are definitions of some terms used, what the tests do and how to interpret the results. I have only included some of the most common terms and tests applicable to process analysis, in order to illustrate what can be achieved.

Hypothesis Testing

Before using tools such as t-tests, F-tests, ANOVA and Chi-Square tests, a null hypothesis (Ho) should be created, which essentially states that there is no statistically significant difference or relationship between variables, hence the word "null". There is also an alternative hypothesis (Ha), which states the opposite. For example, if you are testing whether two samples have means which are statistically the same, then Ho simply states that the means are the same.

Interpreting the Results

The test will either reject or fail to reject the null hypothesis (never accept it, as statistics has been called the art of never having to be sure) at a certain significance level, called alpha, which is usually set at 0.05 (5%). A common test output is a p-value, which is the probability of obtaining a test statistic at least as extreme as the one observed, assuming the null hypothesis is true. So you would reject Ho if p <= 0.05.

In addition to p-values, some tests also produce test statistics together with associated "critical values", which can be used to further test Ho. If the test statistic is lower than the critical value, or within the critical range (for a two-tailed test), you would not reject Ho.
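The two-sample example above can be sketched end to end. The samples are invented, and to keep the sketch self-contained the p-value uses a normal approximation to the t distribution (reasonable for larger samples); a spreadsheet or SPC package computes it exactly:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic, with an approximate two-tailed p-value."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    # Approximate two-tailed p-value from the standard normal CDF.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, p

# Invented samples, e.g. cycle times before and after a process change.
sample_a = [5.1, 4.9, 5.0, 5.2, 5.0, 4.8, 5.1, 5.0]
sample_b = [5.6, 5.4, 5.5, 5.7, 5.5, 5.3, 5.6, 5.5]
t, p = welch_t(sample_a, sample_b)
print(f"t = {t:.2f}, p = {p:.4f}")  # reject Ho at alpha = 0.05 when p <= 0.05
```

Here the means differ by half a unit against very little within-sample variation, so p is far below 0.05 and Ho (that the means are the same) is rejected.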

In the table below, I have assumed that the software used will calculate a “p-value”, where appropriate.

[Table: Stats Definitions.gif - definitions of statistical terms]
[Table: Stats Tests.gif - common statistical tests and their interpretation]