Dr. Wayne Taylor - Taylor Enterprises, Inc.
Selecting Statistically Valid Sampling Plans
Form 483 is used by the Food & Drug Administration (FDA) for reporting adverse findings resulting from one of their inspections. Numerous 483s have cited sampling plans as not being "statistically valid" or as lacking statistical justification. So what does it take for a sampling plan to be statistically valid? Selecting statistically valid sampling plans is a two-part process. First, one must clearly define the objective of the inspection. Second, one must demonstrate that the sampling plan allows this objective to be met. This process will be demonstrated through a series of examples.
HOW SAMPLING PLANS WORK
Sampling plans are used to make product disposition decisions. They decide which lots of product to accept and release and which lots to reject and either rework or discard. Ideally, a sampling plan should reject all "bad" lots while accepting all "good" lots. However, because the sampling plan bases its decision on a sample of the lot and not the entire lot, there is always a chance of making an incorrect decision. The behavior of a sampling plan is described by the sampling plan's Operating Characteristic (OC) curve. Figure 1 shows the OC curve of the single sampling plan with sample size n=50 and acceptance number a=1. The bottom axis gives different process percent defectives. The left axis gives the corresponding probability of acceptance. For example, find 3% on the bottom axis. Then draw a line up to the curve and then across to the left axis. The corresponding probability of acceptance is 0.56. Likewise the probability of accepting 1% defective lots is 0.91 and the probability of accepting 7% defective lots is 0.13.
Figure 1: OC Curve of Single Sampling Plan n=50 and a=1
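These OC-curve points can be computed directly from the binomial distribution: the lot is accepted whenever the sample contains a or fewer defectives. A minimal sketch in Python (the helper name is illustrative, not from any standard):

```python
from math import comb

def prob_accept(p, n, a):
    """Probability of accepting a lot with true fraction defective p,
    under the single sampling plan with sample size n and accept number a:
    accept when the sample contains a or fewer defectives (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(a + 1))

# Points on the OC curve of the n=50, a=1 plan discussed in the text
for p in (0.01, 0.03, 0.07):
    print(f"{p:.0%} defective -> Pa = {prob_accept(p, 50, 1):.2f}")
```

This reproduces the 0.91, 0.56 and 0.13 read off the curve in Figure 1.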
OC curves are generally summarized by two numbers: the Acceptable Quality Level (AQL) and Lot Tolerance Percent Defective (LTPD). The AQL is that percent defective with a 95% chance of acceptance. From Figure 1, the AQL is 0.72% defective. At the AQL, 95% of the lots are accepted. The LTPD is that percent defective with a 10% chance of acceptance. Its value is 7.6%. At the LTPD, 90% of the lots are rejected.
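Under the same binomial model, the AQL and LTPD of any single sampling plan can be recovered numerically. The sketch below (function names are my own) bisects on the fraction defective until the probability of acceptance hits 0.95 or 0.10, reproducing the values quoted for the n=50, a=1 plan:

```python
from math import comb

def prob_accept(p, n, a):
    """Binomial probability of acceptance: a or fewer defectives in n samples."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(a + 1))

def quality_at_pa(target_pa, n, a):
    """Fraction defective at which the plan's probability of acceptance
    equals target_pa (Pa falls as p rises, so bisect on p)."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if prob_accept(mid, n, a) > target_pa:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

aql  = quality_at_pa(0.95, 50, 1)   # quality with a 95% chance of acceptance
ltpd = quality_at_pa(0.10, 50, 1)   # quality with a 10% chance of acceptance
print(f"AQL = {aql:.2%}, LTPD = {ltpd:.2%}")  # about 0.72% and 7.6%
```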
What protection does the single sampling plan n=50 and a=1 provide? Since sampling plans make accept/reject decisions, this question is best answered in terms of what the sampling plan accepts and what it rejects. The AQL describes what a sampling plan will accept. Lots at or below 0.72% defective are routinely accepted. Routinely, in this case, means at least 95% of the time. The LTPD describes what the sampling plan rejects. Lots at or above 7.6% defective are routinely rejected. Routinely, in this case, means at least 90% of the time. Lots between the AQL and LTPD are sometimes accepted and sometimes rejected.
You should document the AQLs and LTPDs of all your sampling plans. The AQLs and LTPDs of Mil-Std-105E and ANSI/ASQC Z1.4 sampling plans can be determined using Table X of those standards. The software accompanying Taylor (1992) can be used to obtain the AQL and LTPD of any sampling plan.
JUSTIFYING SAMPLING PLANS
Documenting the protection provided by your sampling plans is half the job. You must also provide justification for these AQLs and LTPDs. This requires that the purpose of each inspection be clearly defined. Sampling plans can be used for a variety of objectives depending on past history and other circumstances. Let us look at several examples. All these examples assume that one is inspecting for major defects. It is common practice to assign an AQL to each such group of defects. The AQL given in the specification is not necessarily equal to the AQL of the sampling plan. To make this distinction clear, the AQL given in the specification will be referred to as the Spec-AQL. Informal standards have developed in different industries as to appropriate Spec-AQLs. In the medical device industry, major defects are generally assigned AQLs of 0.4%, 0.65% and 1.0% defective. In the pharmaceutical industry, AQLs of 0.25%, 0.4% and 0.65% are common. Each industry has its own practices. All the following examples assume a Spec-AQL of 1.0% defective.
The way Spec-AQLs are commonly interpreted is that lots up to the Spec-AQL should be released while lots above the Spec-AQL should be rejected. The Spec-AQL therefore represents the break-even quality between accepting the lot and rejecting the lot. Lots above the Spec-AQL are best rejected. Lots below the Spec-AQL are best released. For lots below the Spec-AQL, the cost of 100% inspection exceeds its benefits in terms of fewer defects going to the customer. It is not in your customer's best interest to spend $1000 to 100% inspect a lot if only 1 defect is found that otherwise would have cost the customer $100. Ultimately the customer ends up paying the cost.
Spec-AQLs should not be interpreted as providing permission to produce defects. Given a sample containing a single defect, it is best to release the lot. However, it would be better not to make the defect in the first place. Manufacturing should strive for zero defects. However, once the lot of product is produced, the Spec-AQL provides guidance on making product disposition decisions.
Within an industry, Spec-AQLs are generally consistent. However, the same is not true of the LTPDs of the sampling plans used. As the following examples demonstrate, selection of the LTPD depends on a variety of factors including past performance, other controls that are in place and the types of potential failures. Different manufacturers, and even plants within a single manufacturer, can have dramatically different LTPDs.
Example 1: If the process consistently produces lots above the Spec-AQL, all lots should be 100% inspected. If at least some of the lots are below the Spec-AQL, one could use a sampling plan to determine which lots to 100% inspect. To ensure that bad lots are routinely rejected, one might require that the sampling plan's LTPD be equal to the Spec-AQL. This ensures that lots exceeding the Spec-AQL are routinely rejected. However, making the LTPD equal to the Spec-AQL risks rejecting some lots below the Spec-AQL. For a Spec-AQL of 1.0%, the single sampling plan n=230 and a=0 might be used. This plan has an LTPD of 1.0% and AQL of 0.025%. This plan has a sizable chance of rejecting a 0.5% defective lot (68% chance) even though the lot is below the Spec-AQL of 1%.
Example 2: This same sampling plan might also be used to inspect the initial lots from a new process for which there is no prior history. Before reduced levels of inspection are implemented, it should be demonstrated that the process regularly produces lots below the Spec-AQL. This can be accomplished by using a sampling plan whose LTPD is equal to the Spec-AQL on the first three lots. If all three lots pass, one has demonstrated that the process consistently produces lots below the LTPD. One can state with 90% confidence that each lot's defect level is below the Spec-AQL of 1%.
There is a simple formula for determining the sample size for such validation studies. Assuming that an accept number of 0 is used, then the sample size is: n = 230/Spec-AQL. Substituting 1% for the Spec-AQL in this formula gives n = 230/1% = 230. The sampling plan n=230 and a=0 was examined in Example 1. It has an LTPD of 1.0% and AQL of 0.025%. While passing this plan demonstrates the lot is below the Spec-AQL, failing this plan does not necessarily indicate the lot is bad. A 0.5% defective lot, while meeting the Spec-AQL, is likely to fail this sampling plan. In many cases, other sampling plans are more appropriate. Suppose the process is expected to yield around 0.2% defective. In this case a sampling plan with an AQL of 0.2% and LTPD of 1% would be more appropriate. Using the software accompanying Taylor (1992), the resulting plan is n=667 and a=3. This plan has an actual LTPD of 1.0% and AQL of 0.21%.
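The n = 230/Spec-AQL rule falls out of the a=0 OC curve. For accept-on-zero plans Pa = (1-p)^n, and setting Pa = 0.10 at the target LTPD gives n = ln(0.10)/ln(1-p), which is approximately 230 divided by the LTPD expressed in percent. A quick check (the helper name is illustrative):

```python
from math import ceil, log

def n_for_ltpd_a0(ltpd):
    """Smallest sample size with accept number a=0 whose LTPD does not
    exceed the target: solve (1 - ltpd)**n <= 0.10 for n."""
    return ceil(log(0.10) / log(1 - ltpd))

n = n_for_ltpd_a0(0.01)
print(n)  # 230, matching n = 230/Spec-AQL for a 1% Spec-AQL

# The price of setting the LTPD equal to the Spec-AQL: a 0.5% defective
# lot, though acceptable, is rejected whenever any of the 230 units fails.
p_reject = 1 - (1 - 0.005) ** n
print(f"{p_reject:.0%}")  # about 68%, the figure quoted in Example 1
```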
Example 3: Having validated the process as consistently being below the Spec-AQL, future lots can be inspected to ensure highly defective lots are not released. It is decided that the sampling plan should ensure lots worse than 5% defective are not released. This requires a sampling plan with an LTPD of 5%. The sampling plan should also ensure lots below the Spec-AQL (1%) are released, i.e., the sampling plan's AQL should be equal to the Spec-AQL. Table 1, reproduced from Taylor (1992), gives a variety of sampling plans indexed by their AQLs and LTPDs. The plans are given in the form "n/(a,a+1)" where "n" is the sample size and "a" the accept number. The single sampling plan n=125 and a=3 is the closest match. It has an LTPD of 5.27% and an AQL of 1.10%. This makes the release of lots worse than 5% defective unlikely. This plan is statistically valid for this purpose.
Example 4: Now suppose that we have a process that has run for six months. During this time 10,000 units were inspected with only 3 defects found. Based on these results, one can state with 95% confidence that the defect level is below 780 defects per million or 0.078% defective. Is this enough data to justify ceasing the inspection? While this is good data to have, the above statement only applies to the previous six months. What about the next six months? From a reliability point of view, 6 months without a major process breakdown allows one to state with 95% confidence, that the mean time between failure (MTBF) exceeds two months. It is possible that 2 or 3 major process breakdowns could occur over the next six months. Acceptance sampling may still be required. But now the focus changes to the detection of major process failures.
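The roughly 780-per-million figure is an exact binomial (Clopper-Pearson style) upper confidence bound, which can be reproduced by bisection with nothing beyond the standard library. This is a sketch under that assumption; the function name is mine:

```python
from math import comb

def upper_bound(defects, n, conf=0.95):
    """Exact binomial upper confidence bound on the fraction defective:
    the p at which the chance of seeing `defects` or fewer defectives
    in n units falls to 1 - conf."""
    def tail(p):
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(defects + 1))
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if tail(mid) > 1 - conf:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p_upper = upper_bound(3, 10_000)
print(f"{p_upper * 1e6:.0f} defects per million")
```

This prints a value in the mid-770s per million, consistent with the roughly 780 per million quoted in the text.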
Suppose that one failure mode for a liquid filling process is a broken filter. Past experience indicates that such a failure results in a defect rate of 15% or greater. The single sampling plan n=13 and a=0, with an LTPD of 16.2% and AQL of 0.4%, is statistically valid for this purpose. The LTPD of 16.2% defective ensures that if the failure mode occurs, the resulting 15% defective lots are rejected.
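The claim that the n=13, a=0 plan routinely rejects broken-filter lots can be checked directly from the accept-on-zero OC curve, Pa = (1-p)^n:

```python
# The n=13, a=0 plan rejects a lot unless all 13 sampled units are good.
# Chance of catching a broken-filter lot running 15% defective:
p_detect = 1 - (1 - 0.15) ** 13
print(f"{p_detect:.0%}")  # 88%

# The plan's LTPD (the quality rejected 90% of the time):
ltpd = 1 - 0.10 ** (1 / 13)
print(f"{ltpd:.1%}")  # 16.2%, as quoted in the text
```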
Sampling plans are not required for all failure modes. Pressure gages on both sides of the filter could instead be used to detect a broken filter. Potential failure modes can be identified using a tool called failure modes and effects analysis (FMEA). One should then ask the question, "For which failure modes is the sampling plan the only means of detecting the failure?" For these failure modes acceptance sampling plans should be selected which provide a high probability of detecting such a failure should one occur.
Example 5: When washing surgical gowns, one potential failure mode is not adding detergent. A single unit can be tested for detergent residue. This amounts to using the single sampling plan n=1 and a=0 with an AQL of 5% and LTPD of 90%. By themselves these numbers offer little protection. However, since this failure mode results in a 100% defective lot, the single sample is certain to detect it, making this a statistically valid sampling plan for this failure mode.
Example 6: Finally, one might have a proven process for which extensive mistake proofing and other controls have been implemented so that the likelihood of a process failure going undetected is low. At this point, one could consider eliminating acceptance sampling altogether. However, one might desire to continue collecting at least some data on a routine basis so that the process average can be trended. Whenever data is collected from a process, no matter what the reason, a formal procedure should be in place for accepting and rejecting product.
Suppose we have such a process and the primary reason the process is running so well is that it is being control charted using five samples every hour. Suppose that one day a sample of five was collected where all five units were defective. Obviously, this lot should be rejected. Despite the good process history, mistake proofing and other controls, this sample provides conclusive evidence that the process has failed. Requiring a formal procedure for accepting and rejecting lots, whenever data is collected from the process, ensures no lot is released when there is conclusive evidence that it is bad. The single sampling plan n=5 and a=0 with an AQL of 1.0% and LTPD of 37% defective is valid for this purpose. Passing this plan does not ensure the lot is good. However, failing this plan is strong evidence that the process is above 1% defective and that a failure has occurred. When justifying this plan, one should acknowledge that passing this plan does not ensure the lot is good. Past performance, mistake proofing, and other controls ensure good quality. The objective of the sampling plan is to ensure obviously bad lots are rejected. To this end, n=5 and a=0 is a valid plan.
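The quoted AQL of 1.0% and LTPD of 37% for n=5, a=0 follow from the closed-form OC curve of any accept-on-zero plan:

```python
# For any accept-on-zero plan the OC curve is Pa = (1 - p)**n,
# so set Pa to 0.95 and 0.10 and solve for p directly.
n = 5
aql  = 1 - 0.95 ** (1 / n)
ltpd = 1 - 0.10 ** (1 / n)
print(f"AQL = {aql:.1%}, LTPD = {ltpd:.0%}")  # AQL = 1.0%, LTPD = 37%
```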
Other examples of this use of acceptance sampling include the inspection of 3 units per supplier lot and the functional testing of a single unit per lot. Such sampling plans may detect a problem. However, acceptance of a lot does not ensure the lot is good. They are probably best viewed as an audit of the process. However, such procedures can be justified based on past performance, other controls in place, and other uses of the data including trending.
Example 7: Suppose that following a 100% inspection, the single sampling plan n=13 and a=0 is used. This sampling plan has an LTPD of 16.2% defective. The fact that a 100% inspection is being performed indicates that the process cannot be relied on. Is an LTPD of 16.2% sufficient? It is important to recognize that the 100% inspection is the critical operation. The follow-up 13 samples serve primarily as a check on the effectiveness of the 100% inspection. While ideally the 100% inspection should eliminate all defective units, in practice many 100% inspections only eliminate 90% or even 80% of the defectives. Further, the number of defective units missed can increase due to fatigue, line speed and other factors. Since the sampling plan does not provide sufficient protection to demonstrate that the 100% inspection is working properly for each lot, it is essentially an audit of the 100% inspection process. The results obtained should be trended or accumulated over time to determine the effectiveness of the 100% inspection. Further, before relying on 100% inspection, initial data should have been gathered to determine the effectiveness of the 100% inspection. This data will help justify the low level of inspection following the 100% inspection.
Once the purpose of the n=13 and a=0 sampling plan is clarified, one should also consider if there is a better way of ensuring our objective is met. For example, challenging the 100% inspection with known defects on an hourly basis might be more effective at ensuring the 100% inspection remains effective.
Using a sampling plan from a recognized source such as Mil-Std-105E does not ensure that the sampling plan is statistically valid. One of the sampling plans in Mil-Std-105E requires 2 samples and accepts on 30 defects. This sampling plan would not be valid for the inspection of critical defects on a medical device. Such standards contain a wide variety of sampling plans as they try to accommodate a wide variety of industries and products. It is up to the user of these standards to ensure they are using an appropriate plan. Different standards use different strategies for indexing their sampling plans. But, regardless of the source of the sampling plan and how it is indexed, it is the actual AQL and LTPD of the sampling plan that describes its protection and that determines whether it is valid.
To select a statistically valid sampling plan, first, the objective of the inspection should be determined based on past performance, other controls that are in place, potential failure modes and so on. Then the AQL and LTPD of the sampling plan should be documented to demonstrate that the sampling plan meets this objective. Further, since different sampling plans may be statistically valid at different times during the life of a process, all sampling plans should be periodically reviewed. If you don't know the protection provided by your sampling plans, a good first step is to document the AQLs and LTPDs of all of your sampling plans. Then for each sampling plan you should ask, "Is the protection appropriate based on past performance and current controls?"
Presented at 1996 Annual Quality Congress, American Society for Quality
Appeared in Quality Engineering, Volume 10, Number 2, p. 365-370, 1997, ASQ and Marcel Dekker
Copyright © 1996,1997 Taylor Enterprises, Inc.