

Design Issues in Mechanical Tolerance Analysis

ADCATS Report No. 87-5
October 26, 1987

K. W. Chase
Mechanical Engineering Department
Brigham Young University
Provo, UT 84602

W. H. Greenwood
Sandia National Laboratories
Albuquerque, NM 87185


Reprinted from Manufacturing Review, ASME, Vol. 1, No. 1, March 1988, pp. 50-59.



ABSTRACT

Tolerance analysis is a valuable tool for reducing manufacturing costs by improving producibility. Several useful methods of selecting design tolerances are presented with examples. The common and more advanced tolerance analysis methods are also reviewed and evaluated. A simple new tolerance analysis model suitable for designers is described as an alternative to the advanced methods. It is much more flexible than the common engineering methods. For example, it can mix statistical and worst case components in the same assembly. Also, it includes a critical manufacturing variable that is often overlooked: "nominal shifts" or biased distributions.

1. Manufacturing Considerations in Engineering Design.

The wise specification of dimensional tolerances for manufactured parts is becoming recognized by industry as a key element in its efforts to increase productivity. Modest efforts in this area can yield significant cost savings with little capital investment. It is a prime example of the success that results from including manufacturing considerations early in the design process. Both engineering design and manufacturing personnel are concerned with the magnitude of tolerances specified on engineering drawings, as shown in figure 1.

Fig. 1. Assignment of tolerances concerns both
Engineering and Manufacturing

Engineers know that tolerance stacking or accumulation in assemblies controls the critical clearances and interferences in a design, such as lubrication paths or bearing mounts, and thus affects performance. Production people know that tight tolerances increase the cost of production. Tolerances also greatly influence the selection of production processes by process planners and determine the assemblability of the final product.

Tolerance specification, then, is an important link between engineering and manufacturing. It can become a common ground on which to build an interface between the two, to open a dialog based on common interests and competing requirements.

However, designers often assign tolerances arbitrarily or base their decisions on insufficient data or deficient models. Any resulting problems must be corrected as they arise during manufacturing planning, tooling and production. Clearly, today's high tech products and growing international competition require knowledgeable design decisions based on realistic models which include producibility requirements. Hence, several issues relative to tolerance specification methods are raised:

  1. How can we get Engineering and Manufacturing to communicate their needs effectively?
  2. Which tolerance analysis models are both realistic and applicable as design tools?
  3. What role should advanced statistical and optimization methods play?
  4. How can we get sufficient data on process distributions and costs to characterize manufacturing processes for advanced tolerance analysis models?

In the following discussion, several useful tolerance design tools are described with examples, some of which have not appeared in print before. Some of the limitations of the common engineering models for tolerance analysis are pointed out. In response to these limitations, a simple new model suitable for designers is presented, which has greatly increased flexibility and permits a more realistic representation of actual manufactured parts. Finally, advanced tolerance analysis methods are reviewed, with an evaluation of their potential for use in design.

2. Tolerance Analysis vs. Tolerance Allocation.

A central issue in tolerance specification is that engineers are more commonly faced with the problem of tolerance allocation rather than tolerance analysis. The difference between these two problems is illustrated in figure 2. In tolerance analysis the component tolerances are all known or specified and the resulting assembly tolerance is calculated. In tolerance allocation, on the other hand, the assembly tolerance is known from design requirements, while the component tolerances are unknown. The available assembly tolerance must be distributed or allocated among the components in some rational way. Analytical modeling of assemblies provides a quantitative basis for evaluation of design variations. The influence of the assembly model and the allocation rule used by the designer on the resulting tolerance allocation will be demonstrated.

Fig. 2. Tolerance Analysis vs. Tolerance Allocation.

3. Common Engineering Models for Assembly Tolerances.

If the manufacturing process for a part is known, such as turning or stamping, reasonable tolerances may be selected by following tolerance guidelines for the process. Company design manuals and industry standards also provide useful data. Tolerance build-up in assemblies may then be predicted by tolerance analysis. The basis of tolerance analysis in design is an analytical model for the accumulation of tolerances in a mechanical assembly of component parts. The two most common models used in engineering design are briefly defined below. A more complete treatment may be found in Fortini [1].

a. Worst case.

In a worst limits analysis, the assembly tolerance (TASM) is determined by summing the component tolerances (Ti) linearly. Each component dimension is assumed to be at its max. or min. limit, resulting in the worst possible assembly limits.

One-dimensional assemblies:

TASM = Σ Ti     (1)

Multi-dimensional assemblies:

TASM = Σ |∂f/∂Xi| Ti     (2)

where Xi are the nominal component dimensions and f (Xi) is the assembly function describing the resulting dimension of the assembly, such as the clearance or interference. The partial derivatives represent the sensitivity of the assembly tolerance to variations in individual component dimensions.

b. Statistical.

Component tolerances add as the root sum squared (RSS). The low probability of the worst case combination occurring is taken into account statistically, assuming a Normal or Gaussian distribution for component variations. Tolerances are commonly assumed to correspond to six standard deviations (6σ).

One-dimensional assemblies:

TASM = [Σ Ti²]^½     (3)

Multi-dimensional assemblies:

TASM = [Σ (∂f/∂Xi)² Ti²]^½     (4)

More general case (other than 6σ tolerance distributions):

TASM = Cf Z [Σ (∂f/∂Xi)² (Ti/Zi)²]^½     (5)

where Z is the number of standard deviations desired for the specified assembly tolerance and Zi describes the expected standard deviations for each component tolerance. Cf is a correction factor frequently added to account for any non-ideal conditions. Typical values for Cf are 1.4 or 1.5.
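The stack-up formulas above are simple enough to script. The sketch below is illustrative Python, not part of the paper; the function names are arbitrary. It evaluates equations 1, 3 and 5 for a one-dimensional stack, where the sensitivities ∂f/∂Xi are ±1, using the component tolerances of the shaft and housing example of Example 1.

from math import sqrt

def worst_case(tols):
    """Equation 1: linear (worst limit) sum of the component tolerances."""
    return sum(tols)

def rss(tols):
    """Equation 3: root sum squared (statistical) assembly tolerance."""
    return sqrt(sum(t**2 for t in tols))

def rss_general(tols, Z=3.0, Zi=None, Cf=1.0):
    """Equation 5: statistical sum when component tolerances represent
    different numbers of standard deviations.
    Z  -- standard deviations desired for the assembly tolerance (3 for a 6-sigma spread)
    Zi -- standard deviations represented by each component tolerance
    Cf -- correction factor for non-ideal conditions (typically 1.4 to 1.5)"""
    Zi = Zi if Zi is not None else [3.0] * len(tols)
    return Cf * Z * sqrt(sum((t / z)**2 for t, z in zip(tols, Zi)))

# Component tolerances of the shaft and housing example (parts A through G)
tols = [.0015, .008, .0025, .002, .006, .002, .0025]
print(worst_case(tols))            # 0.0245 -- the worst limit sum of Example 1
print(rss(tols))                   # about 0.0111 -- comfortably inside .015
print(rss_general(tols, Cf=1.5))   # the same RSS sum inflated by Cf = 1.5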

These common tolerance accumulation models have serious limitations, which will be discussed later.

4. Tolerance Allocation Methods

The rational allocation of component tolerances requires the establishment of some rule upon which to base the allocation. The following are examples of useful rules:

a. Allocation By Proportional Scaling.

The designer begins by assigning reasonable component tolerances based on process or design guidelines. He then sums the component tolerances to see if they meet the specified assembly tolerance. If not, he scales the component tolerances by a constant proportionality factor. In this way the relative magnitudes of the component tolerances are preserved.

Example 1. Worst Case Allocation by Proportional Scaling.

The following example is based on the shaft and housing assembly shown in figure 3. Initial tolerances for parts B, D, E, and F are selected from tolerance guidelines for the turning process, such as figure 4 [2].

Fig. 3. Shaft and housing assembly.

 

Fig. 4. Tolerance range of machining processes.

Tolerances are chosen from the middle of the range for each part size. The retaining ring (A) and the two bearings (C and G) supporting the shaft are vendor-supplied, hence their tolerances are fixed and must not be altered by the allocation process. The critical clearance is the shaft end-play, which is determined by tolerance accumulation in the assembly. The vector diagram overlaid on the figure is the assembly loop that controls the end-play. The average clearance is the vector sum of the average part dimensions in the loop:

Initial tolerance specifications:

Required Clearance = .020 +/- .015

Average Clearance  = -A + B - C + D - E + F - G
                   = -.0505 + 8.000 - .5093 + .400 - 7.711 + .400 - .5093
                   = .020

Dimension          A        B        C        D        E        F        G
Average            .0505    8.000    .5093    .400     7.711    .400     .5093
Design Tol. (+/-)           .008              .002     .006     .002
Fixed Tol. (+/-)   .0015             .0025                               .0025

The clearance tolerance is obtained by computing the assembly tolerance sum by worst limits:

TASM = TA + TB + TC + TD + TE + TF + TG
     = .0015 + .008 + .0025 + .002 + .006 + .002 + .0025
     = .0245   (too large)

Solving for the proportionality factor P:

TASM = .015 = (.0015 + .0025 + .0025) + P (.008 + .002 + .006 + .002)
P = .47222

Note that the fixed tolerances were subtracted from the assembly tolerance before computing the scale factor. Thus only the four design tolerances are re-allocated:
TB= .47222 (.008) = .00378

TD= .47222 (.002) = .00094

TE= .47222 (.006) = .00283

TF= .47222 (.002) = .00094

Each of the design tolerances has been scaled down to meet assembly requirements as shown in figure 5. This procedure could also be followed assuming a statistical sum for the assembly tolerance (equation 3), in which case the tolerances would be scaled up. The results are summarized in Table 1.
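A minimal sketch of the proportional scaling calculation of Example 1 (illustrative Python, not the paper's code): the fixed vendor tolerances are removed from the assembly budget first, and only the design tolerances are scaled.

from math import sqrt

def scale_proportional(design, fixed, t_asm, statistical=False):
    """Return the scaled design tolerances and the proportionality factor P."""
    if statistical:                     # RSS budget, equation 3
        budget = sqrt(t_asm**2 - sum(t**2 for t in fixed))
        P = budget / sqrt(sum(t**2 for t in design))
    else:                               # worst limit budget, equation 1
        budget = t_asm - sum(fixed)
        P = budget / sum(design)
    return [P * t for t in design], P

design = [.008, .002, .006, .002]       # parts B, D, E, F (turned parts)
fixed  = [.0015, .0025, .0025]          # parts A, C, G (vendor supplied)

scaled, P = scale_proportional(design, fixed, .015)
print(P)        # about .47222
print(scaled)   # about [.00378, .00094, .00283, .00094]

scaled, P = scale_proportional(design, fixed, .015, statistical=True)
print(P)        # about 1.395 -- the statistical sum loosens the tolerances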

b. Allocation By Constant Precision Factor

Parts machined to a similar precision will have equal tolerances only if they are the same size. As part size increases, tolerances generally increase approximately with the cube root of size [1]:

Ti = P Di^(1/3)     (6)

where Di is the basic size of the part and P is the Precision Factor.

Fig. 5. Tolerance allocation by proportional scaling.

Based on this rule of thumb, the tolerances can be distributed according to part size as follows. Compute the Precision Factor:

Worst Limits:   P = (TASM - Σ Tfix) / Σ Di^(1/3)
Statistical:    P = [TASM² - Σ Tfix²]^½ / [Σ Di^(2/3)]^½     (7)

where the Tfix are the fixed tolerances and the sums over Di are taken over the parts whose tolerances are being allocated. Then compute the component tolerances:

T1 = P D1^(1/3),  T2 = P D2^(1/3),  etc.

Example 2. Statistical Allocation by Precision Factor.

Compute the assembly tolerance for the shaft/housing assembly by a statistical sum:
TASM² = TA² + TB² + TC² + TD² + TE² + TF² + TG²

.015² = (.0015² + .0025² + .0025²) + P² (8.0^(2/3) + .400^(2/3) + 7.711^(2/3) + .400^(2/3))

Again, the fixed tolerances are subtracted from the assembly tolerance before computing the precision factor.

Solving for the precision factor: P = .004836

Re-allocating:
TB = .004836 (8.00)^(1/3) = .00967

TD = .004836 (.400)^(1/3) = .00356

TE = .004836 (7.711)^(1/3) = .00955

TF = .004836 (.400)^(1/3) = .00356

The Precision Factor method is similar to the Proportional Scaling method, except there is no initial allocation required by the designer. Instead, the tolerances are initially allocated according to the nominal size of each component dimension, then scaled to meet the specified assembly tolerance. This procedure could also be followed assuming a worst limits sum for the assembly tolerance (equation 1). The results are summarized in Table 1.

Table 1. Comparison of Allocation Methods

                         Proportional              Precision Factor
Part    Original       Worst       Stat         Worst       Stat
        Tolerance      Case        6σ           Case        6σ
A       .0015*         .0015       .0015        .0015       .0015
B       .008           .00378      .01116       .00312      .00967
C       .0025*         .0025       .0025        .0025       .0025
D       .002           .00094      .00279       .00115      .00356
E       .006           .00283      .00837       .00308      .00955
F       .002           .00094      .00279       .00115      .00356
G       .0025*         .0025       .0025        .0025       .0025
ASSEMBLY TOL.          .0150       .0150        .0150       .0150
PROP. FACTOR           .47222      1.39526      .00156      .004836

*Fixed tolerances
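The Precision Factor columns of Table 1 can be reproduced with a short routine such as the following sketch (illustrative Python, not the paper's code), which applies equations 6 and 7 to the nominal sizes of parts B, D, E and F.

from math import sqrt

def precision_factor(sizes, fixed, t_asm, statistical=False):
    """Return P and the allocated tolerances Ti = P * Di**(1/3), equation 6."""
    if statistical:
        P = sqrt((t_asm**2 - sum(t**2 for t in fixed)) /
                 sum(d**(2.0/3.0) for d in sizes))
    else:
        P = (t_asm - sum(fixed)) / sum(d**(1.0/3.0) for d in sizes)
    return P, [P * d**(1.0/3.0) for d in sizes]

sizes = [8.000, .400, 7.711, .400]      # nominal sizes of parts B, D, E, F
fixed = [.0015, .0025, .0025]           # vendor tolerances of parts A, C, G

P, tols = precision_factor(sizes, fixed, .015, statistical=True)
print(P)      # about .004836
print(tols)   # about [.00967, .00356, .00955, .00356] -- Example 2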

5. Tolerance Allocation Using Optimization Techniques

A promising method of tolerance allocation uses optimization techniques to assign component tolerances such that the cost of production of an assembly is minimized. This is accomplished by defining a cost-vs.-tolerance curve for each component part in the assembly. The optimization algorithm varies the tolerance for each component and searches systematically for the combination of tolerances which minimizes the cost.

Figure 6 illustrates the concept simply for a three component assembly. Three cost-vs.-tolerance curves are shown. Three tolerances (T1, T2, T3) are initially selected. The corresponding cost of production is C1 + C2 + C3. The optimization algorithm tries to increase the tolerances in order to reduce cost; however, the specified assembly tolerance limits how far they can be increased. If tolerance T1 is increased, then tolerance T2 or T3 must decrease to keep from violating the assembly tolerance constraint. It is difficult to tell by inspection which combination will be optimum. The optimization algorithm is designed to find it with a minimum of iteration. Note that the set of optimum tolerances will be different when the tolerances are summed statistically than when they are summed by worst limits.

a. Cost-vs.-Tolerance Functions.

The key factor in optimum tolerance allocation is the specification of cost-vs.-tolerance functions. Several algebraic functions have been proposed, as summarized in Table 2. The constant coefficient A may include setup cost, tooling, material, prior operations, etc. The B term determines the cost of producing a single component dimension to a specified tolerance.

Fig. 6. Optimal tolerance allocation for minimum cost.

Table 2. Proposed Cost-of-Tolerance Models

Cost Model            Function         Author              Ref
Reciprocal Squared    A + B/tol²       Spotts              [3]
Reciprocal            A + B/tol        Chase & Greenwood   [4]
Exponential           A e^(-B·tol)     Speckhart           [5]

Little has been done to verify the form of these curves. Manufacturing cost data are not published since they are so site-dependent. Even companies using the same machines would have different costs for labor, materials, tooling and overhead.

Jamieson [6] reported a government study in which relative costs were determined for actual parts for several metal-removal processes. This appears to be the same data presented as a case study by Trucks [2]. Jamieson correlated the process/cost results with the tolerance-vs.-size chart of figure 4. These data were curve-fit by regression analysis using each of the models shown in Table 2. Typical results are shown in figure 7. The reciprocal tolerance curve appears to fit the machining process data the best.

Fig. 7. Comparison of cost-vs.-tolerance models.

b. Tolerance Allocation by Lagrange Multipliers

A closed-form solution for the least-cost component tolerances was developed by Spotts [3]. He used the method of Lagrange Multipliers, assuming a cost function of the form C=A+B/tol². Chase and Greenwood [4] extended this to cost functions of the form C=A+B/tol as follows:

Eliminating λ by expressing it in terms of T1 gives, for a worst limit assembly sum,

Ti = T1 (Bi/B1)^½     (8)

Substituting into the assembly tolerance sum:

T1 = (TASM - Σ Tfix) / Σ (Bj/B1)^½     (9)

Substitute this result in equation 8 to obtain the minimum cost tolerances. The numerical results for the example problem are shown in Table 3. The Setup Cost is coefficient A in the cost function. The Reference Cost and Reference Tolerance are used to compute coefficient B.

Table 3. Minimum Cost Tolerance Allocation

                                               Allocated Tolerances
          Tolerance Cost Data                  A + B/tol               A + B/tol²
Part    Setup    Reference   Reference     Worst      Stat.       Worst      Stat.
        Cost     Cost        Tolerance     Case       6σ          Case       6σ
A       $ .15    -           .0015*        .0015      .0015       .0015      .0015
B       4.75     $1.20       .008          .00241     .00789      .00305     .00936
C       2.50     -           .0025*        .0025      .0025       .0025      .0025
D       2.80     2.75        .002          .00207     .00711      .00159     .00576
E       4.50     0.88        .006          .00194     .00683      .00227     .00750
F       2.80     2.75        .002          .00207     .00711      .00159     .00576
G       2.50     -           .0025*        .0025      .0025       .0025      .0025
Assembly Cost                $27.58        $28.98     $26.05      $43.11     $22.10
Acceptance Fraction                        1.000      .9973       1.000      .9973
True Cost                                  $28.98     $26.12      $43.11     $22.16

*Fixed tolerance

Parts A, C and G are vendor-supplied. Since their tolerances are fixed, their cost cannot be changed by re-allocation, so no reference cost was required.
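For readers who prefer to see the closed-form allocation spelled out, the following sketch (illustrative Python, not the paper's code) applies the Lagrange-multiplier result for the reciprocal cost model C = A + B/T with a worst-limit stack. The B coefficients shown are assumed values for illustration only; the paper computes them from the reference cost data, and the tolerances produced here are not intended to reproduce Table 3 exactly.

from math import sqrt

def allocate_reciprocal_worst(B, fixed, t_asm):
    """Least-cost tolerances for cost = A + B/T with a worst limit sum.
    Setting d/dTi [sum(Bj/Tj) + lam*(sum(Tj) - budget)] = 0 gives
    Ti proportional to sqrt(Bi); the constraint fixes the multiplier."""
    budget = t_asm - sum(fixed)               # design tolerance budget
    k = budget / sum(sqrt(b) for b in B)      # common multiplier from the constraint
    return [k * sqrt(b) for b in B]

# Assumed (illustrative) B coefficients for parts B, D, E, F
B_coeff = [.0096, .0055, .00528, .0055]
fixed   = [.0015, .0025, .0025]               # vendor tolerances of A, C, G

tols = allocate_reciprocal_worst(B_coeff, fixed, .015)
print(tols)                      # tolerances proportional to sqrt(Bi), summing to .0085
print(sum(tols) + sum(fixed))    # 0.015 -- meets the specified assembly tolerance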

The "True Cost" is just the total cost of the assembly divided by the acceptance fraction or yield. Thus, the cost is adjusted to include a share of the cost of the rejected assemblies. It does not include any parts which might be saved by re-work or the cost of rejecting individual component parts. Vendor-supplied parts may be excluded by assigning them a zero setup cost. An interesting exercise is to calculate the optimum acceptance fraction, that is, the rejection rate which would result in the minimum True Cost. This requires an iterative solution. For the example problem, the results are shown in Table 4:

Table 4. Minimum True Cost

Cost Model     True Cost    Acceptance Fraction    Z
A + B/tol      26.10        .99477                 2.789
A + B/tol²     21.71        .97066                 2.178

The corresponding component tolerances would increase by the constant factor Zi/Z.

The advantages of the Lagrange multiplier method are:

  1. It eliminates the need for iterative solutions,
  2. It can handle either worst limit or statistical assembly models,
  3. It allows alternative cost-tolerance models.

The limitations are:

  1. Tolerance limits cannot be imposed on the processes. Most processes are only capable of a specified range of tolerance.

  2. It cannot readily treat the problem of simultaneously optimizing interdependent tolerance stacks. That is, when an assembly has more than one tolerance stack, with common component dimensions appearing in each stack, an iterative solution of a nonlinear set of equations is required.

Problems which exhibit these characteristics may be optimized using nonlinear programming techniques.

6. Limitations of Common Assembly Models

The common models for assembly tolerance accumulation have distinct limitations when applied to tolerance allocation:

  1. The Worst Limit model results in component tolerances which are tight and costly to produce.

  2. Statistical models allow looser tolerances, but often predict higher assembly yields than actually occur in production.

  3. For assemblies with a small number of components, or with one component tolerance much greater than the rest, the common Statistical model can give component tolerances which are tighter than those computed by a Worst Limit model.

  4. Statistical models assume manufacturing variations follow a Normal or classic bell-shaped distribution, symmetrically positioned at the midpoint of the tolerance limits. They do not take into account possible skewness or bias which are common in manufactured parts. Figure 8 illustrates the unexpected rejects which can occur when skewness and bias are not accounted for.

In an earlier paper by the authors, these limitations were discussed with several examples [7].

Bias results in a shift in the nominal dimension. It is particularly harmful, since it can accumulate in an assembly and cause unexpectedly high rejection rates. Virtually all manufacturing processes exhibit bias. Some processes are more prone to bias than others. Bias can occur from tooling or fixture errors, setup errors or tool wear. It may be deliberately introduced during setup to compensate for tool wear or to allow for re-work. It may naturally occur in a process as in thermal shrinkage of molded parts. Bias is as critical a factor in an assembly model as is the process capability or variance.

Fig. 8. Ideal vs. actual assembly distributions.

7. Estimated Mean Shift Model

A new model for assembly tolerance accumulation has been proposed which includes an estimate of expected bias. It is called the Estimated Mean Shift method because the designer must estimate the bias for each component in an assembly. This is done by defining a zone about the midpoint of the tolerance range, as shown in figure 9, which is the probable location of the mean of a typical batch of parts. The midpoint tolerance zone is expressed as a fraction of the specified tolerance range for the part dimension (a number between 0 and 1.0). If the process to be used to produce the part is closely controlled, a low mean shift factor may be selected, say 0.1 to 0.2. For less well-known processes, such as a part supplied by a new vendor, a larger factor, say 0.7 or 0.8, could be selected to account for the uncertainty. For common processes the factor could be selected on the basis of prior history from quality assurance data.

Fig. 9. The location of the mean is not known precisely.

Once the range of mean shift has been estimated for each component the assembly tolerance is calculated using the following model:

TASM = Σ mi |∂f/∂Xi| Ti + [Σ (1 - mi)² (∂f/∂Xi)² Ti²]^½     (10)

where mi is the mean shift factor for the ith component. The assembly tolerance in equation 10 has been split into two summations. The first term is the sum of the mean shifts added as a worst limit. The second term is the sum of the component tolerances added statistically. Thus we have the contributions of both mean shift (bias) and part tolerance (variance) to the resulting assembly tolerance. A more complete discussion of this model, with examples, may be found in reference 7.

Note the special case when all the mean shifts are chosen as zero. The resulting assembly tolerance reduces to the simple statistical model (equation 4). If all mean shifts are chosen equal to 1.0, it reduces to a worst limit model (equation 2). Thus, the Estimated Mean Shift model can simulate the entire continuum between these two extremes, as shown in figure 10.
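A sketch of equation 10 for a linear one-dimensional stack (illustrative Python, not the paper's code; the shift factors used are arbitrary) shows how the model spans the two classical extremes:

from math import sqrt

def mean_shift_stack(tols, shifts):
    """Assembly tolerance with a mean shift factor mi (0 to 1) for each component."""
    worst_part = sum(m * t for m, t in zip(shifts, tols))                         # biased means, worst limit
    stat_part  = sqrt(sum(((1.0 - m) * t)**2 for m, t in zip(shifts, tols)))      # remaining variation, RSS
    return worst_part + stat_part

tols = [.0015, .008, .0025, .002, .006, .002, .0025]   # shaft/housing example, parts A..G

print(mean_shift_stack(tols, [0.0] * 7))   # about 0.0111 -- pure statistical (equation 3)
print(mean_shift_stack(tols, [1.0] * 7))   # 0.0245 -- pure worst limit (equation 1)
print(mean_shift_stack(tols, [0.2] * 7))   # in between: a 20 percent mean shift on every part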

Other advantages of the Estimated Mean Shift model include:

  1. Full flexibility to mix shift factors in an assembly. Some parts may be nearly worst case, while others nearly straight statistical. There is no need to penalize an entire assembly with a worst limit analysis because of one poorly controlled component.

  2. Thermal expansion, which must be treated as worst limit, may now be included in a statistical assembly analysis.

  3. Early in the design stage, when little manufacturing data are available, conservative shift factors may be assigned. Later, during production, as data becomes available, manufacturing systems analysts may substitute more precise values. This may allow tolerances to be loosened up so that production rates may be increased.

Fig. 10. Versatility of the Estimated Mean Shift Model

The Estimated Mean Shift model can provide a common ground for interaction between engineering and manufacturing. The engineer's attention is focused on manufacturing considerations and he must communicate with manufacturing to get the needed model data. He can convey the critical design parameters to manufacturing in a form that permits freedom to alter tolerances without violating design requirements. Manufacturing can communicate meaningfully to engineering in terms of the two quality assurance parameters most commonly used in statistical process control: mean and variance.

8. Advanced Statistical Tolerance Analysis Models

Advanced statistical methods are sometimes used for tolerance analysis because they permit non-Normal distributions as models of component variation. These methods can give much better estimates of the number of rejects than simple statistical analysis, when the component distributions are well-known non-Normal functions. However, they are not a convenient tool for tolerance allocation. A complete distribution for each component must be input into the assembly equation before the advanced models can predict the resulting assembly distribution and fraction of rejected assemblies. This is shown graphically in figure 11. By contrast, in tolerance allocation, the reject fraction is selected and the component tolerances are then scaled to meet the specified reject fraction.

Fig. 11. Advanced Parametric Tolerance Analysis requires
that distributions be selected for each component

Two advanced statistical methods which have been applied to tolerance analysis are Monte Carlo Simulation and the Method of Moments. These methods are explained thoroughly by Hahn & Shapiro [11], Evans [9, 10], and Shapiro & Gross [8]. The evaluation which follows is taken from a study by Greenwood [12].

Monte Carlo Simulation uses pseudo-random number generators to describe a wide variety of distribution shapes. A random dimension for each component is input into the assembly function. The value of the resultant assembly variable is determined and compared to the specified assembly limits. This process is then repeated many times and the count of rejected assemblies is divided by the total trials to estimate the fraction of rejected assemblies. In tolerance analysis, the permissible rejection fraction is usually quite small and large samples on the order of 10,000 or 100,000 are required to give accurate predictions of rejects. These large samples require significant computation time which is not practical at present on personal computers. The computer programs and algorithms to generate the random dimensions are not long, although caution must be exercised to use a non-repeating sequence for the base random number generator.

The Method of Moments uses the statistical moments of the component distributions and the first and second partial derivatives of the assembly function to find the first four moments of the assembly distribution. These four moments are used to find the parameters of a general distribution such as the Pearson system, the Johnson System or the Lambda distribution. With the parameters of a distribution determined, the fraction outside of the assembly limits is found from tables, numerical integration or in some cases by algebraic equations. A computer program to do tolerance analysis by the Method of Moments will be quite long and complex due to the need for numerical derivatives in most cases and the many series summations to get the assembly moments. However, once the program is complete, the computation time is moderate compared to Monte Carlo Simulation.

An alternate scheme, which requires a moderately complicated program with moderate computation time, is a blend of the Method of Moments and Monte Carlo Simulation. This Hybrid method uses Monte Carlo Simulation to generate a smaller number of assembly values, with a sample size usually on the order of 1000 to 5000. The resultant assembly dimensions are used to calculate the statistical moments of the assembly distribution and then to estimate the fraction of rejects. Most of the complexity of the Method of Moments is eliminated, since numerical derivatives and the series summations are not needed to find the assembly moments from the component moments. Since the sample size can be on the order of 1000, the computation is greatly reduced compared with the simple Monte Carlo simulation.

In the following examples, the advanced statistical methods are applied to the shaft and housing assembly previously shown in figure 3. To illustrate the non-Normal capability, the component distributions are assumed to be Uniform rather than Normal. Trial values must be selected for the unassigned tolerances in order to perform the analysis. The rejection fraction or the fraction of assemblies outside the limits is estimated for each case.

Arbitrarily let TB =TE = .0060, TD = TF = .0020 in all of the following examples.

The specified assembly clearance is: .020 +/-.015

which gives assembly limits of: max. = .035, min. = .005

a. Method of Moments

The resulting assembly moments yield the following statistics:

Mean= .020, Standard Deviation = .00562, Skewness= 0.0, Kurtosis= 2.639

The Kurtosis of the sum of Uniform distributions is seen to be approaching 3.0, which is the value for a Normal distribution, as predicted by statistical theory.
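These statistics can be checked directly, since the moments of a sum of independent Uniform components are known in closed form. The following sketch (illustrative Python, not part of the paper) reproduces the quoted standard deviation and kurtosis:

from math import sqrt

# Each component is Uniform over +/-Ti, so its variance is Ti**2/3 and its
# fourth cumulant is -(6/5)*(Ti**2/3)**2; cumulants of independent terms add.
tols = [.0015, .006, .0025, .002, .006, .002, .0025]   # parts A..G with TB = TE = .006, TD = TF = .002

var = sum(t**2 / 3.0 for t in tols)                    # second cumulant of the clearance
k4  = sum(-(6.0/5.0) * (t**2 / 3.0)**2 for t in tols)  # fourth cumulant of the clearance

mean     = .020                        # nominal clearance (distributions are symmetric)
std_dev  = sqrt(var)                   # about 0.00562
kurtosis = 3.0 + k4 / var**2           # about 2.64, approaching the Normal value of 3.0

print(mean, std_dev, kurtosis)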

Two different empirical distributions are fit to these moments, giving the following estimates of the number of rejected assemblies.

Table 5. Sample Results for Method of Moments

                        Rejects/1000
                    y > .035    y < .005
Johnson Bounded     1.44        1.44
Lambda              1.40        1.40

b. Monte Carlo Simulation

The random number generators require a starting or seed value. Different seeds and varying sample size will give different values for the assembly rejects.

Table 6. Sample Results for Monte Carlo Simulation

                             Rejects/1000
Sample Size    Seed      y > .035    y < .005
100,000        84602     1.54        1.27
100,000        17805     1.54        1.50
1,000          1136      2           1
1,000          12091     2           3
1,000          12178     0           0

The large sample simulations give rejection fractions that agree well with the Method of Moments. The small sample simulations do not agree as well and vary widely.
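A simulation of this kind is only a few lines of code. The sketch below (illustrative Python, not the paper's program) draws each component dimension from a Uniform distribution over its tolerance band and counts assemblies outside the clearance limits; because the random number generator differs from the one behind Table 6, the counts will agree only approximately.

import random

random.seed(84602)                       # seed value; results vary with seed and generator
N = 100_000

# nominal dimension and tolerance for parts A..G (TB = TE = .006, TD = TF = .002)
parts = [(.0505, .0015), (8.000, .006), (.5093, .0025), (.400, .002),
         (7.711, .006), (.400, .002), (.5093, .0025)]
signs = [-1, +1, -1, +1, -1, +1, -1]     # clearance = -A + B - C + D - E + F - G

high = low = 0
for _ in range(N):
    clearance = sum(s * random.uniform(d - t, d + t) for s, (d, t) in zip(signs, parts))
    if clearance > .035:
        high += 1
    elif clearance < .005:
        low += 1

print(1000.0 * high / N, 1000.0 * low / N)   # rejects per 1000, roughly 1 to 2 in each tail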

c. Hybrid

The Monte Carlo simulations of part (b) above were used to estimate the first four assembly moments. These were fitted to the Lambda distribution to yield the fraction of rejects.

Table 7. Sample Results for the Hybrid Method

               System Moments                                     Rejects/1000
Sample     Mean       St. Dev.    Skewness     Kurtosis      y > .035    y < .005
100,000    .02001     .005633     -.00275      2.630         1.49        1.48
100,000    .01998     .005614     -.00058      2.635         1.42        1.48
1,000      .01976     .005586     .05162       2.637         1.22        1.48
1,000      .01985     .005635     -.11791      2.643         0.99        2.55
1,000      .01992     .005493     -.03680      2.575         0.68        0.89

The various simulations give different estimates of the first four assembly moments and rejection fraction. The wide variation in the predicted rejects indicates that the accuracy of the Hybrid method also depends on sample size.

9. Design Limitations of the Advanced Methods

The following factors appear to seriously limit the application of the advanced statistical tolerance methods as design tools:

  1. In the early design stages, little information is available on distribution type. Even for standard items, large variations between different batches are frequently encountered. Quality control records of previous production usually just provide the percent yield or mean and standard deviation and not distribution shape.

  2. Advanced methods are computation-intensive, causing great difficulty when doing allocation or synthesis work rather than just analysis. In the example problem, tolerances for each component were selected and the fraction of rejects was calculated. If several components were unassigned and a permissible rejection fraction were to be matched, an iterative application of the advanced method would be needed. If very many design tolerances were to be determined, a multi-dimensional search in design space would result in many design iterations.

  3. Finally, the quality control methods, which many industries in this country are still struggling to accept and utilize, are based on the Normal distribution. These techniques do not utilize information on the third and fourth moments because of the large sample sizes required. Even if distribution shape were well known at design time, and this information were used to refine component tolerances, the existing quality control techniques may not be sufficiently robust to detect out-of-control conditions if only the higher moments are changing.

Therefore, the advanced methods do not appear suited for initial design at present, but may be applied during production as sufficient component data becomes available.

10. Conclusion -- Design Issues

The important design issues in mechanical tolerance analysis have been identified and briefly described. The most important issue is to recognize that design and manufacturing must become allies in producing competitive products to ensure survival in international markets. This may appear trite, but the authors' experience in practice and consulting indicates this is a major problem in many companies. The conflict that arises between design and manufacturing illustrates the need to communicate effectively.

The poor communication between design and manufacturing underscores the principal technical issue in mechanical tolerance analysis: the lack of a common definition of a tolerance. Debates over worst limit versus simple statistical tolerance models create conflict between design and manufacturing. The Estimated Mean Shift model is proposed as an enhanced model for tolerance analysis and a step toward a common definition of tolerance. Its use by both engineering and manufacturing will permit communication of the needs of each in terms of currently available quality parameters. It makes assumptions which are plausible to designers and which can be monitored in production. This new method must be proven in practice and modified or superseded as needed to obtain a single realistic meaning of tolerance to both designers and manufacturers.

The allocation or synthesis of tolerances in a design is a difficult matter and results in many iterations between design and manufacturing. The Proportional Scaling and Precision Factor methods are simple but crude guides. Cost optimization is very desirable, but it remains largely an academic pursuit since cost-vs.-tolerance curves are almost universally unavailable.

In order to make advanced tolerance analysis and optimization methods available for tolerance allocation by designers, we must use quality control techniques to determine process capabilities, track costs and then ultimately model machines and processes through design of experiments. This is an enormous task which will require cooperation across company and even industry boundaries.

11. Acknowledgment

This study was sponsored by the Association for the Development of Computer-Aided Tolerancing Software, which is a joint effort by Brigham Young University and ten industrial and government sponsors.

References

1. Fortini, E.T., Dimensioning for Interchangeable Manufacture, Industrial Press, 1967.

2. Trucks, H.E., Designing for Economical Production, 2nd ed., Society of Manufacturing Engineers, Dearborn, Michigan, 1987.

3. Spotts, M.F., "Allocation of Tolerances to Minimize Cost of Assembly," Journal of Engineering for Industry, Transactions of the ASME, Vol. 95, August 1973, pp. 762-764.

4. Chase, K.W. and W.H. Greenwood, "Computer-Aided Tolerance Selection: CATS User Guide," ADCATS Report No. 86-2, Brigham Young University, May 1986.

5. Speckhart, F.H., "Calculation of Tolerance Based on a Minimum Cost Approach," Journal of Engineering for Industry, Transactions of ASME, Vol. 94, May 1972, pp. 447-453.

6. Jamieson, A., Introduction to Quality Control, Reston Publishing, 1982.

7. Greenwood, W.H., and K.W. Chase, "A New Tolerance Analysis Method for Designers and Manufacturers," Journal of Engineering for Industry, Transactions of ASME, Vol. 109, May 1987, pp. 112-116.

8. Shapiro, S.S. and Gross, A.J., Statistical Modeling Techniques, Marcel Dekker, Inc., 1981.

9. Evans, D.H., "Statistical Tolerancing: The State of the Art, Part 1," Journal of Quality Technology, Vol. 7, No. 1, pp. 1-12, January 1975.

10. Evans, D.H., "Statistical Tolerancing: The State of the Art, Part 2," Journal of Quality Technology, Vol. 7, No. 1, pp. 1-12, January 1975.

11. Hahn, G.J. and Shapiro, S.S., Statistical Models in Engineering, John Wiley & Sons, Inc., 1967.

12. Greenwood, W. H., "A New Tolerance Analysis Method for Engineering Design and Manufacturing", Ph.D. dissertation, Mechanical Engineering Department, Brigham Young University, March, 1987.

 

