
Williams, S. (1998). A Meta-Analysis of the Relationship between Organizational Punishment and Employee Performance/Satisfaction, Research and Practice in Human Resource Management, 6(1), 51-64.

A Meta-Analysis of the Relationship between Organizational Punishment and Employee Performance/Satisfaction

Steve Williams


A meta-analysis found that the mean correlation between the application of organizational punishment and subsequent employee performance was slightly positive (.032), although the actual population relationship could be anywhere between -.54 and .60 (95% CI). The mean correlation between organizational punishment and subsequent employee satisfaction was positive (.140), although the actual relationship in the population could fall between -.53 and .81 (95% CI). The results indicate that the disparate findings reported within the management literature are most likely due to small-sample variance. Differences in residual variance indicated the existence of moderators. Of the moderators tested, only the use of questionnaires over experimental manipulation resulted in significantly higher reported relationships for punishment-satisfaction.


Managers are often faced with the difficult task of changing undesirable employee behavior. Problems like absenteeism, lateness, and drug or alcohol usage cost corporations millions of dollars annually (Morin & Yorks, 1990), and a survey of 100 companies (Miner & Brewer, 1976) indicated that 83% used discipline or the threat of discipline in response to these as well as other inappropriate employee behaviors. While some management theorists denounce the use of organizational punishment due to possible unwanted side effects such as employee aggression or withdrawal, discipline by managers is commonly used in some form by most organizations (Katz & Kahn, 1978). Butterfield, Trevino, and Ball (1996) have indicated that the types and styles of discipline used by managers vary widely, most likely due to the uncertainty associated with whether punishment will produce desirable results. Managers who routinely administer punishment, as well as organizational theorists, are concerned about the effectiveness of organizational discipline.

While some inappropriate employee behaviors within organizations may go undisciplined, managers believe that punishment is necessary when undesirable actions (which could include a wide range of specific and contingent employee behaviors) have an adverse effect on job performance (Podsakoff, 1982). That is, any unwanted subordinate behavior which has a direct impact on task performance is a likely target for organizational discipline, with subsequent employee performance the primary measure used to gauge the effectiveness of a supervisor’s disciplinary attempt (Ball, Trevino, & Sims, 1994). However, many if not most managers dislike administering required punishment due to the strong emotional reaction the punished individual is expected to display as well as due to the potential impact punishment may have on employee attitudes such as job satisfaction (Arvey & Ivancevich, 1980). Given their importance to both organizations and theorists, it is not surprising that one or both of these outcomes (i.e., performance and satisfaction) have been the focus of the majority of organizational discipline research (Arvey & Jones, 1985).

Ball, Trevino, and Sims (1994) have noted that organizational punishment research has led to indecisive results and often contradictory conclusions (cf. Arvey & Ivancevich, 1980; Arvey & Jones, 1985; Sims, 1980). For example, Church (1963: 369) concluded that “considerable uncertainty remains today regarding the effect of punishment and there does not appear to be any single reliable effect.” Sims (1980: 136) argued that “punitive behavior is not likely to be effective as an overall pattern of managerial behavior for influencing employees.” On the other hand, Johnston (1972: 1051) concluded that no other procedure “provides an effect which is as immediate, enduring, or generally effective as that produced by the proper use of punishment procedures.” Arvey & Ivancevich (1980: 131) suggested that “punishment may be a very effective procedure in accomplishing behavior change.”

Many managers and a number of organizational researchers express concern about administering discipline due to doubt about its effectiveness in actually improving performance and due to uncertainty about whether employee satisfaction will be detrimentally influenced. Empirical findings involving either or both of these dependent variables appear to be conflicting and inconclusive. As outlined above, qualitative organizational punishment reviews reach contradictory conclusions about the effectiveness of discipline — some authors recommend the continued application of discipline to bring about desired organizational results while others argue that punishment is ineffective and should rarely or never be used. This paper is an attempt to uncover what is currently known about the relationship between the use of organizational punishment (i.e., discipline) and its impact upon employee performance and satisfaction, the two most widely used outcome variables measured by punishment researchers. The present study used meta-analysis, which combines results from existing separate smaller studies (Hunter, Schmidt, & Jackson, 1982), to reveal a more accurate picture of the relationship between organizational punishment and employee performance and satisfaction. Hunter and Schmidt (1990) have suggested that a wide range of effect sizes (that is, variation in the strength of the studied relationship) reported in published studies may be due to sampling error, and meta-analytic procedures are useful in uncovering a more accurate assessment of the strength of true relationships. The specific questions this study hopes to address are 1) whether a correlation exists between punishment and performance and satisfaction, 2) whether correlations are positive or negative, 3) if differences among findings can be attributed to statistical artifacts, and 4) if potential moderators explain any resulting variances.

Although the debate continues as to how variables like organizational performance and employee satisfaction should be measured, a review of the literature found that performance was generally operationalized as the perception of the subject’s immediate supervisor, with a few studies using some type of quantifiable output (see Table 1). Satisfaction, when measured, was usually operationalized by fairly reliable instruments like the Job Descriptive Index (JDI). The most widely used definition of the independent variable, punishment, is offered by Kazdin (1975: 33-34): “Punishment is the presentation of an aversive event or the removal of a positive event following a response which decreased the probability of that response.” Generally, studies operationalized punishment as organizational discipline; that is, punishment was a formal attempt to correct and control the behavior of organizational members.



Hunter, Schmidt, and Jackson’s (1982) and Hunter & Schmidt’s (1990) meta-analytic procedures were used to correct for sampling error and reliability attenuation caused by measurement error. Hunter et al. (1982) also suggest correcting for range restriction, but this step was not performed due to a lack of adequate distributions across studies. The corrected average correlation and variance were then used to determine if variation among studies was due to statistical artifacts or moderator variables.
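The core of these bare-bones corrections can be expressed in a few lines. The following Python sketch (the function name and the simplified sampling-error formula are my own, following the general Hunter and Schmidt approach rather than reproducing this study's exact computations) weights each correlation by sample size, subtracts the variance expected from sampling error, and disattenuates the mean for unreliability in both measures:

```python
import math

def hunter_schmidt(rs, ns, rxx, ryy):
    """Bare-bones Hunter & Schmidt meta-analysis sketch.

    rs, ns -- per-study correlations and sample sizes
    rxx, ryy -- mean reliabilities of the independent and dependent variables
    Returns (corrected mean rho, corrected SD of rho, proportion of
    observed variance left unexplained by sampling error).
    """
    total_n = sum(ns)
    k = len(rs)
    # Sample-size-weighted mean correlation
    r_bar = sum(r * n for r, n in zip(rs, ns)) / total_n
    # Weighted observed variance across studies
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / total_n
    # Variance expected purely from sampling error
    var_err = ((1 - r_bar ** 2) ** 2) * k / total_n
    var_res = max(var_obs - var_err, 0.0)  # residual ("true") variance
    # Correct mean and SD for attenuation due to measurement unreliability
    atten = math.sqrt(rxx * ryy)
    rho = r_bar / atten
    sd_rho = math.sqrt(var_res) / atten
    pct_unexplained = var_res / var_obs if var_obs else 0.0
    return rho, sd_rho, pct_unexplained
```

When the residual variance is zero, all between-study variation is attributable to sampling error and the corrected standard deviation collapses to zero.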


Studies used in this meta-analysis were primarily derived from the bibliographies of comprehensive reviews, journal articles, and published proceedings. Appropriate studies were also acquired through Psychological Abstracts and by a visual inspection of relevant journals. Unfortunately, several salient studies could not be included in the meta-analysis due to a lack of the necessary statistical data (e.g., Banks, 1976; Cherrington, Reitz, & Scott, 1971; Gary, 1971; Jones, Tait, & Butler, 1983; Kipnis, Silverman, & Copeland, 1973; Maier & Danielson, 1956; Trenholme & Baron, 1975; Weinstein, 1969). Those studies with F or t statistics (e.g., Frakes, 1971, Greene & Podsakoff, 1978, Schnake, 1986) were converted to point-biserial correlations using Rosenthal’s (1987: 141) transformation.
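The transformation itself is not reproduced in the text; the standard conversion of a t statistic to a point-biserial correlation is r = sqrt(t² / (t² + df)), and a one-way F with a single numerator degree of freedom is simply t². A minimal sketch (function names are my own):

```python
import math

def t_to_r(t, df):
    # Point-biserial r from a t statistic: r = sqrt(t^2 / (t^2 + df)),
    # carrying the sign of t
    r = math.sqrt(t * t / (t * t + df))
    return r if t >= 0 else -r

def f_to_r(f, df_error):
    # An F statistic with 1 numerator df equals t^2, so reuse the t conversion
    return t_to_r(math.sqrt(f), df_error)
```

For example, a t of 2.0 on 16 degrees of freedom converts to r = sqrt(4/20) ≈ .45.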

Table 1
Meta-Analysis Coding

Study | N* | Method | Performance Measure(s)** | Satisfaction Measure(s)** | Reported Correlations | Ind. Variable Reliability
Katz, et al. (1951) | 72 | field survey | quantity | interviews | perf neg. / sat .00 | N.R.
Argyle, et al. (1958) | 90 | field experiment | quantity | turnover/absenteeism | perf neg. / sat .00 | N.R.
Day & Hamblin (1964) | 48 | experiment | quantity | 4-item questionnaire | perf -.36 / sat -.26 | N.R.
Frakes (1971) | 17 | experiment | none | 7-item questionnaire (feelings) | sat -.65 | N.R.
Day (1971) | 86 | experiment | quantity | measured aggression | perf .00 | .96
Reitz (1971) | (44) | field survey | self-perception | 4- and 6-item questionnaires measuring general and job satisfaction | perf .18 / sat .40; perf .04 / sat .07; perf -.24 / sat .07; perf -.10 / sat .04; perf -.02 / sat .10; perf .10 / sat .20; perf .19 / sat .00 |
Sims & Szilagyi (1975) | (53) | field survey | supervisor perception (.94) | JDI | perf -.34 / sat .24; perf -.09 / sat .05 |
Keller & Szilagyi (1976) | 132 | field survey | none | JDI | sat -.28 | .88
Oldham (1976) | (45) | field survey | supervisor perception | none | perf -.10; perf -.17 |
Brass & Oldham (1978) | 71 | experiment | sup. perception | none | perf .34 | .30
Franke & Karl (1978) | 5 | field experiment | quantity | none | perf .75 | N.R.
Greene & Podsakoff (1978) | (456) | field experiment | supervisor perception (.82) | JDI (.79) | perf -.30 / sat .01; perf -.11 / sat -.22 |
O’Reilly & Weitz (1980) | 113 | field survey | quantity & sup. perception | none | perf .35 | N.R.
Szilagyi (1980) | 128 | field survey | quantity | JDI | perf -.13 / sat -.37 | .92
Strasser, et al. (1981) | 323 | field survey | none | JDI (.68) | sat .16 | .64
Podsakoff, et al. (1982) | 72 | field survey | sup. perc. (.93) | JDI (.75) | perf -.16 / sat .05 | .84
Podsakoff, et al. (1984) | (1735) | field survey | supervisor perception (.93) | JDI | perf -.10 / sat .06 | .84
Beyer & Trice (1984) | 474 | field survey | sup. perc. (.91) | none | perf .62 | N.R.
Arvey, et al. (1984) | 526 | field survey | none | JDS (.79) | sat .78 | N.R.
Podsakoff & Todor (1985) | 827 | field survey | self-perc. (.87) | none | perf .23 | .84
Schnake (1986) | 48 | experiment | quantity | 9-item questionnaire | perf .48 / sat .00 | N.R.

* Numbers in parentheses indicate separate study populations.
** Numbers in parentheses are reported reliabilities for the dependent variables.

Sample Coding

The literature search uncovered 21 studies, published between 1951 and 1986, which reported usable data on either or both of the dependent variables. Each study was coded for 1) sample size, 2) method used (i.e., experiment or survey), 3) performance measures (i.e., physical quantity or self/supervisor perception), 4) satisfaction measures (i.e., type of instrument used), 5) reported correlations between variables, and 6) reliabilities of both the independent variable (organizational punishment) and the dependent variables (employee performance and satisfaction). In addition, studies with multiple independent samples were coded as separate studies (Hunter et al., 1982; Hunter & Schmidt, 1990), and conceptual replications reporting more than one testing method were cumulated using an average weighted measure (Rosenthal, 1984). The resulting 33 independent samples and their correlation coefficients are summarized in Table 1. The reliabilities were .867 for satisfaction and .972 for performance, while the reliability for punishment was .86. (Since performance quantity was a quantifiable measure, its reliability was assumed to be 1.0 for this study.)


Effect Sizes

For the 26 correlations between punishment and employee performance, the mean correlation was .032 with a 95% confidence interval of -.54 to .60, indicating that the population correlation is not significantly different from zero. For the 24 correlations between punishment and organizational member satisfaction, the mean correlation was slightly positive (.140) with a 95% confidence interval of -.53 to .81, again indicating that the population correlation is not significantly different from zero. As Table 2 shows, the results of the meta-analysis indicate that the true population correlations could be zero. The large population standard deviations of .29 and .34, respectively, “give a correct picture of the extent of uncertainty that surrounds results computed from small-sample studies” (Hunter et al., 1982: 24). In other words, this analysis indicates that all but two of the reported studies concerned with punishment and performance (Franke & Karl, 1978; Beyer & Trice, 1984) are not contradictory. (Both outlying studies found that punishment has a significantly positive effect upon performance: .75 and .62, respectively.) With punishment and satisfaction, only one study (Frakes, 1971) falls outside the established confidence interval. (That study reported a negative effect of punishment upon satisfaction with a correlation of -.65; one other study (Arvey, Davis, & Nelson, 1984) approached the upper bound with a positive .78.)

Table 2
Meta-Analysis Results for Correlations

Subgroup | Total N | K* | a** | b** | Mean r*** | SD | 95% CI ± | % Unexplained Variance
Performance | 5324 | 26 | .865 (9) | .972 (11) | .032 | .29 | .57 | 93%
Satisfaction | 4900 | 24 | .865 (9) | .867 (4) | .140 | .34 | .67 | 93%

* Number of correlations included in analysis.
** Reliability of the independent and dependent variable respectively. Figures in parentheses indicate the number of studies involved in calculations.
*** Corrected for sampling error and reliability attenuation.
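The reported confidence intervals follow directly from the corrected means and standard deviations (mean ± 1.96·SD, assuming normally distributed effect sizes); a quick check in Python reproduces the bounds given in the text:

```python
def ci95(mean, sd):
    # 95% interval assuming a normal distribution of effect sizes
    half = 1.96 * sd
    return round(mean - half, 2), round(mean + half, 2)

performance_ci = ci95(0.032, 0.29)   # punishment-performance: (-0.54, 0.6)
satisfaction_ci = ci95(0.140, 0.34)  # punishment-satisfaction: (-0.53, 0.81)
```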

Moderator Variables

According to Hunter et al. (1982), a large amount of unexplained variance among studies suggests the presence of potential moderator variables, and Hunter and Schmidt (1990) argue that when “residual” variance in effect sizes across studies is great, strong evidence exists for the existence of moderator variables. That is, if a large proportion of variance remains unexplained after correcting for statistical artifacts, then differences in correlations across studies may be due to one or more moderator variables. Consequently, since the ratio of unexplained variance was greater than 25% for both dependent variables (93% for both performance and satisfaction), a post hoc attempt was made to uncover potential moderators.
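The decision rule applied here (often called the Hunter and Schmidt 75% rule) can be stated compactly; the sketch below (function name my own) flags a moderator search whenever statistical artifacts account for less than 75% of the observed variance:

```python
def moderators_suspected(var_observed, var_artifact):
    """Return (flag, unexplained ratio). The flag is True when artifacts
    (e.g., sampling error) explain less than 75% of observed variance,
    i.e., more than 25% remains unexplained."""
    unexplained = 1.0 - var_artifact / var_observed
    return unexplained > 0.25, unexplained
```

With 93% of variance unexplained for both dependent variables, this rule clearly triggers a moderator search in the present data.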

While a wide number of moderators could potentially influence the relationship between punishment and the outcome variables, the limited data reported in published studies restricted potential moderators to the method used to acquire data (quantity versus perceptual methods and survey versus experimental methods) and the source of outcome variable information (supervisor versus self). Table 3 breaks down the total sample for performance to see if different measurement methods can account for observed variance. When employee performance is measured by objective output (i.e., quantity), the correlation mean is just slightly positive at .088, and unexplained variance decreases to 82%. If performance is determined by questionnaires which measure perceptions (either supervisor or self), the correlation mean is .028, which has no effect upon the amount of variance explained.

Table 3
Subgroup Meta-Analysis of Moderator Effects: Performance

Subgroup | Total N | K* | Mean r** | SD | 95% CI ± | % Unexplained Variance
Total | 5324 | 26 | .032 | .29 | .57 | 93%
Quantity | 428 | 6 | .088 | .29 | .57 | 82%
Perception | 4896 | 20 | .028 | .30 | .59 | 94%
Survey | 4573 | 19 | .064 | .29 | .57 | 93%
Experiment | 751 | 7 | -.016 | .29 | .57 | 88%
Supervisor | 3557 | 12 | -.030 | .29 | .57 | 94%
Self | 1339 | 8 | .175 | .27 | .53 | 90%

* Number of correlations included in analysis.
** Corrected for sampling error and reliability attenuation.

When performance is grouped by either survey or experimental manipulation, the mean correlations are .064 and -.016, respectively. Unexplained variance remains the same for survey methods, and it drops slightly to 88% for experimental methods.

Grouping performance by supervisor or self-perception questionnaires increases the mean correlation to .175 for self-perceptions, which is the largest mean correlation uncovered between punishment and performance. However, unexplained variance remains high at 94% and 90%, respectively.

Table 4
Subgroup Meta-Analysis of Moderator Effects: Satisfaction

Subgroup | Total N | K* | Mean r** | SD | 95% CI ± | % Unexplained Variance
Total | 4900 | 24 | .140 | .34 | .67 | 93%
Field | 4204 | 18 | .149 | .30 | .59 | 94%
Experiment | 696 | 6 | .016 | .10 | .20 | 47%
JDI | 3587 | 11 | .029 | .18 | .35 | 85%
Non-JDI | 1313 | 13 | .386 | .43 | .84 | 95%

* Number of correlations included in analysis.
** Corrected for sampling error and reliability attenuation.

Table 4 breaks down the total sample for satisfaction to see if different measurement methods can account for observed variance. When satisfaction was measured during experiments, the mean correlation was near zero (.016), but the amount of unexplained variance dropped to 47%. When satisfaction was measured during field surveys, the mean correlation between punishment and satisfaction was .149, but unexplained variance remained high at 94%.

Eleven of the reported samples used the JDI to measure employee satisfaction, while 13 samples used some other instrument. It is interesting to note that the JDI had a mean correlation of just .029 (although unexplained variance dropped to 85%). However, non-JDI measurements resulted in a positive mean correlation of nearly .40, the largest correlation uncovered between punishment and satisfaction, although unexplained variance remained unchanged.

Although initial analyses indicate that some inconsistencies across reported studies can be explained by moderator variables, the remaining residual variance after subgrouping by moderators points toward the existence of additional moderators.


In summary, this meta-analysis using 26 correlations for punishment-performance and 24 correlations for punishment-satisfaction derived from 33 samples failed to substantiate any significant relationship between the independent and dependent variables. The presence of 93% unexplained variance for both dependent variables suggested that moderators do exist; however, the only large drop in unexplained variance occurred when the punishment-satisfaction relationship was tested after grouping by experimental manipulation (i.e., 47%).

This meta-analysis found that the mean correlation between the application of organizational punishment and subsequent employee performance was slightly positive (.032), but the actual population correlation could be anywhere between -.54 and .60 (95% CI). The mean correlation between organizational punishment and subsequent employee satisfaction was positive (.140), but the actual relationship in the population could fall between -.53 and .81 (95% CI). The wide range of relationships reported within the management literature is most likely due to variance associated with researchers reporting results from many small samples (Hunter & Schmidt, 1990). Meta-analytic results indicate that the ‘true’ effect of punishment on employee performance could be anywhere between -.54 and .60 (that is, punishment may have either a negative, neutral, or positive effect on performance), while the ‘true’ effect of discipline on employee satisfaction could fall anywhere between -.53 and .81 (again, punishment may have a negative, neutral, or positive effect on satisfaction). This wide variation in effect sizes is most likely due to sampling error across studies (Hunter & Schmidt, 1990).

There appear to be no conflicting results in the literature, with all but three of the 33 reported sample relationships falling within the 95% confidence interval for both dependent variables (and those three studies may have produced outlying results due to techniques and/or instrumentation specific to that research). An actual population correlation of zero for either performance or satisfaction cannot be ruled out. On the other hand, the true amount of explained variance between punishment and performance could be as high as .36, and the amount of explained variance between punishment and satisfaction could be as high as .66. While managers may intuitively believe that discipline is effective, according to this meta-analysis we simply do not know whether organizational punishment significantly affects employee performance or satisfaction. Either side of the effectiveness of organizational punishment debate could be correct: discipline may produce desirable results, at least as far as employee performance and satisfaction are concerned, or punishment may have a severely detrimental impact on performance and attitudes.

Even after correcting for statistical artifacts, substantial amounts of unexplained variance remained. Of the moderators tested, only the relationship between satisfaction and the use of experiments showed any meaningful effects. That is, the mean correlation for those samples using experimental manipulation (.016) was lower than for samples using field surveys (.149), and variance dropped substantially. One possible explanation is that the greater control offered by experimentation leads to less variance than when survey instruments are applied in field settings. The large amounts of unexplained variance suggest directions for further efforts.

It is interesting to note that the .386 mean correlation for non-JDI measurements of satisfaction would have been higher if reliabilities for the instruments used had been reported. For example, if the reliability had been equivalent to that of the JDI (.867), the mean correlation would have been at least .46, and possibly much higher, while the standard deviation (.43) would have been unaffected.
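The adjustment described here is the classic correction for attenuation, r_true = r_obs / sqrt(rxx · ryy). A minimal sketch (the function name and the illustrative numbers in the usage line are my own, not the study's figures):

```python
import math

def disattenuate(r_obs, rxx=1.0, ryy=1.0):
    # Correct an observed correlation for unreliability in either measure:
    # r_true = r_obs / sqrt(rxx * ryy)
    return r_obs / math.sqrt(rxx * ryy)

# Illustrative only: an observed r of .30 with both reliabilities at .81
# disattenuates to .30 / .81
example = disattenuate(0.30, rxx=0.81, ryy=0.81)
```

Because the divisor is at most 1.0, the corrected correlation can only grow as reported reliabilities fall, which is why unreported reliabilities for the non-JDI instruments imply an underestimated mean correlation.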

The results of this meta-analysis reveal that we simply do not know whether organizational discipline is related to employee performance and satisfaction. Additional research in the area of organizational punishment needs to determine the “true” population correlation between punishment and employee outcomes. While this meta-analysis suggests that current findings can all be explained by sampling error due to small-sample variance, future studies should attempt to determine whether punishment truly has a positive or negative effect upon employee performance and satisfaction by conducting research involving a greater number of participants. The standard deviation for the population needs to be tightened, which calls for studies using very large sample sizes. This meta-analysis also demonstrates how the quality, breadth, and depth of contributing studies limit the types of analyses which can be performed and the inferences which can be drawn; the reporting practices of extant discipline research hindered potential analysis due to restricted and insufficient information. In addition, the potential moderators suggested by the large amounts of unexplained variance need to be uncovered. It appears that potentially powerful moderators are influencing the relationship between punishment and the dependent variables, and additional research is needed to uncover these. It is apparent that a programmatic research agenda is required if organizational theorists and practicing managers are ever to uncover the actual relationship between discipline and employee outcomes.

Studies Included in the Meta-Analysis

Argyle, M., Gardner, G., & Cioffi, F. (1958) Supervisory methods related to productivity, absenteeism, and labour turnover. Human Relations, 11, 23-40.

Arvey, R.D., Davis, G.A., & Nelson, S.M. (1984) Use of discipline in an organization: A field study. Journal of Applied Psychology, 69, 448-460.

Beyer, J.M. & Trice, H.M. (1984) A field study of the use and perceived effects of discipline in controlling work performance. Academy of Management Journal, 27, 743-764.

Brass, D.J. & Oldham, G.R. (1976) Validating an in-basket test using an alternative set of leadership scoring dimensions. Journal of Applied Psychology, 61, 652-657.

Day, R.C. (1971) Some effects of combining close, punitive, and supportive styles of supervision. Sociometry, 34, 303-327.

Day, R.C. & Hamblin, R.L. (1964) Some effects of close and punitive styles of supervision. American Journal of Sociology, 69, 499-510.

Frakes, V.F. (1971) Acquisition of disliking for persons associated with punishment. Perceptual and Motor Skills, 33, 251-255.

Franke, R.H. & Karl, J.D. (1978) The Hawthorne experiments: First statistical interpretation. American Sociological Review, 43, 623-643.

Greene, C.N. & Podsakoff, P.M. (1981) Effects of withdrawal of a performance- contingent reward on supervisory influence and power. Academy of Management Journal, 24, 527-542.

Katz, D., Maccoby, N., Gurin, G., & Floor, L. (1951) Productivity, supervision and morale among railroad workers. University of Michigan.

Keller, R.T. & Szilagyi A.D. (1978) A longitudinal study of leader reward behavior, subordinate expectancies, and satisfaction. Personnel Psychology, 31, 119-129.

Oldham, G.R. (1976) The motivational strategies used by supervisors: Relationships to effectiveness indicators. Organizational Behavior and Human Performance, 15, 66-87.

O’Reilly, C.A., III & Weitz, B.A. (1980) Managing marginal employees: The use of warnings and dismissals. Administrative Science Quarterly, 25, 467-484.

Podsakoff, P.M. & Todor, W.D. (1985) Relationships between leader reward and punishment behavior and group processes and productivity. Journal of Management, 11, 55-73.

Podsakoff, P.M., Todor, W.D., Grover, R.A., & Huber, V.L. (1984) Situational moderators of leader reward and punishment behaviors: Fact or fiction? Organizational Behavior and Human Performance, 34, 21-63.

Podsakoff, P.M., Todor, W.D., & Skov, R. (1982) Effects of leader contingent and noncontingent reward and punishment behaviors on subordinate performance and satisfaction. Academy of Management Journal, 25, 810-821.

Reitz, H.J. (1971) Managerial attitudes and perceived contingencies between performance and organizational response. Proceedings of the 31st Annual Meeting of the Academy of Management, 227-238.

Strasser, S., Dailey, R.C., & Bateman, T.S. (1981) Attitudinal moderators and effects of leaders’ punitive behavior. Psychological Reports, 49, 695-698.

Schnake, M.E. (1986) Vicarious punishment in a work setting. Journal of Applied Psychology, 71, 343-345.

Sims, H.P. & Szilagyi, A.D. (1975) Leader reward behavior and subordinate satisfaction and performance. Organizational Behavior and Human Performance, 14, 426-438.

Szilagyi, A.D. (1980) Causal inferences between leader reward behaviour and subordinate performance, absenteeism, and work satisfaction. Journal of Occupational Psychology, 53, 195-204.


References

Arvey, R.D. & Ivancevich, J.M. (1980) Punishment in organizations: A review, propositions, and research suggestions. Academy of Management Review, 5, 123-132.

Arvey, R.D. & Jones, A.P. (1985) The use of discipline in organizational settings: A framework for future research. Research in Organizational Behavior, 7, 367-408.

Ball, G.A., Trevino, L.K., and Sims, Jr, H.P. (1994) Just and unjust punishment: Influences on subordinate performance and citizenship. Academy of Management Journal, 37, 299-322.

Banks, W.C. (1976) The effects of perceived similarity upon the use of reward and punishment. Journal of Experimental Social Psychology, 12, 131-138.

Butterfield, K.D., Trevino, L.K., and Ball, G.A. (1996) Punishment from the manager’s perspective: A grounded investigation and inductive model. Academy of Management Journal, 39, 1479-1512.

Cherrington, D.J., Reitz, H.J., & Scott, W.E. (1971) Effects of contingent and noncontingent reward on the relationship between satisfaction and task performance. Journal of Applied Psychology, 55, 531-536.

Church, R.M. (1963) The varied effects of punishment on behavior. Psychological Review, 70, 369-399.

Cohen, J. (1977) Statistical power analysis for the behavioral sciences. New York: Academic Press.

Gary, A.L. (1971) Industrial absenteeism: An evaluation of three methods of treatment. Personnel Journal, May, 352-353.

Greene, C.N. & Podsakoff, P.M. (1981) Effects of withdrawal of a performance-contingent reward on supervisory influence and power. Academy of Management Journal, 24,527-542.

Hunter, J.E. and Schmidt, F.L. (1990) Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park, CA: Sage.

Hunter, J.E., Schmidt, F.L., & Jackson, G.B. (1982) Meta-analysis: Cumulating research findings across studies. Beverly Hills, CA: Sage Publications

Johnston, J.M. (1972) Punishment of human behavior. American Psychologist, November, 1033-1051.

Jones, A.P., Tait, M., & Butler, M.C. (1983) Perceived punishment and reward values of supervisor actions. Motivation and Emotion, 7, 313-329.

Kazdin, A.E. (1975) Behavior modification in applied settings. Homewood, IL: Dorsey.

Katz, D. and Kahn, R.L. (1978) The social psychology of organizations (2nd ed.). New York: Wiley.

Kipnis, D., Silverman, A., & Copeland, C. (1973) Effects of emotional arousal on the use of supervised coercion with black and union employees. Journal of Applied Psychology, 57, 38-43.

Maier, N.R.F. & Danielson, L.E. (1956) An evaluation of two approaches to discipline in industry. Journal of Applied Psychology, 40, 319-323.

Miner, J.B. and Brewer, J.F. (1976) The management of ineffective performance. In M.D. Dunnette (Ed.), The handbook of industrial/organizational psychology (pp. 995-1029) Chicago: Rand McNally.

Morin, W.J. and Yorks, L. (1990) Dismissal. New York: Drake Beam Morin.

Podsakoff, P.M. (1982) Determinants of a supervisor’s use of rewards and punishments: A literature review and suggestions for further research. Organizational Behavior and Human Performance, 29, 58-83.

Rosenthal, R. (1984) Meta-analytic procedures for social research. Beverly Hills, CA: Sage Publications.

Rosenthal, R. (1987) Judgment studies: Design, analysis, and meta-analysis. New York: Cambridge University Press.

Sims, H.P. (1980) Further thoughts on punishment in organizations. Academy of Management Review, 5, 133-138.

Trenholme, I.A. & Baron, A. (1975) Immediate and delayed punishment of human behavior by loss of reinforcement. Learning and Motivation, 6, 62-79.

Weinstein, L. (1969) Decreased sensitivity to punishment. Psychonomic Science, 14, 264-266.