Common Method Biases in Behavioral Research: A Critical Review of the Literature and Recommended Remedies

Philip M. Podsakoff, Scott B. MacKenzie, and Jeong-Yeon Lee
Indiana University

Nathan P. Podsakoff
University of Florida

Interest in the problem of method biases has a long history in the behavioral sciences. Despite this, a comprehensive summary of the potential sources of method biases and how to control for them does not exist. Therefore, the purpose of this article is to examine the extent to which method biases influence behavioral research results, identify potential sources of method biases, discuss the cognitive processes through which method biases influence responses to measures, evaluate the many different procedural and statistical techniques that can be used to control method biases, and provide recommendations for how to select appropriate procedural and statistical remedies for different types of research settings.

Most researchers agree that common method variance (i.e., variance that is attributable to the measurement method rather than to the constructs the measures represent) is a potential problem in behavioral research. In fact, discussions of the potential impact of common method biases date back well over 40 years (cf. Campbell & Fiske, 1959), and interest in this issue appears to have continued relatively unabated to the present day (cf. Bagozzi & Yi, 1990; Bagozzi, Yi, & Phillips, 1991; Campbell & O'Connell, 1982; Conway, 1998; Cote & Buckley, 1987, 1988; Kline, Sulsky, & Rever-Moriyama, 2000; Lindell & Brandt, 2000; Lindell & Whitney, 2001; Millsap, 1990; Parker, 1999; Schmitt, Nason, Whitney, & Pulakos, 1995; Scullen, 1999; Williams & Anderson, 1994; Williams & Brown, 1994).
Method biases are a problem because they are one of the main sources of measurement error. Measurement error threatens the validity of the conclusions about the relationships between measures and is widely recognized to have both a random and a systematic component (cf. Bagozzi & Yi, 1991; Nunnally, 1978; Spector, 1987). Although both types of measurement error are problematic, systematic measurement error is a particularly serious problem because it provides an alternative explanation for the observed relationships between measures of different constructs that is independent of the one hypothesized. Bagozzi and Yi (1991) noted that one of the main sources of systematic measurement error is method variance that may arise from a variety of sources:

Method variance refers to variance that is attributable to the measurement method rather than to the construct of interest. The term method refers to the form of measurement at different levels of abstraction, such as the content of specific items, scale type, response format, and the general context (Fiske, 1982, pp. 81-84). At a more abstract level, method effects might be interpreted in terms of response biases such as halo effects, social desirability, acquiescence, leniency effects, or yea- and nay-saying. (p. 426)

However, regardless of its source, systematic error variance can have a serious confounding influence on empirical results, yielding potentially misleading conclusions (Campbell & Fiske, 1959). For example, let's assume that a researcher is interested in studying a hypothesized relationship between Constructs A and B. Based on theoretical considerations, one would expect that the measures of Construct A would be correlated with measures of Construct B. However, if the measures of Construct A and the measures of Construct B also share common methods, those methods may exert a systematic effect on the observed correlation between the measures. Thus, at least partially, common method biases pose a rival explanation for the correlation observed between the measures.

Within the above context, the purpose of this research is to (a) examine the extent to which method biases influence behavioral research results, (b) identify potential sources of method biases, (c) discuss the cognitive processes through which method biases influence responses to measures, (d) evaluate the many different procedural and statistical techniques that can be used to control method biases, and (e) provide recommendations for how to select appropriate procedural and statistical remedies for different types of research settings. This is important because, to our knowledge, there is no comprehensive discussion of all of these issues available in the literature, and the evidence suggests that many researchers are not effectively controlling for this source of bias.

Extent of the Bias Caused by Common Method Variance

Over the past few decades, a considerable amount of evidence has accumulated regarding the extent to which method variance influences (a) measures used in the field and (b) relationships between these measures.
Much of the evidence of the extent to which method variance is present in measures used in behavioral research comes from meta-analyses of multitrait-multimethod (MTMM) studies (cf. Bagozzi & Yi, 1990; Cote & Buckley, 1987, 1988; Williams, Cote, & Buckley, 1989). Perhaps the most comprehensive evidence comes from Cote and Buckley (1987), who examined the amount of common method variance present in measures across 70 MTMM studies in the psychology, sociology, marketing, business, and education literatures. They found that approximately one quarter (26.3%) of the variance in a typical research measure might be due to systematic sources of measurement error like common method biases. However, they also found that the amount of variance attributable to method biases varied considerably by discipline and by the type of construct being investigated. For example, Cote and Buckley (1987) found that, on average, method variance was lowest in the field of marketing (15.8%) and highest in the field of education (30.5%). They also found that typical job performance measures contained an average of 22.5% method variance, whereas attitude measures contain an average of 40.7%. A similar pattern of findings emerges from Williams et al.'s (1989) study of just the applied psychology literature.

In addition to these estimates of the extent to which method variance is present in typical measures, there is also a growing body of research examining the extent to which method variance influences relationships between measures (cf. Fuller, Patterson, Hester, & Stringer, 1996; Gerstner & Day, 1997; Lowe, Kroeck, & Sivasubramaniam, 1996; Podsakoff, MacKenzie, Paine, & Bachrach, 2000; Wagner & Gooding, 1987). These studies contrasted the strength of the relationship between two variables when common method variance was controlled versus when it was not. They found that, on average, the amount of variance accounted for when common method variance was present was approximately 35% versus approximately 11% when it was not present. Thus, there is a considerable amount of evidence that common method variance can have a substantial effect on observed relationships between measures of different constructs. However, it is important to recognize that the findings suggest that the magnitude of the bias produced by these method factors varies across research contexts (cf. Cote & Buckley, 1987; Crampton & Wagner, 1994).

Not only can the strength of the bias vary but so can the direction of its effect. Method variance can either inflate or deflate observed relationships between constructs, thus leading to both Type I and Type II errors. This point is illustrated in Table 1, which uses Cote and Buckley's (1987) estimates of the average amount of trait variance, the average amount of method variance, and the average method intercorrelations and inserts them into the equation below to calculate the impact of common method variance on the observed correlation between measures of different types of constructs (e.g., attitude, personality, aptitude):
R_{x,y} = (true R_{ti,tj}) \sqrt{t_x} \sqrt{t_y} + (true R_{mk,ml}) \sqrt{m_x} \sqrt{m_y},   (1)

where true R_{ti,tj} is the average correlation between trait i and trait j, t_x is the percent of trait variance in measure x, t_y is the percent of trait variance in measure y, true R_{mk,ml} is the average correlation between method k and method l, m_x is the percent of method variance in measure x, and m_y is the percent of method variance in measure y.

For example, the correlation .52 in the second row of the first column of Table 1 was calculated by multiplying the true correlation (1.00) times the square root of Cote and Buckley's (1987) estimate of the percent of trait variance typically found in attitude measures (.298) times the square root of their estimate of the percent of trait variance typically found in personality measures (.391), plus the average of their estimates of the typical correlation between methods for attitude (.556) and personality (.546) constructs multiplied by the square root of their estimate of the percent of method variance typically found in attitude measures (.407) times the square root of their estimate of the percent of method variance typically found in personality measures (.247).
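To make this arithmetic easy to verify, the following minimal Python sketch of Equation 1 (an illustration added here, not part of the original article; the function and argument names are ours, and the numeric inputs are Cote and Buckley's 1987 estimates quoted above) reproduces the .52 value just described.

```python
# Minimal sketch of Equation 1 (illustrative only).
from math import sqrt

def observed_correlation(true_r_traits, trait_var_x, trait_var_y,
                         true_r_methods, method_var_x, method_var_y):
    """R_xy = (true R_ti,tj) * sqrt(t_x) * sqrt(t_y) + (true R_mk,ml) * sqrt(m_x) * sqrt(m_y)."""
    trait_component = true_r_traits * sqrt(trait_var_x) * sqrt(trait_var_y)
    method_component = true_r_methods * sqrt(method_var_x) * sqrt(method_var_y)
    return trait_component + method_component

# Attitude measure: trait variance .298, method variance .407.
# Personality measure: trait variance .391, method variance .247.
# Average attitude/personality method correlation: (.556 + .546) / 2 = .551.
r_observed = observed_correlation(true_r_traits=1.00,
                                  trait_var_x=0.298, trait_var_y=0.391,
                                  true_r_methods=(0.556 + 0.546) / 2,
                                  method_var_x=0.407, method_var_y=0.247)
print(round(r_observed, 2))  # 0.52, the attitude-personality entry in Table 1
```

The remaining entries of Table 1 follow from the same function when it is given the parameter values listed in the note to the table.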
Table 1
Relationship Between True and Observed Correlation for Average Measures by Type of Construct

| Type of constructs | True R_{ti,tj} = 1.00 | .50 | .30 | .10 | .00 |
| --- | --- | --- | --- | --- | --- |
| Attitude-attitude | .52 (.27) | .38 (.14) | .32 (.10) | .26 (.07) | .23 (.05) |
| Attitude-personality | .52 (.27) | .35 (.12) | .28 (.08) | .21 (.04) | .17 (.03) |
| Attitude-aptitude | .52 (.27) | .35 (.12) | .28 (.08) | .21 (.04) | .18 (.03) |
| Attitude-job performance and satisfaction | .51 (.26) | .32 (.10) | .25 (.06) | .17 (.03) | .13 (.02) |
| Personality-personality | .53 (.28) | .33 (.11) | .25 (.06) | .17 (.03) | .13 (.02) |
| Personality-aptitude | .53 (.28) | .34 (.12) | .26 (.07) | .18 (.03) | .14 (.02) |
| Personality-job performance and satisfaction | .53 (.28) | .32 (.10) | .23 (.05) | .15 (.02) | .10 (.01) |
| Aptitude-aptitude | .54 (.29) | .34 (.12) | .26 (.07) | .18 (.03) | .14 (.02) |
| Aptitude-job performance and satisfaction | .54 (.29) | .32 (.10) | .24 (.06) | .15 (.02) | .11 (.01) |
| Job performance and satisfaction-job performance and satisfaction | .54 (.29) | .31 (.09) | .21 (.04) | .12 (.01) | .07 (.00) |

Note. Values within the table are the observed correlations R_{x,y} (and squared correlations R_{x,y}²) calculated using Cote and Buckley's (1988) formula shown in Equation 1 of the text. For the calculations it is assumed that (a) the trait variance is the same as that reported by Cote and Buckley (1987) for each type of construct (e.g., attitude measures = .298, personality measures = .391, aptitude measures = .395, and job performance and satisfaction measures = .465), (b) the method variance is the same as that reported by Cote and Buckley (1987) for each type of construct (e.g., attitude measures = .407, personality measures = .247, aptitude measures = .251, and job performance and satisfaction measures = .225), and (c) the correlation between the methods is the average of the method correlations reported by Cote and Buckley (1987) for each of the constructs (e.g., method correlations between attitude-attitude constructs = .556, personality-attitude constructs = .551, personality-personality constructs = .546, aptitude-attitude constructs = .564, aptitude-personality constructs = .559, aptitude-aptitude constructs = .572, job performance and satisfaction-attitude constructs = .442, job performance and satisfaction-personality constructs = .437, job performance and satisfaction-aptitude constructs = .450, and job performance and satisfaction-job performance and satisfaction constructs = .328). These calculations ignore potential Trait × Method interactions.
There are several important conclusions that can be drawn from Table 1. For example, the entry in the first column of the first row indicates that even though two attitude constructs are perfectly correlated, the observed correlation between their measures is only .52 because of measurement error. Similarly, the entry in the last column of the first row indicates that even though two attitude constructs are completely uncorrelated, the observed correlation between their measures is .23 because of random and systematic measurement error. Both of these numbers are troubling, but for different reasons. The entries in the entire first column are troubling because they show that even though two traits are perfectly correlated, typical levels of measurement error cut the observed correlation between their measures in half and the variance explained by 70%. The last column of entries is troubling because it shows that even when two constructs are completely uncorrelated, measurement error causes the observed correlation between their measures to be greater than zero. Indeed, some of these numbers are not very different from the effect sizes reported in the behavioral literature. In view of this, it is disturbing that most studies ignore measurement error entirely and that even many of the ones that do try to take random measurement error into account ignore systematic measurement error. Thus, measurement error can inflate or deflate the observed correlation between the measures, depending on the correlation between the methods. Indeed, as noted by Cote and Buckley (1988), method effects inflate the observed relationship when the correlation between the methods is higher than the observed correlation between the measures with method effects removed, and they deflate the relationship when the correlation between the methods is lower than the observed correlation between the measures with method effects removed.
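The first and last columns of the first row of Table 1 make this double-edged effect concrete; the short sketch below (again an illustration added here, not from the original article, applying Equation 1 with Cote and Buckley's 1987 estimates for attitude measures) reproduces both endpoint values.

```python
# Illustrative only: Equation 1 applied to a pair of attitude measures
# (trait variance .298, method variance .407, method correlation .556).
from math import sqrt

def observed_correlation(true_r_traits, t_x, t_y, true_r_methods, m_x, m_y):
    # Trait component plus shared-method component (Equation 1).
    return true_r_traits * sqrt(t_x * t_y) + true_r_methods * sqrt(m_x * m_y)

# Truly uncorrelated constructs still appear related (Type I risk).
print(round(observed_correlation(0.00, 0.298, 0.298, 0.556, 0.407, 0.407), 2))  # 0.23

# A perfect true correlation is roughly cut in half (Type II risk).
print(round(observed_correlation(1.00, 0.298, 0.298, 0.556, 0.407, 0.407), 2))  # 0.52
```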
Potential Sources of Common Method Biases

Because common method biases can have potentially serious effects on research findings, it is important to understand their sources and when they are especially likely to be a problem. Therefore, in the next sections of the article, we identify several of the most likely causes of method bias and the research settings in which they are likely to pose particular problems. As shown in Table 2, some sources of common method biases result from the fact that the predictor and criterion variables are obtained from the same source or rater, whereas others are produced by the measurement items themselves, the context of the items within the measurement instrument, and/or the context in which the measures are obtained.

Method Effects Produced by a Common Source or Rater

Some method effects result from the fact that the respondent providing the measure of the predictor and criterion variable is the same person. This type of self-report bias may be said to result from any artifactual covariance between the predictor and criterion variable produced by the fact that the respondent providing the measure of these variables is the same.

Consistency motif. There is a substantial amount of theory (cf. Heider, 1958; Osgood & Tannenbaum, 1955) and research (cf. McGuire, 1966) suggesting that people try to maintain consistency between their cognitions and attitudes. Thus, it should not be surprising that people responding to questions posed by researchers would have a desire to appear consistent and rational in their responses and might search for similarities in the questions asked of them, thereby producing relationships that would not otherwise exist at the same level in real-life settings. This tendency of respondents to try to maintain consistency in their responses to similar questions or to organize information in consistent ways is called the consistency motif (Johns, 1994; Podsakoff & Organ, 1986; Schmitt, 1994) or the consistency effect (Salancik & Pfeffer, 1977) and is likely to be particularly problematic in those situations in which respondents are asked to provide retrospective accounts of their attitudes, perceptions, and/or behaviors.

Implicit theories and illusory correlations. Related to the notion of the consistency motif as a potential source of common method variance are illusory correlations (cf. Berman & Kenny, 1976; Chapman & Chapman, 1967, 1969; Smither, Collins, & Buda, 1989) and implicit theories (cf. Lord, Binning, Rush, & Thomas, 1978; Phillips & Lord, 1986; Staw, 1975). Berman and Kenny (1976) have indicated that illusory correlations result from the fact that "raters often appear to possess assumptions concerning the co-occurrence of rated items, and these assumptions may introduce systematic distortions when correlations are derived from the ratings" (p. 264); Smither et al. (1989) have noted that these "illusory correlations may serve as the basis of job schema or implicit theories held by raters and thereby affect attention to and encoding of ratee behaviors as well as later recall" (p. 599). This suggests that correlations derived from ratees' responses are composed of not only true relationships but also artifactual covariation based on ratees' implicit theories.

Indeed, there is a substantial amount of evidence that implicit theories do have an effect on respondents' ratings in a variety of different domains, including ratings of leader behavior (e.g., Eden & Leviatan, 1975; Lord et al., 1978; Phillips & Lord, 1986), attributions of the causes of group performance (cf. Guzzo, Wagner, Maguire, Herr, & Hawley, 1986; Staw, 1975), and perceptions about the relationship between employee satisfaction and performance (Smither et al., 1989). Taken together, these findings indicate that the relationships researchers observe between predictor and criterion variables on a questionnaire may not only reflect the actual covariation that exists between these events but may also be the result of the implicit theories that respondents have regarding the relationship between these events.

Social desirability. According to Crowne and Marlowe (1964), social desirability "refers to the need for social approval and acceptance and the belief that it can be attained by means of culturally acceptable and appropriate behaviors" (p. 109). It is generally viewed as the tendency on the part of individuals to present themselves in a favorable light, regardless of their true feelings about an issue, or to