What are the benefits of using this selection method? 

Question Description

I’m working on a psychology discussion question and need an explanation and answer to help me learn.
What are the benefits of using this selection method?
How would you summarize some best practices for using this method in selection?

What controversy (if any) or challenges are associated with this selection method?
How would you feel if you were administered this assessment method for a job and were rejected on the basis of this method alone, particularly if it asked about something you did in your distant past? Would this change your view of the value of the assessment? If so, how?

Picardi, C. A. (2020). Recruitment and selection: Strategies for workforce planning & assessment. SAGE Publications, Inc. (US).
Roulin, N., Bangerter, A., & Levashina, J. (2015). Honest and deceptive impression management in the employment interview: Can it be detected and how does it impact evaluations? Personnel Psychology, 68(2), 395–444.
A Sample Answer For the Assignment: What are the benefits of using this selection method? 
Title: What are the benefits of using this selection method? 
Journal of Applied Psychology, 2012, Vol. 97, No. 3, 499–530. © 2012 American Psychological Association. DOI: 10.1037/a0021196

The Criterion-Related Validity of Integrity Tests: An Updated Meta-Analysis

Chad H. Van Iddekinge (Florida State University), Philip L. Roth (Clemson University), Patrick H. Raymark (Clemson University), and Heather N. Odle-Dusseau (Gettysburg College)

Abstract. Integrity tests have become a prominent predictor within the selection literature over the past few decades. However, some researchers have expressed concerns about the criterion-related validity evidence for such tests because of a perceived lack of methodological rigor within this literature, as well as a heavy reliance on unpublished data from test publishers. In response to these concerns, we meta-analyzed 104 studies (representing 134 independent samples), which were authored by a similar proportion of test publishers and non-publishers, whose conduct was consistent with professional standards for test validation, and whose results were relevant to the validity of integrity-specific scales for predicting individual work behavior. Overall mean observed validity estimates and validity estimates corrected for unreliability in the criterion (respectively) were .12 and .15 for job performance, .13 and .16 for training performance, .26 and .32 for counterproductive work behavior, and .07 and .09 for turnover. Although data on restriction of range were sparse, illustrative corrections for indirect range restriction did increase validities slightly (e.g., from .15 to .18 for job performance). Several variables appeared to moderate relations between integrity tests and the criteria. For example, corrected validities for job performance criteria were larger when based on studies authored by integrity test publishers (.27) than when based on studies from non-publishers (.12). In addition, corrected validities for counterproductive work behavior criteria were larger when based on self-reports (.42) than when based on other-reports (.11) or employee records (.15).

Keywords: integrity, honesty, personnel selection, test validity, counterproductive work behavior

In recent years, integrity tests have become a prominent predictor within the selection literature. Use of such tests is thought to offer several advantages for selection, including criterion-related validity for predicting a variety of criteria (Ones, Viswesvaran, & Schmidt, 1993) and small subgroup differences (Ones & Viswesvaran, 1998). Researchers also have estimated that across a range of selection procedures, integrity tests may provide the largest amount of incremental validity beyond cognitive ability tests (Schmidt & Hunter, 1998). Furthermore, relative to some types of selection procedures (e.g., structured interviews, work sample tests), integrity tests tend to be cost effective and easy to administer and score.

Several meta-analyses and quantitative-oriented reviews have provided the foundation for the generally favorable view of the criterion-related validity of integrity tests (e.g., J. Hogan & Hogan, 1989; Inwald, Hurwitz, & Kaufman, 1991; Kpo, 1984; McDaniel & Jones, 1988; Ones et al., 1993). Ones et al. (1993) conducted the most thorough and comprehensive review of the literature. Their meta-analysis revealed correlations (corrected for predictor range restriction and criterion unreliability) of .34 and .47 between integrity tests and measures of job performance and counterproductive work behavior (CWB), respectively. These researchers also found support for several moderators of integrity test validity. For instance, validity estimates for job performance criteria were somewhat larger in applicant samples than in incumbent samples. Several variables also appeared to moderate relations between integrity tests and CWB criteria, such that validity estimates were larger for overt tests, incumbent samples, concurrent designs, self-reported deviance, theft-related criteria, and high-complexity jobs. The work of Ones et al. is highly impressive in both scope and sophistication.
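The corrections mentioned in the abstract can be made concrete with a short worked example. The standard correction for unreliability in the criterion divides the observed validity by the square root of the criterion reliability; assuming, purely for illustration, a criterion reliability of $r_{yy} = .64$ for job performance ratings (a value not reported above):

$$\rho = \frac{r_{xy}}{\sqrt{r_{yy}}} = \frac{.12}{\sqrt{.64}} = .15$$

For range restriction, the simpler direct-restriction (Thorndike Case II) correction is

$$\rho_u = \frac{U\rho}{\sqrt{1 + \rho^2\,(U^2 - 1)}}, \qquad U = \frac{s_{\text{unrestricted}}}{s_{\text{restricted}}},$$

under which a modest ratio of $U \approx 1.2$ moves .15 to roughly .18, in line with the illustrative figure quoted in the abstract (the article itself applies the more involved indirect-range-restriction procedure).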
Despite these positive results, some researchers have been concerned that the majority of validity evidence for integrity tests comes from unpublished studies conducted by the firms who develop and market the tests (e.g., Camara & Schneider, 1994, 1995; Dalton & Metzger, 1993; Karren & Zacharias, 2007; Lilienfeld, 1993; McDaniel, Rothstein, & Whetzel, 2006; Morgeson et al., 2007; Sackett & Wanek, 1996). For example, conclusions from several reviews of particular integrity tests (e.g., J. Hogan & Hogan, 1989; Inwald et al., 1991), or of the broader integrity literature (e.g., Sackett, Burris, & Callahan, 1989), have been based primarily or solely on test-publisher-sponsored research. The same holds for meta-analytic investigations of integrity test criterion-related validity. For instance, only 10% of the studies in Ones et al.'s (1993) meta-analysis were published in professional journals (p. 696), and all the studies cumulated by McDaniel and Jones (1988) were authored by test publishers.
This situation has led to two main concerns. First, questions have been raised about the methodological quality of some of this unpublished test-publisher research. For instance, during the 1980s, when there was great interest within the integrity test industry in publishing its work, very few studies submitted to leading journals were accepted, because of their poor quality (Morgeson et al., 2007). Various methodological issues have been noted about these studies (e.g., Lilienfeld, 1993; McDaniel & Jones, 1988; Sackett et al., 1989), including an overreliance on self-report criterion measures, selective reporting of statistically significant results, and potentially problematic sampling techniques (e.g., use of "extreme groups"). Such issues have prompted some researchers to note that "gathering all of these low quality unpublished studies and conducting a meta-analysis does not erase their limitations. We have simply summarized a lot of low quality studies" (Morgeson et al., 2007, p. 707).

The second concern is that test publishers have a vested interest in the validity of their tests. As Michael Campion noted, "my concern is not the 'file drawer' problem (i.e., studies that are written but never published). I believe that non-supportive results were never even documented" (Morgeson et al., 2007, p. 707). Karren and Zacharias (2007) reached a similar conclusion in their review of the integrity literature, stating that "since it is in the self-interest of the test publishers not to provide negative evidence against their own tests, it is likely that the reported coefficients are an overestimate of the tests' validity" (p. 223).

Concerns over test-publisher-authored research in the integrity test literature resemble concerns over research conducted by for-profit organizations in the medical research literature. The main concern in that literature has been conflicts of interest that may occur when for-profit organizations (e.g., drug companies) conduct studies to test the efficacy of the drugs, treatments, or surgical techniques they produce. Several recent meta-analyses have addressed whether for-profit and non-profit studies produce different results (e.g., Bekelman, Li, & Gross, 2003; Bhandari et al., 2004; Kjaergard & Als-Nielsen, 2002; Ridker & Torres, 2006; Wahlbeck & Adams, 1999). The findings of this work consistently suggest that studies funded or conducted by for-profit organizations tend to report more favorable results than do studies funded or conducted by non-profit organizations (e.g., government agencies). Research of this type also may provide insights regarding validity evidence reported by researchers with and without vested interests in integrity tests.

Present Study

The aim of the current study was to reconsider the criterion-related validity of integrity tests, which we did in three main ways. First, questions have been raised about the lack of methodological rigor within the integrity test literature. This is of particular concern because several of the noted methodological issues are likely to result in inflated estimates of validity. These include design features, such as contrasted groups and extreme groups, and data-analysis features, such as stepwise multiple regression and the reporting of statistically significant results only.
We address these issues by carefully reviewing each primary study and then meta-analyzing only studies whose design, conduct, and analyses are consistent with professional standards for test validation (e.g., Society for Industrial and Organizational Psychology [SIOP], 2003). This approach is in line with calls for meta-analysts to devote greater thought to the primary studies included in their research (e.g., Berry, Sackett, & Landers, 2007; Bobko & Stone-Romero, 1998).

Second, the results of prior meta-analyses primarily are based on test-publisher research, and there are unanswered questions concerning potential conflicts of interest and the comparability of publisher and non-publisher research results (Sackett & Wanek, 1996). However, such concerns largely are based on anecdotal evidence rather than on empirical data. We address this issue by examining whether author affiliation (i.e., test publishers vs. non-publishers) moderates the validity of integrity tests.

Finally, almost 20 years have passed since Ones et al.'s (1993) comprehensive meta-analysis. We do not attempt to replicate this or other previous reviews, but rather to examine the validity evidence for integrity tests from a different perspective. For example, whereas prior reviews have incorporated primary studies that used a wide variety of samples, designs, and variables, our results are based on studies that met a somewhat more focused set of inclusion criteria (which we describe in the Method section). Further, in addition to job performance and CWB, we investigate relations between integrity tests and two criteria that to our knowledge have not yet been cumulated individually: training performance and turnover. We also investigate the potential role of several previously unexplored moderators, including author affiliation (i.e., test publishers vs. non-publishers), type of job performance (i.e., task vs. contextual performance), and type of turnover (i.e., voluntary vs. involuntary). Finally, we incorporate results of integrity test research that has been conducted since the early 1990s.

We believe the results of the present research have important implications for research and practice. From a practice perspective, practitioners may use meta-analytic findings to guide their decisions about which selection procedures—among the wide variety of procedures that exist—to use or to recommend to managers and clients. Accurate meta-analytic evidence may be particularly important for practitioners who are unable to conduct local validation studies (e.g., due to limited resources, small sample jobs, or lack of good criterion measures) and, thus, may rely more heavily on cumulative research to identify, and help justify the use of, selection procedures than practitioners who do not have such constraints. For instance, if meta-analytic evidence suggests a selection procedure has lower criterion-related validity than actually is the case, then practitioners may neglect a procedure that could be effective and, in turn, end up with a less optimal selection system (Schmidt, Hunter, McKenzie, & Muldrow, 1979).
On the other hand, if meta-analytic evidence suggests a selection procedure has higher criterion-related validity than actually is the case, this could lead practitioners to incorporate the procedure into their selection systems. This, in turn, could diminish the organization's ability to identify high-potential employees and possibly jeopardize the defensibility of decisions made on the basis of the selection process.

Professional organizations devoted to personnel selection and human resources management also use meta-analytic findings as a basis for the assessment and selection information they provide their membership and the general public. For example, materials from organizations such as SIOP and the U.S. Office of Personnel Management (OPM) describe various selection procedures with respect to factors such as validity, subgroup differences, applicant reactions, and cost. Both SIOP and OPM indicate criterion-related validity as a key benefit of integrity tests. For instance, OPM's Personnel Assessment and Selection Resource Center website states that "integrity tests have been shown to be valid predictors of overall job performance as well as many counterproductive behaviors . . . The use of integrity tests in combination with cognitive ability can substantially enhance the prediction of overall job performance" (http://apps.opm.gov/ADT).

Meta-analytic evidence also can play an important role in legal cases involving employee selection and promotion. For instance, in addition to the use of meta-analyses to identify and defend the use of selection procedures, expert witnesses may rely heavily on meta-analytic findings when testifying about what is known from the scientific literature concerning a particular selection procedure.

Lastly, a clear understanding of integrity test validity has implications for selection research. For one, results of meta-analyses can influence the direction of future primary studies in a particular area. As McDaniel et al. (2006, p. 947) noted, "meta-analytic studies have a substantial impact as judged by citation rates, and researchers and practitioners often rely on meta-analytic results as the final word on research questions"; meta-analysis may "suppress new research in an area if there is a perception that the meta-analysis has largely settled all the research questions." Meta-analysis also can highlight issues that remain unresolved and thereby influence the agenda for future research. Second, meta-analytic values frequently are used as input for other studies. For example, criterion-related validity estimates from integrity meta-analyses (e.g., Ones et al., 1993) have been used in meta-analytic correlation matrices to estimate incremental validity beyond cognitive ability tests (e.g., Schmidt & Hunter, 1998) and in simulation studies to examine the predicted performance or adverse impact associated with different selection procedures (e.g., Finch, Edwards, & Wallace, 2009). Thus, the validity of inferences drawn from the results of such studies hinges, in part, on the accuracy of the meta-analytic values that serve as input for analysis.

In sum, results of the present meta-analysis address questions and concerns about integrity tests that have been debated for years but until now have not been systematically investigated. This study also incorporates the results of almost 20 years of additional integrity test data that have not been cumulated. We believe the end result is a better understanding of integrity test validity, which is vital to both practitioners and researchers involved in personnel selection.
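The incremental-validity use of such matrices can be illustrated with the two-predictor multiple correlation. Using the round figures commonly quoted from Schmidt and Hunter (1998), roughly .51 for cognitive ability, .41 for integrity tests, and a near-zero correlation between the two predictors (values quoted for illustration, not taken from the article excerpted here):

$$R^{2} = \frac{r_{1y}^{2} + r_{2y}^{2} - 2\,r_{1y}\,r_{2y}\,r_{12}}{1 - r_{12}^{2}} \approx .51^{2} + .41^{2} \approx .43, \qquad R \approx .65,$$

an estimated gain of about $.65 - .51 = .14$ over cognitive ability alone. This is why the accuracy of the integrity-test validity estimate that enters such a matrix matters: an inflated input propagates directly into the estimated gain.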
Before we describe the method of our study, we discuss the basis for the potential moderator variables we examine.

Potential Moderators of Integrity Test Validity

Type of Integrity Test

The first potential moderator we examine is type of integrity test. Integrity tests can be either overt or personality-based (Sackett et al., 1989). Overt or "clear-purpose" tests ask respondents directly about integrity-related attitudes and past dishonest behaviors. Conversely, personality-based or "disguised-purpose" tests are designed to measure a broader range of constructs thought to be precursors of dishonesty, including social conformity, impulse control, risk-taking, and trouble with authority (Wanek, Sackett, & Ones, 2003).

Two theoretical perspectives provide a basis for expecting test type to moderate relations between test scores and CWB criteria. According to the theory of planned behavior (Ajzen, 1991; Ajzen & Fishbein, 2005), the most immediate precursor of behavior is one's intention to engage in the behavior. This theory also specifies three main determinants of intentions: attitudes toward the behavior, subjective norms regarding the behavior, and perceived control over engaging in the behavior. The second perspective is the theory of behavioral consistency (Wernimont & Campbell, 1968), which is based on the premise that past behavior is a good predictor of future behavior. More specifically, the more a predictor measure samples behaviors that are reflected in the criterion measure, the stronger the relationship between the two measures should be.

Most overt integrity tests focus on measuring attitudes, intentions, and past behaviors related to dishonesty. For example, such tests ask respondents to indicate their views about dishonesty, such as their acceptance of common rationalizations for dishonest behavior (i.e., attitudes), their perceptions regarding the ease of behaviors such as theft (i.e., perceived control), and their beliefs about the prevalence of dishonesty (i.e., subjective norms) and how wrongdoers should be punished (Wanek et al., 2003). Further, many overt tests ask respondents to report past dishonest behaviors, such as overcharging customers and stealing cash or merchandise (i.e., behavioral consistency). Thus, on the basis of the theories of planned behavior and behavioral consistency, people who have more positive attitudes toward dishonesty, who believe that most people are somewhat dishonest, and who have engaged in dishonest behaviors in the past should be more likely to behave dishonestly in the future.

In contrast, personality-based integrity tests primarily focus on personality-related traits, such as social conformity and risk-taking. Although potentially relevant to CWB, such traits are more distal to actual behavior than are the attitudes, intentions, and behaviors on which overt tests tend to focus. This leads to our first hypothesis:

Hypothesis 1: There will be a stronger relationship between overt integrity tests and CWB than between personality-based integrity tests and CWB.
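To make the subgroup logic behind this hypothesis concrete, here is a minimal sketch (in Python) of how a moderator comparison works in a validity meta-analysis: pool the study correlations within each test type, weighting by sample size, then compare the pooled values. All study values below are invented for illustration; this is not the authors' data or their actual procedure.

# Minimal sketch of a sample-size-weighted subgroup comparison,
# in the spirit of the overt vs. personality-based contrast in
# Hypothesis 1. The (r, n) study values are invented.

def weighted_mean_r(studies):
    """Sample-size-weighted mean correlation across (r, n) pairs."""
    total_n = sum(n for _, n in studies)
    return sum(r * n for r, n in studies) / total_n

# Hypothetical primary studies: (observed validity, sample size)
overt = [(0.30, 120), (0.25, 310), (0.35, 95)]
personality_based = [(0.15, 150), (0.12, 400), (0.20, 80)]

# Illustrative correction for unreliability in the criterion,
# assuming a criterion reliability of .60 for the CWB measure.
R_YY = 0.60
for label, studies in [("overt", overt), ("personality-based", personality_based)]:
    r_bar = weighted_mean_r(studies)
    corrected = r_bar / R_YY ** 0.5
    print(f"{label}: mean observed r = {r_bar:.2f}, corrected = {corrected:.2f}")

If the pooled (and corrected) values differ meaningfully across the two subgroups, test type is said to moderate validity. The article's actual analyses rely on psychometric meta-analysis methods (e.g., Hunter–Schmidt artifact corrections) rather than this bare-bones version.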
We also investigate whether test type moderates relations between integrity tests and measures of job performance.[1] Scores on overt tests may relate to performance because supervisors and peers consider CWB (which such tests were designed to predict) when forming an overall evaluation of an employee's performance (Rotundo & Sackett, 2002). Scores on personality-based tests may relate to performance because some of the traits these tests measure are relevant to performance in certain types of jobs. For example, some personality-based tests assess elements of conscientiousness, such as rule abidance, orderliness, and achievement orientation (Wanek et al., 2003). However, we are not aware of a compelling theoretical basis to predict that either type of test will be strongly related to job performance (particularly to task-related performance), or to predict that one test will be a better predictor of performance than will the other. Thus, we explore test type as a potential moderator of validity with respect to performance criteria.

Research Question 1: Does type of integrity test (overt vs. personality-based) moderate relations between test scores and job performance?

Study Design and Sample

The next two potential moderators we examine are study design (i.e., predictive vs. concurrent) and study sample (i.e., applicants vs. incumbents), which typically are concomitant within the selection literature (Van Iddekinge & Ployhart, 2008). We expect to find higher validity estimates in concurrent designs than in predictive designs because in concurrent studies, respondents complete an integrity test and a self-report CWB measure at the same time. As a result, relations between scores on the two measures are susceptible to common method factors, such as transient mood state and measurement context effects (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003). In contrast, predictive designs are less susceptible to such influences, because completion of the integrity test and the CWB measure are separated by time, context, and so forth.

Another reason why we expect to find larger validity estimates in concurrent designs concerns the potential for the predictor and criterion in these studies to assess the same behavioral events. For example, many integrity tests (particularly overt tests but also some personality-based tests) ask respondents to report dishonest or counterproductive behaviors they have displayed recently at work. In a concurrent design, participants are then asked to complete a self-report measure of work-related CWB. Thus, the two measures may ask the respondent about the same types of behaviors but using different questions. In fact, some have suggested that correlations between overt integrity tests and self-reported CWB are more like alternate-form or test–retest reliability estimates than like criterion-related validity estimates (e.g., Morgeson et al., 2007; Sackett & Wanek, 1996).

This same logic also may apply to other criteria used to validate integrity tests, such as employee records of CWB and ratings of job performance. If an integrity test asks respondents to report CWB they recently demonstrated, and then test scores are related to employee records that reflect the same instances of this CWB (e.g., of theft, absenteeism, insubordination), then relations between test scores and employee records may be stronger than if the two measures were separated in time (and thus assessed different instances of behavior).
Similarly, supervisors may be asked to evaluate employees' performance over the past 6 months or a year, and although these ratings may focus primarily on productive behaviors, they may (explicitly or implicitly) capture counterproductive behaviors as well. This, in turn, may result in stronger relations between integrity test scores and performance ratings than if test scores reflected employees' pre-hire attitudes and behaviors.

Hypothesis 2: Criterion-related validity estimates for integrity tests will be larger in concurrent designs than in predictive designs.

We also expect to find higher validity estimates in incumbent samples than in applicant samples. Although the debate continues concerning the prevalence and effects of applicant response distortion on personality-oriented selection procedures (e.g., Morgeson et al., 2007; Ones, Dilchert, Viswesvaran, & Judge, 2007; Tett & Christiansen, 2007), meta-analytic research suggests that integrity tests, particularly overt tests, are susceptible to faking and coaching (e.g., Alliger & Dwight, 2000). Thus, to the extent that faking is more prevalent among applicants than among incumbents, lower criterion-related validities may be found in applicant samples.

Finally, a finding of stronger validity evidence for concurrent designs and incumbent samples would be consistent with the results of primary and meta-analytic studies that have examined the moderating effects of validation design or sample on other selection procedures, including personality tests (e.g., Hough, 1998), biodata inventories (e.g., Harold, McFarland, & Weekley, 2006), situational judgment tests (e.g., McDaniel, Morgeson, Finnegan, Campion, & Braverman, 2001), and employment interviews (e.g., Huffcutt, Conway, Roth, & Klehe, 2004).

Hypothesis 3: Criterion-related validity estimates for integrity tests will be larger in incumbent samples than in applicant samples.

Performance Construct

In recent years, researchers have devoted increased attention to understanding the criteria used to validate selection procedures. One important trend in this area concerns the identification and testing of multidimensional models of job performance (Campbell, McCloy, Oppler, & Sager, 1993). One model that has received support partitions the performance domain into three broad dimensions: task performance, contextual or citizenship performance, and counterproductive performance or CWB (e.g., Rotundo & Sackett, 2002).[2] Task performance involves behaviors that are a formal part of one's job and that contribute directly to the products or services an organization provides.
Contextual performance involves behaviors that support the organizational, social, and psychological context in which task behaviors are performed. Examples of citizenship behaviors include volunteering to complete tasks not formally part of one's job, persisting with extra effort and enthusiasm, helping and cooperating with coworkers, following company rules and procedures, and supporting and defending the organization (Borman & Motowidlo, 1993). Finally, counterproductive performance (i.e., CWB) reflects voluntary actions that violate organizational norms and threaten the well-being of the organization and/or its members (Robinson & Bennett, 1995; Sackett & Devore, 2001). Researchers have identified various types of CWB, including theft, property destruction, unsafe behavior, poor attendance, and intentional poor performance.

We expect both overt and personality-based tests will relate more strongly to CWB than to productive work behaviors that reflect task or contextual performance. Integrity tests primarily are designed to predict CWB, and as we noted, some integrity tests and CWB measures even include the same or highly similar items concerning past or current CWB. We also note that researchers have tended to measure CWB using self-reports, whereas productive work behaviors often are measured using supervisor or peer ratings. Thus, common method variance also may contribute to stronger relations between integrity tests and CWB than between integrity tests and productive work behaviors.

Hypothesis 4: Criterion-related validity estimates for integrity tests will be larger for CWB than for productive work behaviors that reflect task and contextual performance.

We also explore whether integrity tests relate differently to task performance versus contextual performance. A common belief among researchers is that ability-related constructs (e.g., cognitive ability) tend to be better predictors of task performance, whereas personality-related constructs (e.g., conscientiousness) tend to be better predictors of contextual performance (e.g., Hattrup, O'Connell, & Wingate, 1998; LePine & Van Dyne, 2001; Van Scotter & Motowidlo, 1996). If true, then integrity tests—which are thought to capture personality traits, such as conscientiousness, emotional stability, and agreeableness (Ones & Viswesvaran, 2001)—may demonstrate stronger relations with contextual performance than with task performance. However, some studies have found that personality constructs do not demonstrate notably stronger relationships with contextual behaviors than with task behaviors (e.g., Allworth & Hesketh, 1999; Hurtz & Donovan, 2000; Johnson, 2001). One possible contributing factor to this finding is that measures of task and contextual performance tend to be highly correlated (e.g., Hoffman, Blair, Meriac, & Woehr, 2007), which may make it difficult to detect differential relations between predictors and these two types of performance. Thus, although a theoretical rationale exists to expect that integrity tests will relate more strongly to contextual performance than to task performance, we might not necessarily find strong empirical support for this proposition.

Research Question 2: Does job performance construct (task performance vs. contextual performance) moderate the criterion-related validity of integrity tests?

Footnotes:
[1] As we discuss later, CWB can be considered an aspect of job performance (e.g., Rotundo & Sackett, 2002). However, we use job performance to refer to "productive" performance behaviors (i.e., task and contextual behaviors) and CWB to refer to counterproductive behaviors.
[2] Some models also include adaptive performance, which concerns the proficiency with which individuals alter their behavior to meet the demands of the work environment (Pulakos, Arad, Donovan, & Plamondon, 2000). However, relations between integrity tests and adaptive performance have not been widely examined, and thus we do not consider this performance construct here.

Breadth and Source of CWB Criteria

Researchers have used various types of CWB measures to validate integrity tests. One factor that differentiates CWB measures is the "breadth" of their content.
Some measures are broad in scope and assess multiple types of
