Malcolm Ree - Academia.edu

Papers by Malcolm Ree

P020 A Path Analysis Model Examining Perception of Healthy Body Weight, Dieting Behaviors and Body Mass Index in Hispanic Families

Journal of Nutrition Education and Behavior

Eight issues in studying organizational constructs: Tackling a complex problem

International Journal of Selection and Assessment, 2018

Organizational research uses abstractions called constructs. The purpose of the current paper is to consolidate much of what is known about the development and testing of constructs. Although much of the information presented is found in multiple journal articles and textbooks, we offer a condensed introductory compendium. Constructs are variables that are not directly observable, and their creation is part of theory development. Theories must be testable to have value. There must be observable indicators of the constructs. This requires operationalization, which is the process of defining a construct to make it measurable, interpretable in terms of observation, and clearly distinguishable from other constructs (Messick, 1989, 1995). Tests, scales, and other measures capture constructs. Unfortunately, sometimes both the theoretical and operational definitions of constructs vary among practitioners and researchers. Constructs provide common understanding through common language. When a construct means various things to different people, clarity of understanding is lost, as is the ability to accumulate knowledge. This lack of clarity of communication makes interpretation of results difficult. Varying definitions lead to confusion about the existence of and relationships among constructs. For the practitioner and researcher, these varying labels and definitions cause actions that are less than optimal and wasteful. This proliferation of labels of constructs leads to confusion about the number and nature of constructs. This paper describes best practices to avoid problems and confusion about organizational constructs.

2 | CONSTRUCT VALIDITY

Cronbach and Meehl (1955) proposed the establishment of nomological networks for constructs. The network should include a theoretical framework for a measure and an empirical framework to operationalize it. Tests of the empirical and theoretical relations among and between these two frameworks must be developed. Binning and Barrett (1989) unified the concepts of construct, content, and criterion-related validity. They portrayed validation as the process of gathering empirical and judgmental evidence (see Cronbach & Meehl, 1955) to support inferences linking psychological constructs and operational measures of them. Messick (1989) proposed a theory of construct validity with six components: consequential, content, substantive, structural, external, and generalizability. Consequential validity assesses risks if scores were invalid or misinterpreted. Content validity focuses on whether the content measures the construct. Substantive validity ...

Lack of Ability Is Not Always the Reason for High Attrition


Tests of Cognitive Ability

... The goal of job analysis (Cascio, 1991; Gael, 1988; McCormick, 1976, 1979) is to establish ... administration procedures/instructions, trying out items, analyzing item-level data, assembling the test, and preparing test documentation (Childs, Baughman, & Keil, 1997). ...

Joint Analysis of Two Ability Tests: Two Theories, One Outcome

Using Government drawings, specifications, or other data included in this document for any purpose other than Government procurement does not in any way obligate the U.S. Government. The fact that the Government formulated or supplied the drawings, specifications, or other data does not license the holder or any other person or corporation; or convey any rights or permission to manufacture, use, or sell any patented invention that may relate to them. Qualified requestors may obtain copies of this report from the Defense Technical Information Center (DTIC) (http://www.dtic.mil).

Joint Use of the MAB-II and MicroCog for Improvements in the Clinical and Neuropsychological Screening and Aeromedical Waiver Process of Rated USAF Pilots

Intelligence testing and neuropsychological screening are critical to USAF medical flight screening and aeromedical waiver procedures for pilots. A confirmatory factor analysis was conducted on the intelligence test scores from the Multidimensional Aptitude Battery (MAB-II) and neuropsychological screening from the five first-level indices of the MicroCog. The sample consisted of 10,612 USAF pilot applicants selected for training. Subtest correlations and measurement models were estimated for both tests. A joint factor analysis using the measurement models of the MAB-II and MicroCog showed positive correlations between factors identified for the two tests and between subtests measuring specific cognitive aptitudes. Similarities and differences among scores are discussed. Normative tables demonstrate that cognitive aptitudes for USAF pilot applicants are in the high average to superior range when compared to the general population. The normative tables improve interpretation of scores for USAF pilots, and an aeromedical clinical case vignette is provided to illustrate practical applications of the study. Subject terms: intelligence testing, neuropsychological screening, USAF pilot, medical flight screening, aeromedical evaluations.
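As a rough illustration of the joint analysis described above, the sketch below correlates subtests across the two batteries and extracts common factors. It uses scikit-learn's FactorAnalysis as an exploratory stand-in for the confirmatory models actually fitted in the report, and the column names for the MAB-II and MicroCog subtests are hypothetical.

```python
# Exploratory sketch of a joint factor analysis of two test batteries.
# Column names are hypothetical; the report itself fits confirmatory models.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

mab_cols = ["mab_info", "mab_arith", "mab_vocab"]               # hypothetical
microcog_cols = ["mc_attention", "mc_memory", "mc_reasoning"]   # hypothetical

def joint_factor_sketch(scores: pd.DataFrame, n_factors: int = 2):
    """Correlate subtests across batteries and extract joint factors."""
    cols = mab_cols + microcog_cols
    x = scores[cols].dropna()
    # Cross-battery subtest correlations (analogous to the report's first step).
    corr = x.corr()
    # Joint factor extraction on standardized scores.
    z = (x - x.mean()) / x.std(ddof=0)
    fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(z)
    loadings = pd.DataFrame(fa.components_.T, index=cols,
                            columns=[f"F{i + 1}" for i in range(n_factors)])
    return corr, loadings
```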

Pilot Cognitive Functioning and Training Outcomes

The predictive validity of scores from two cognitive functioning tests, the Multidimensional Aptitude Battery and MicroCog, was examined for initial pilot training outcomes. In addition to training completion, academic grades, daily flying grades, check ride grades, and class rank were available for graduates. Mean score comparisons and correlations in samples of between 5,582 and 12,924 trainees across the two tests showed small, but statistically significant, relationships with training performance. The results pointed to general cognitive ability as the main predictor of training performance. Comparisons with results from studies involving US Air Force pilot aptitude tests showed lower validities for these cognitive functioning tests. This finding likely occurred because the pilot aptitude tests measure additional factors (e.g., aviation knowledge/experience, psychomotor) that are predictive of training success, but not measured by these cognitive functioning tests, which were designed primarily to be used for clinical assessment. Measures of cognitive ability have been a mainstay in military pilot aptitude batteries since WWI (Carretta & Ree, 2003). Although the specific content and administration mode vary, cognitive ability has shown a consistent relation with pilot performance (Carretta & Ree, 2003; Hunter & Burke, 1995; Martinussen, 1996). More recently, Paullin, Katz, Bruskiewicz, Houston, and Damos (2006) conducted a comprehensive review of aviation testing and selection for the US Army that included both cognitive and personality tests. They recommended the US Army follow the lead of the US Navy and US Air Force in their use of selection tests and focus on measures of intelligence, cognitive ability, and information processing. Howse and Damos (2011) updated that work with a comprehensive, 275-page annotated bibliography published through the Air Force Personnel Center. These reviews and other studies (Olea & Ree, 1995; Ree & Carretta, 1996; Zierke, 2012) have shown intelligence and cognitive ability to be crucial to pilot training performance. Additional predictors include aviation knowledge/experience,
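A minimal sketch of the validity computations this abstract summarizes: correlating one predictor with each available training criterion. The DataFrame and column names are hypothetical; with pass/fail coded 0/1, the Pearson correlation with the score is the point-biserial validity.

```python
# Sketch: observed validity of a test score against several training criteria.
import pandas as pd

# Hypothetical criterion column names; "completed" is coded 0/1.
criteria = ["completed", "academic_grade", "daily_flying_grade",
            "checkride_grade", "class_rank"]

def observed_validities(df: pd.DataFrame, score_col: str = "g_score") -> pd.Series:
    """Pearson correlation of one predictor with each training criterion."""
    return df[criteria].corrwith(df[score_col])
```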

Development of statistically parallel tests by analysis of unique item variance

PsycEXTRA Dataset


Armed Services Vocational Aptitude Battery: Correcting the speeded subtests for the 1980 youth population

PsycEXTRA Dataset

In the late 1970s, the Department of Defense requested that the reference population for the Armed Services Vocational Aptitude Battery (ASVAB) be changed and updated to reflect the current youth population. Analyses of new data collected in 1980 indicated that speeded subtest scores of the new sample were atypically low and that the sample might therefore be inappropriate for use as a reference. A preliminary investigation traced the problem to a nonoperational answer sheet used for data collection of the 1980 youth sample. The present project was designed as a large-scale test of the differences between these nonoperational answer sheets and the operational answer sheets. Data were collected on the two speeded subtests from about 9,500 service applicants at Military Entrance Processing Stations; half used the operational answer sheets and half used the nonoperational answer sheets that were employed in the youth sample. Scores from nonoperational answer sheets were then equated to scores from the operational answer sheets. Adjustments based on equatings between the operational and nonoperational answer sheets were found to resolve the observed anomalies in the 1980 sample. Results indicated that the 1980 sample of American youth could be used appropriately with these adjustments. (Author/LMO)
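The equating step described above can be sketched as equipercentile equating: each score point on the nonoperational answer sheet is mapped to the operational-form score with the same percentile rank. This is a simplified, unsmoothed version under assumed inputs, not the exact procedure used in the report.

```python
# Sketch of equipercentile equating between two answer-sheet conditions.
import numpy as np

def equipercentile_equate(nonop_scores: np.ndarray,
                          op_scores: np.ndarray,
                          score_points: np.ndarray) -> np.ndarray:
    """For each score point on the nonoperational form, return the
    operational-form score with the same percentile rank."""
    # Percentile rank of each score point within the nonoperational group.
    ranks = np.array([np.mean(nonop_scores <= s) for s in score_points])
    # Invert the operational-form distribution at those percentile ranks.
    return np.quantile(op_scores, ranks)
```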

Enlistment Screening Test Forms 81a and 81b: Development and calibration

PsycEXTRA Dataset

Abstract: This study describes the development and norming of two parallel Enlistment Screening Test (EST) forms for use by military recruiters in predicting applicant success on Armed Services Vocational Aptitude Battery (ASVAB) selection composites. The EST was developed to reduce enlistment processing costs for transportation and boarding of service applicants. By administering this test at local recruiting stations, those applicants who would most likely meet service mental qualification standards could be identified and sent to centralized testing stations. Data on EST items were collected from a sample of applicants for military enlistment at geographically dispersed recruiting stations. Item analysis was used in item selection for operational forms. Descriptive statistics including correlations and frequency distributions of EST with ASVAB selection composites were computed. The EST was calibrated (or equated) to the Armed Forces Qualification Test (AFQT) through the method of equivalent percentiles. The ESTs appear to be highly reliable instruments, discriminating well throughout a range which includes major service selection cutoff points. The two EST forms appear parallel based on highly similar distributions of item difficulty and criterion correlation values. EST scores predict AFQT percentiles quite well (r = .83). In addition, EST content is similar to that of the AFQT. (Author)
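The item analysis mentioned above can be illustrated with classical item statistics: proportion correct (difficulty) and a corrected item-total, point-biserial correlation (discrimination). The sketch assumes a 0/1 scored response matrix; it is illustrative only, not the exact screening rules used for the operational forms.

```python
# Sketch of classical item analysis for screening candidate items.
import pandas as pd

def item_analysis(responses: pd.DataFrame) -> pd.DataFrame:
    """Return per-item difficulty and corrected item-total correlation
    for a respondents-by-items matrix of 0/1 scored responses."""
    total = responses.sum(axis=1)
    stats = []
    for item in responses.columns:
        rest = total - responses[item]          # total score without this item
        difficulty = responses[item].mean()     # proportion answering correctly
        discrimination = responses[item].corr(rest)
        stats.append((item, difficulty, discrimination))
    return pd.DataFrame(stats, columns=["item", "difficulty", "discrimination"])
```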

The g factor: the science of mental ability

Choice Reviews Online, 1998

Pilot Personality and Training Outcomes

This report is published in the interest of scientific and technical information exchange, and its publication does not constitute the Government's approval or disapproval of its ideas or findings.

Tool for Studying the Effects of Range Restriction in Correlation Coefficient Estimation


Subtest and Composite Validity of ASVAB Forms 11, 12, and 13 for Technical Training Courses

The validity of the Armed Services Vocational Aptitude Battery (ASVAB) subtests and composites for predicting final school grades in 150 technical schools was investigated. After correction for restriction of range and predictor unreliability, Paragraph Comprehension was found to be the most valid subtest (average r = .77) across all the schools. Within the traditional classification categories of Mechanical, Administrative, General, and Electronics (M, A, G, & E), Arithmetic Reasoning was found to be the most valid subtest after correction for range restriction. Except for the Electronics composite, the specific composite (M, A, G, & E) used for classification was not as valid as the Armed Forces Qualification Test (AFQT) nor the sum of the four Air Force composites, both of which are measures of psychometric g, general cognitive ability. The Administrative composite was less valid under all circumstances than the three other composites, the AFQT, or the sum of M, A, G, & E. Best-weighted-regression-based composites were slightly more predictive than the sum of M, A, G, & E, but at the expense of penalizing good test performance through the use of negative weights. A selection and classification system based on either best-regression-weighted subtests or on the E composite and the AFQT would increase validity.
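The corrections named in this abstract can be sketched with standard formulas: Thorndike's Case II correction for direct range restriction on the predictor, and disattenuation for predictor unreliability. Treating the corrections univariately and applying them in this order are simplifying assumptions, and the numbers in the example are made up.

```python
# Sketch of the two corrections: range restriction, then predictor unreliability.
import math

def correct_range_restriction(r: float, sd_unrestricted: float,
                              sd_restricted: float) -> float:
    """Thorndike Case II: r_c = r*u / sqrt(1 - r^2 + r^2 * u^2), with u = S/s."""
    u = sd_unrestricted / sd_restricted
    return r * u / math.sqrt(1 - r ** 2 + (r ** 2) * (u ** 2))

def correct_predictor_unreliability(r: float, rxx: float) -> float:
    """Disattenuate for predictor unreliability: r / sqrt(r_xx)."""
    return r / math.sqrt(rxx)

# Illustrative values only: an observed validity of .45 in a selected sample.
r_corrected = correct_predictor_unreliability(
    correct_range_restriction(0.45, sd_unrestricted=10.0, sd_restricted=6.0),
    rxx=0.85)
```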

The NEO PI-R as a Premorbid Baseline Measure

This report is published in the interest of scientific and technical information exchange, and its publication does not constitute the Government's approval or disapproval of its ideas or findings.

Interchangeability of Verbal and Quantitative Scores for Personnel Selection: An Example

... personnel. Work Unit Monitor was Dr. Thomas R. Carretta. We thank N. Dorans and H. Wainer for their help in this effort. ... composite. These were adjusted to the normative reliability using the procedure given by Gulliksen (1950). ...

Armed Services Vocational Aptitude Battery: Item and factor analyses of Forms 8, 9, and 10

PsycEXTRA Dataset


The Jackknife: Its Application to Test Equating

Abstract: Many tests used by the Armed Services are revised frequently to update content and to reduce compromise. A major psychometric concern during revision is the necessity of deriving scores on the new test which are comparable to those on the old test. This score conversion permits the direct comparison of the scores of current examinees with those of past examinees and permits the retention of past decision score points because of consistency of meaning over time. (Author)
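A minimal sketch of how the jackknife yields a standard error for an equating result: recompute a simple mean-sigma linear equating with each examinee deleted in turn, then combine the replicates. The single-group (paired scores) design and the mean-sigma method are assumptions for illustration; the report's equating design may differ.

```python
# Sketch: jackknife standard error of a linear equating conversion.
import numpy as np

def linear_equate(new_scores: np.ndarray, old_scores: np.ndarray, x: float) -> float:
    """Mean-sigma linear conversion of new-form score x to the old-form scale."""
    slope = np.std(old_scores, ddof=1) / np.std(new_scores, ddof=1)
    return slope * (x - np.mean(new_scores)) + np.mean(old_scores)

def jackknife_se(new_scores: np.ndarray, old_scores: np.ndarray, x: float) -> float:
    """Jackknife standard error of the equated score at score point x."""
    n = len(new_scores)
    reps = np.array([linear_equate(np.delete(new_scores, i),
                                   np.delete(old_scores, i), x)
                     for i in range(n)])
    return float(np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2)))
```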

Consistency of Pilot Trainee Cognitive Ability, Personality, and Training Performance in Undergraduate Pilot Training

This report is published in the interest of scientific and technical information exchange, and its publication does not constitute the Government's approval or disapproval of its ideas or findings.

Item Characteristic Curve Parameters: Effects of Sample Size on Linear Equating
