CTS Tools for Faculty and Student Assessment
A number of critical thinking skills inventories and measures have been developed:
Watson-Glaser Critical Thinking Appraisal (WGCTA)
Cornell Critical Thinking Test
California Critical Thinking Disposition Inventory (CCTDI)
California Critical Thinking Skills Test (CCTST)
Health Science Reasoning Test (HSRT)
Professional Judgment Rating Form (PJRF)
Teaching for Thinking Student Course Evaluation Form
Holistic Critical Thinking Scoring Rubric
Peer Evaluation of Group Presentation Form
Except for the Watson-Glaser Critical Thinking Appraisal and the Cornell Critical Thinking Test, the instruments listed above were developed by Facione and Facione. It is important to point out, however, that all of these measures are of questionable utility for dental educators because their content is general rather than specific to dental education. (See Critical Thinking and Assessment.)
Table 7. Purposes of Critical Thinking Skills Instruments
Test Name | Purpose |
Watson-Glaser Critical Thinking Appraisal-FS (WGCTA-FS) | Assesses participants' skills in five subscales: inference, recognition of assumptions, deduction, interpretation, and evaluation of arguments. |
Cornell Critical Thinking Test (CCTT) | Measures test takers' skills in induction, credibility, prediction and experimental planning, fallacies, and deduction. |
California Critical Thinking Disposition Inventory (CCTDI) | Assesses test takers' consistent internal motivations to engage in critical thinking skills. |
California Critical Thinking Skills Test (CCTST) | Provides objective measures of participants' skills in six subscales (analysis, inference, explanation, interpretation, self-regulation, and evaluation) and an overall score for critical thinking. |
The Health Science Reasoning Test (HSRT) | Assesses critical thinking skills of health science professionals and students. Measures analysis, evaluation, inference, and inductive and deductive reasoning. |
Professional Judgment Rating Form (PJRF) | Measures extent to which novices approach problems with CTS. Can be used to assess effectiveness of training programs for individual or group evaluation. |
Teaching for Thinking Student Course Evaluation Form | Used by students to rate the perceived critical thinking skills content in secondary and postsecondary classroom experiences. |
Holistic Critical Thinking Scoring Rubric | Used by professors and students to rate learning outcomes or presentations on critical thinking skills and dispositions. The rubric can capture the type of target behaviors, qualities, or products that professors are interested in evaluating. |
Peer Evaluation of Group Presentation Form | A common set of criteria used by peers and the instructor to evaluate student-led group presentations. |
Reliability and Validity
Reliability means that individual scores from an instrument should be the same or nearly the same from one administration of the instrument to another and that the instrument can be assumed to be free of bias and measurement error (68). Alpha coefficients are often used to report an estimate of internal consistency. Coefficients of .70 or higher indicate that an instrument has high reliability when the stakes are moderate; coefficients of .80 or higher are appropriate when the stakes are high.
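To make the alpha benchmark concrete, the following is a minimal sketch of how an internal-consistency estimate (Cronbach's alpha) can be computed from item-level data. The function name and the response data are hypothetical and are not drawn from any instrument described here; NumPy is assumed to be available.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Estimate internal consistency (Cronbach's alpha).

    items: 2-D array with one row per respondent and one column per item.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert-type responses: 5 respondents x 4 items.
responses = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
])
# Compare the estimate against the .70 (moderate stakes) and .80 (high stakes) benchmarks.
print(f"alpha = {cronbach_alpha(responses):.2f}")
```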
Validity means that individual scores from a particular instrument are meaningful, make sense, and allow researchers to draw conclusions from the sample to the population being studied (69). Researchers often refer to "content" or "face" validity: the extent to which questions on an instrument are representative of the possible questions that a researcher could ask about that particular content or those skills.
Watson-Glaser Critical Thinking Appraisal-FS (WGCTA-FS)
The WGCTA-FS is a 40-item inventory created to replace Forms A and B of the original test, which participants reported was too long (70). This inventory assesses test takers' skills in:
(a) Inference: whether an individual discriminates among degrees of truth or falsity of inferences drawn from given data
(b) Recognition of assumptions: whether an individual recognizes whether assumptions are clearly stated
(c) Deduction: whether an individual decides if certain conclusions follow the information provided
(d) Interpretation: whether an individual considers the evidence provided and determines whether generalizations from the data are warranted
(e) Evaluation of arguments: whether an individual distinguishes strong and relevant arguments from weak and irrelevant arguments
Researchers investigated the reliability and validity of the WGCTA-FS for subjects in academic fields. Participants included 586 university students. Internal consistencies for the total WGCTA-FS among undergraduate and graduate students majoring in psychology, educational psychology, and special education ranged from .74 to .92. Correlations between course grades and total WGCTA-FS scores for all groups ranged from .24 to .62 and were significant at p < .05 or p < .01. In addition, internal consistency and test-retest reliability for the WGCTA-FS have been measured at .81. The WGCTA-FS was found to be a reliable and valid instrument for measuring critical thinking (71).
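For readers who wish to reproduce this kind of analysis, the sketch below correlates course grades with total test scores and reports the significance level. The numbers are invented for illustration, and SciPy is assumed to be available.

```python
from scipy.stats import pearsonr

# Hypothetical course grades and total WGCTA-FS scores for ten students.
grades = [88, 92, 75, 81, 95, 70, 84, 79, 90, 86]
test_totals = [31, 35, 24, 28, 37, 22, 30, 26, 34, 29]

r, p = pearsonr(grades, test_totals)
# Judge significance against the p < .05 or p < .01 thresholds used above.
print(f"r = {r:.2f}, p = {p:.4f}")
```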
Cornell Critical Thinking Test (CCTT)
There are two forms of the CCTT, X and Z. Form X is for students in grades 4-14. Form Z is for advanced and gifted high school students, undergraduate and graduate students, and adults. Reliability estimates for Form Z range from .49 to .87 across the 42 groups that have been tested. Measures of validity were computed in standard conditions, roughly defined as conditions that do not adversely affect test performance. Correlations between Form Z and other measures of critical thinking are about .50 (72). The CCTT is reportedly as predictive of graduate school grades as the Graduate Record Examination (GRE), a measure of aptitude, and the Miller Analogies Test, with correlations between .2 and .4 (73).
California Critical Thinking Disposition Inventory (CCTDI)
Facione and Facione have reported significant relationships between the CCTDI and the CCTST. When faculty focus on critical thinking in planning curriculum development, modest cross-sectional and longitudinal gains have been demonstrated in students' CTS (74). The CCTDI consists of seven subscales and an overall score. The recommended cut-off score for each scale is 40, the suggested target score is 50, and the maximum score is 60. Scores below 40 on a specific scale indicate weakness in that CT disposition, and scores above 50 on a scale indicate strength in that dispositional aspect. An overall score below 280 shows serious deficiency in disposition toward CT, while an overall score above 350 (while rare) shows across-the-board strength. The seven subscales are analyticity, self-confidence, inquisitiveness, maturity, open-mindedness, systematicity, and truth seeking (75).
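The cut-off, target, and maximum scores above lend themselves to a simple decision rule. The following is a minimal sketch that encodes the interpretation just described; the function names and sample scores are hypothetical, not part of the published instrument.

```python
def interpret_scale(score: int) -> str:
    """Classify a single CCTDI subscale score (cut-off 40, target 50, maximum 60)."""
    if score < 40:
        return "weak in this CT disposition"
    if score > 50:
        return "strong in this dispositional aspect"
    return "between cut-off and target; no flag"

def interpret_overall(total: int) -> str:
    """Classify an overall CCTDI score (sum of the seven subscales)."""
    if total < 280:   # below 7 scales x 40 (cut-off)
        return "serious deficiency in disposition toward CT"
    if total > 350:   # above 7 scales x 50 (target)
        return "across-the-board strength"
    return "no overall flag; inspect individual subscales"

# Hypothetical subscale scores for one test taker.
scores = {"truth seeking": 38, "analyticity": 52, "inquisitiveness": 47,
          "self-confidence": 44, "maturity": 51, "open-mindedness": 49,
          "systematicity": 42}
for scale, value in scores.items():
    print(f"{scale}: {value} -> {interpret_scale(value)}")
total = sum(scores.values())
print(f"overall: {total} -> {interpret_overall(total)}")
```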
In a study of instructional strategies and their influence on the development of critical thinking among undergraduate nursing students, Tiwari, Lai, and Yuen found that, compared with lecture students, PBL students showed significantly greater improvement in overall CCTDI (p = .0048) and in the Truth-seeking (p = .0008), Analyticity (p = .0368), and Critical Thinking Self-confidence (p = .0342) subscales from the first to the second time points; in overall CCTDI (p = .0083) and in the Truth-seeking (p = .0090) and Analyticity (p = .0354) subscales from the second to the third time points; and in the Truth-seeking (p = .0173) and Systematicity (p = .0440) subscale scores from the first to the fourth time points (76).
California Critical Thinking Skills Test (CCTST)
Studies have shown that the California Critical Thinking Skills Test captures gain scores in students' critical thinking over one quarter or one semester. Multiple health science programs have demonstrated significant gains in students' critical thinking using site-specific curricula. Studies conducted to control for re-test bias showed no testing effect from pre- to post-test means using two independent groups of CT students. Since behavioral science measures can be affected by social desirability bias (the participant's desire to answer in ways that would please the researcher), researchers are urged to have participants take the Marlowe-Crowne Social Desirability Scale at the same time when measuring pre- and post-test changes in critical thinking skills. The CCTST is a 34-item instrument. It has been correlated with the CCTDI in a sample of 1,557 nursing education students; results show r = .201, with the relationship between the CCTST and the CCTDI significant at p < .001. Significant relationships between the CCTST and other measures, including the GRE total, GRE-Analytic, GRE-Verbal, GRE-Quantitative, the WGCTA, and the SAT Math and Verbal, have also been reported. The two forms of the CCTST, A and B, are considered statistically equivalent. Depending on the testing context, KR-20 alphas range from .70 to .75. The newest version is CCTST Form 2000; depending on the testing context, its KR-20 alphas range from .78 to .84 (77).
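The KR-20 statistic cited here applies to dichotomously scored items (each answer right or wrong). The following is a minimal sketch of the KR-20 computation under that assumption; the response matrix is invented for illustration, and NumPy is assumed to be available.

```python
import numpy as np

def kr20(items: np.ndarray) -> float:
    """Kuder-Richardson Formula 20 for dichotomously scored (0/1) items.

    items: 2-D array with one row per test taker and one column per item.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    p = items.mean(axis=0)                      # proportion answering each item correctly
    q = 1 - p                                   # proportion answering incorrectly
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

# Hypothetical right/wrong responses: 6 test takers x 5 items.
data = np.array([
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 1, 1, 1],
    [1, 1, 1, 0, 1],
])
print(f"KR-20 = {kr20(data):.2f}")
```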
The Health Science Reasoning Test (HSRT)
Items within this inventory cover the domain of CT cognitive skills identified by a Delphi group of experts whose work resulted in the development of the CCTDI and CCTST. This test measures health science undergraduate and graduate students' CTS. Although test items are set in health sciences and clinical practice contexts, test takers are not required to have discipline-specific health sciences knowledge. For this reason, the test may have limited utility in dental education (78).
Preliminary estimates of internal consistency show that overall KR-20 coefficients range from .77 to .83 (79). The instrument has only moderate reliability on the analysis and inference subscales, although the factor loadings appear adequate. The low KR-20 coefficients may be a result of small sample size, variance in item responses, or both (see Table 8).
Table 8. Estimates of Internal Consistency and Factor Loading by Subscale for HSRT
Subscale | KR-20 | Factor loading |
Inductive | .76 | .332-.769 |
Deductive | .71 | .366-.579 |
Analysis | .54 | .369-.599 |
Inference | .52 | .300-.664 |
Evaluation | .77 | .359-.758 |
Professional Judgment Rating Form (PJRF)
The scale consists of two sets of descriptors. The first set relates primarily to the attitudinal (habits of mind) dimension of CT. The second set relates primarily to CTS.
A single rater should know the student well enough to respond to at least 17 of the 20 descriptors with confidence; if not, the validity of the ratings may be questionable. If a single rater is used and ratings over time show some consistency, comparisons between ratings may be used to assess changes. If more than one rater is used, then inter-rater reliability must be established among the raters to yield meaningful results. While the PJRF can be used to assess the effectiveness of training programs for individuals or groups, participants' actual skills are best measured by an objective tool such as the California Critical Thinking Skills Test.
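One common way to establish inter-rater reliability between two raters is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The following is a minimal sketch under that choice; the PJRF-style ratings are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability that both raters independently pick each category.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of ten students by two raters on a 1-4 scale.
rater_1 = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
rater_2 = [4, 3, 2, 2, 4, 1, 3, 3, 4, 3]
# Values near 1 indicate strong agreement beyond chance; near 0, chance-level agreement.
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")
```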
Teaching for Thinking Student Course Evaluation Form
Course evaluations typically ask for responses of "agree" or "disagree" to items focusing on teacher behavior; typically the questions do not solicit information about student learning. Because contemporary thinking about curriculum centers on student learning, this form was developed to address the differences in pedagogy and subject matter, learning outcomes, student demographics, and course level characteristic of education today. The form also grew out of a recognition of the limitations of the "one size fits all" approach to teaching evaluations. It offers information about how a particular course enhances student knowledge, sensitivities, and dispositions, and it gives students an opportunity to provide feedback that can be used to improve instruction.
Holistic Critical Thinking Scoring Rubric
This assessment tool uses a four-point classification schema that lists particular opposing reasoning skills for select criteria. One advantage of a rubric is that it offers clearly delineated components and scales for evaluating outcomes. This rubric explains how students' CTS will be evaluated, and it provides a consistent framework for the professor as evaluator. Users can add or delete any of the statements to reflect their institution's effort to measure CT. Like most rubrics, this form is likely to have high face validity, since the items tend to be relevant to or descriptive of the target concept. The rubric can be used to rate student work or to assess learning outcomes. Experienced evaluators should engage in a process leading to consensus regarding what kinds of things should be classified and in what ways (80). If the rubric is used improperly or by inexperienced evaluators, unreliable results may occur.
Peer Evaluation of Group Presentation Form
This form offers a common set of criteria to be used by peers and the instructor to evaluate student-led group presentations regarding concepts, analysis of arguments or positions, and conclusions (81). Users have an opportunity to rate the degree to which each component was demonstrated. Open-ended questions give users an opportunity to cite examples of how concepts, the analysis of arguments or positions, and conclusions were demonstrated.
Table 9. Proposed Universal Criteria for Evaluating Students' Critical Thinking Skills
Accuracy
Adequacy
Clarity
Completeness
Consistency
Depth
Fairness
Logic
Precision
Realism
Relevance
Significance
Specificity
Aside from the use of the above-mentioned assessment tools, Dexter et al. recommended that all schools develop universal criteria for evaluating students' development of critical thinking skills (82).
Their rationale for the proposed criteria is that if faculty give feedback using these criteria, graduates will internalize these skills and use them to monitor their own thinking and practice (see Table 9).