Cynthia L. Bartlett, Ph.D.
To follow the conceptual framework and definitions that Brooke has provided, I would like to discuss some "how to" examples in speech-language pathology; specifically, I have examples of formative outcomes in two domains: cognitive and performative. I will address the cognitive domain first and then treat the performative outcomes in somewhat more detail.
Formative outcomes in the cognitive domain relate to what we do in the classroom and to the objectives that we establish for our courses. Although we may not have been brought up using this terminology, we routinely assess students' cognitive outcomes in a summative way through course exams and through written comprehensive exams, capstone projects, or the PRAXIS exam at the end of the master's degree program. While these are valuable in indicating where students have ended up, these summative measures provide no real information about how they got there or whether they could have gotten there in a better way. That is where formative measures are useful. Used over the course of a term, formative assessments in this area are valuable for identifying the degree and nature of what could be called students' learning in progress.
What follows is a quick overview of some formative assessments in the cognitive domain. These procedures are often called Classroom Assessment Techniques, or CATs. In their practical guide to CATs, Angelo and Cross (1993) describe 50 such techniques that range from ways to assess students' prior knowledge, recall, and understanding; to their skill in analysis and critical thinking; to their synthesis and creative thinking; to their ability to solve problems in the area covered by the class. We can also assess their awareness of their own attitudes and values, their reactions to the instructor and his or her teaching, and the like. Depending upon the nature of the course and the aspect of the course content about which we want feedback, we could use one or more of the techniques in these areas to ascertain whether we have succeeded in leading our students in the direction they need to go in order to end up where they need to be. Thus, many of the formative outcomes in the cognitive domain relate to course content, values, learning styles, and reactions to the instruction and the material, and they can be evaluated through tools such as the classroom assessment techniques.
Now I'd like to go into a little more detail with respect to the domain of performative outcomes.
Rationale: It seems unlikely that anyone would disagree that helping our students acquire clinical skills is important, given the central role that a certain level of mastery of these abilities plays in the overall preparation of our master's program students, and given that these skills are essential to our graduates' eventual entry into, and success in, the professional workplace. That is reason enough to engage in assessing students' clinical skills; we all do that.
However, when we consider what our students have to gain command of, both academically and clinically, in a variety of areas and contexts in perhaps only two years of graduate study, it is clear that the learning curve is of necessity steep from the beginning and must remain so throughout their time in our programs. This suggests a need to examine very carefully how we go about teaching the myriad clinical skills in which our students need to become proficient. One way to accomplish this is to use a formative assessment tool that is developmental and multidimensional and that is used throughout their clinical education. I would like to describe one example of such a tool. (See Appendix A.) It is one that our clinical faculty has developed over a number of years by adapting several other tools used in a variety of college and university programs.
Example: Before going into the details of how our clinical faculty uses this tool, I should explain why I have described it as developmental, multidimensional, and meaningful.
The tool is developmental because clinical supervisors evaluate students' clinical skills in relation to the extent of the clinical experience that they bring to each practicum setting. Experience here is defined in terms of the number of clinical clock hours they have accrued at the outset of each placement. (See the elaboration on this below.)
It is a multidimensional tool because it allows evaluation in more than one important domain. Specifically, the two domains evaluated are the interpersonal and professional/technical domains. Further, associated with each of these domains are one or more behavioral objectives with explicit operational definitions.
The tool is formative because (1) it is used at at least two points during a student's semester in any given placement site; (2) it may be used across the variety of placements in which each student participates; and (3) the feedback provided in supervisor-student meetings at which the tool is reviewed is used to help shape students' clinical interactions. Such a review accomplishes this shaping by reminding students, in specific operational terms, of what the important aspects of clinical interactions are and of how they are performing on each of these aspects.
The tool is meaningful in the ways that Hallowell and Lund (1998) described. Namely, (1) the measure relates to specific aspects of our master's program mission; (2) it is meant to improve the effectiveness of students' clinical training; (3) it involves clinical faculty and students; and (4) results of the tool are used in considering modifications in the program.
Let me show you how it works. In using this clinical evaluation tool, the supervisor first identifies a student clinician's level. The criteria for these levels are as follows:

Level 1 student clinicians bring fewer than 50 clinical clock hours to the placement;
Level 2 students have accrued from 50 to 149 hours;
Level 3 students have from 150 to 250 hours;
Level 4 students have more than 250 clock hours at the beginning of a placement.
In addition, all new students start at Level 1, even if they have brought 100 clock hours to graduate school from their undergraduate major. Also, students drop back one level when they are assigned a client or a placement that involves addressing a disorder they have not yet encountered clinically.
For example, if a student at Level 3 has had experience with childhood language and phonological disorders and is about to enter a placement with a population of adults with voice disorders, he or she enters that experience at Level 2 and evaluation of the student's performance is based on the expectations of a Level 2 clinician.
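The level-assignment rules just described can be sketched as a small function. This is only an illustration of the logic; the function name, parameters, and structure are my own and are not part of the actual evaluation form.

```python
def clinician_level(clock_hours, is_new_student=False, new_disorder_type=False):
    """Assign a student clinician's level from accrued clinical clock hours,
    following the placement rules described above (illustrative sketch)."""
    if is_new_student:
        # All entering graduate students start at Level 1,
        # regardless of undergraduate clock hours.
        level = 1
    elif clock_hours < 50:
        level = 1
    elif clock_hours < 150:
        level = 2
    elif clock_hours <= 250:
        level = 3
    else:
        level = 4
    # Students drop back one level for a placement involving a
    # disorder they have not yet encountered clinically.
    if new_disorder_type and level > 1:
        level -= 1
    return level
```

For instance, the Level 3 clinician in the example above, entering a voice-disorders placement for the first time, would be treated as `clinician_level(200, new_disorder_type=True)`, which returns 2.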
Supervisors then assess students' clinical performance in two domains: the interpersonal and the professional/technical. Each of these domains has a separate rating scale.
The Interpersonal Scale comprises one objective and 19 competencies that together serve as an operational definition of the objective.
Objective I: Develops practices that support professional excellence. Examples of the competencies that define this objective appear on the form in Appendix A.
The Professional/Technical Scale comprises five additional objectives and a total of 45 competencies that also operationalize these objectives.
Objective II: Demonstrates careful thought and planning. Its six competencies appear on the form in Appendix A.
Objective IV: Conducts systematic diagnostic sessions. It has three subsets with a total of 17 competencies that relate to the interview, testing, and presentation of findings.
Objective V: Demonstrates clinical writing skills. The six competencies that define Objective V appear on the form in Appendix A.
Assessments take place at two points, at midterm and at the end of the placement; both are noted on the form. Students' midterm grades are based on their level of performance at that time. Final grades are based on students' performing each competency at the expected ability level at least 75% of the time over approximately the last third of the placement.
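The final-grade criterion just stated can be made concrete with a short sketch. This is a hypothetical illustration of the decision rule, not part of the actual form: it assumes each competency yields a chronological record of observed opportunities, each marked as demonstrated or not.

```python
def meets_final_criterion(observations, window_fraction=1/3, threshold=0.75):
    """Return True if a competency is demonstrated at least 75% of the
    time over roughly the last third of the placement.

    observations: chronological list of booleans, one per observed
    opportunity (True = competency demonstrated at the expected level).
    """
    # Size of the final window: about the last third of all observations.
    n_last = max(1, round(len(observations) * window_fraction))
    window = observations[-n_last:]
    return sum(window) / len(window) >= threshold
```

For example, a student who struggled early but demonstrated the competency consistently in the final weeks would meet the criterion, while one who faded late would not.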
I would like to close with a couple of additional comments. First, as you might imagine, using this form does take time. Our clinical faculty uses it for students' internal placements; supervisors in external placement sites may choose among this form, a shorter form that includes many of the same concepts (the Graduate Student Trainee Rating Scale, included in Appendix B), or a form of their own design.
Further, part of the multidimensionality of our program's assessments of student clinicians' emerging skills (and unrelated to the assessment tool just described) involves students' assessments of their own clinical skills in a placement (Clinician's Self-Evaluation form, included as Appendix C). This assessment is then compared to that of the supervising faculty member as a way to foster students' self-evaluation abilities. Students also evaluate their external placement experiences and the supervision they received (Evaluation of Clinical Supervisor form, included as Appendix D).
Our director of clinical education conveys this information back to supervisors in these sites as a way to address ongoing improvement of the clinical education of graduate students. In summary, then, both the forms for evaluating students' clinical skills as well as the overall process of evaluating students' clinical education are multidimensional.
I would like to acknowledge the considerable talent and work that Emerson College's clinical faculty have brought to the development and ongoing modification of the assessment procedures and tools used in developing the clinical skills of our graduate students.
The clinical faculty in Communication Sciences and Disorders at Emerson College continues to develop this tool and currently is gathering data on the reliability of the scoring system. This particular aspect of our tool is taken directly from a tool that has been used at the University of Wisconsin.
References Cited and Additional Resources
Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco: Jossey-Bass.
Hallowell, B. (1996). Measuring educational outcomes. In C. M. Scott, R. Dalston, E. McNiece, D. Nash, & D. Sorenson (Eds.), Proceedings: Seventeenth annual conference on graduate education (pp. 37-44). Minneapolis, MN: Council of Graduate Programs in Communication Sciences and Disorders.
Hallowell, B., & Lund, N. (1998). Fostering program improvements through a focus on educational outcomes. In P. Murphy, R. McGuire, & L. Bliss (Eds.), Proceedings: Annual conference on graduate education (pp. 32-61). Minneapolis, MN: Council of Academic Programs in Communication Sciences and Disorders.
Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.
Oberlin College (1995). A (very) brief profile of the class of 1995.
University of Toledo (1997). Learning goals for undergraduates and assessment of undergraduate outcomes performance.
Truman State University (1998). The assessment almanac 1998. http://www2.truman.edu/assessment98