Fostering Program Improvements
Through a Focus on Educational Outcomes
Brooke Hallowell, Ph.D., Ohio University
Nan Lund, Ph.D., Buffalo State College
At the 1996 Council of Graduate Programs conference, we discussed educational outcomes in the context of a nationwide movement for accountability in higher education. The impetus for outcomes assessment has come most recently from accrediting agencies. All regional accrediting bodies, such as the North Central, Middle States, and Southern Associations of Colleges and Schools, and professional accrediting bodies, including ASHA, receive their authority through approval by the Council for Higher Education Accreditation (CHEA), which assumed this function from the Council on Recognition of Postsecondary Accreditation (CORPA) in 1996. The inclusion of outcomes assessment standards as part of accreditation by any of these bodies is mandated by CHEA and is thus a requirement for all regional as well as professional accreditation. This means that applications for accreditation must demonstrate that institutions or programs have plans for assessing educational outcomes and can show evidence that the results of these assessments have led to improvement of teaching and learning and, ultimately, better preparation for entering the professions. Accrediting bodies have thus revised, or are revising, standards for accreditation that move away from "input" models toward examination of "output" that students can demonstrate. Of particular interest to this group is the adoption of new standards (effective in 1999) by the Council on Academic Accreditation of the American Speech-Language-Hearing Association that address outcomes assessment practices.
While the push to engage in such assessment may have come from external forces, such as legislatures, campus presidents, or accrediting bodies, the impetus for attention to assessing what students are learning came originally from within the academy. The publication of Involvement in Learning (Study Group on the Conditions of Excellence in Higher Education, 1984), written by a group of distinguished academics, addressed the need for reform in higher education in order to assure and demonstrate that the college experience makes a positive difference. Several influential publications followed in the same vein, such as the Carnegie Foundation's examination of the undergraduate experience (Boyer, 1987) and An American Imperative: Higher Expectations for Higher Education (Wingspread Group on Higher Education, 1993). A common conclusion of all of these analyses is the need for better assessment and feedback to effect educational improvement. Combined with these internal calls for self-examination and improvement, external constituencies have increasingly demanded more accountability. As higher education has come under increasing attack from politicians and the press, there is a need to demonstrate that a college education is, indeed, a good investment of state and personal resources.
Where are we?
The focus of educational outcomes assessment in the early and mid-1990s has been primarily on undergraduate programs; this may be why programs in Communication Sciences and Disorders (CSD) are relatively late in becoming involved in campus programs for identifying and assessing outcomes. In 1993, a survey of all ASHA-accredited programs in speech-language pathology and/or audiology was conducted to determine the degree to which they were involved in outcomes assessment (Lund, 1993). There was a 52% return rate, or responses from 108 programs. At that time, 66% of the responding programs indicated that they were required to engage in outcomes assessment, with the requirement coming in most cases from either the regional accrediting body or the state. Of the 71 programs under such mandates, 33 (46%) had identified outcomes, while 53 (75%) were engaging in assessing their students. This would seem to indicate that many of us were putting the assessment cart before the outcomes horse. As Dr. Banta has pointed out, valid assessment begins with setting learning goals. Further, it was interesting to note that 58% of these programs indicated that their programs had been modified based on assessment results. In contrast, program modifications based on assessment results were reported by 78% of the 37 reporting programs not required to assess learning outcomes. This raises the question of how instrumental the assessment activities really were in decisions about program modifications.
This is not to imply that CSD programs are not, and have not been, engaged in assessing what our students learn. When programs were asked to indicate the nature of their assessment tools, the predominant measures were alumni surveys (89%), employer surveys (89%), the National Examination in Speech-Language Pathology and Audiology (NESPA) (88%), and student surveys (76%). These measures have been used by most of us for many years and are certainly useful in determining some aspects of our graduates' proficiency. As we look for other measures, much of what we have learned in addressing clinical outcomes issues in speech-language pathology and audiology can be applied when addressing assessment of outcomes in our training programs. Because of our experience with principles of functional assessment and documentation of efficacy, professionals in our field are well positioned to implement outcomes assessment and often are several steps ahead of the administrators requiring implementation of an assessment plan (Hallowell, 1996). It appears from your responses to our more recent inquiries about assessment activities that many more programs are now actively engaged in identifying desired outcomes and developing or implementing plans for systematically assessing them. We are widening the scope of instruments and experiences being used and, we hope, engaging in the "rich conversation about learning" envisioned by Ted Marchese, as quoted by Dr. Banta.
AGREEING ON TERMS
We agree with Dr. Banta that there is great variability in the terminology that we use to discuss educational outcomes, and that how we develop and use our assessments matters much more than our agreement on the definitions of each of the terms we might use to talk about assessment issues. Still, for the sake of establishing more common ground among program members as we continue these discussions, we have selected a few terms to highlight.
"Meaningful" outcomes assessment
One of the charges of the CGPCSD Working Group on Educational Outcomes, established last year, was to promote the development and dissemination of resource materials to assist member programs in developing their own "meaningful educational outcomes" assessment programs. Consequently, one point of ongoing discussion in our group is what we mean by that term "meaningful." What we’ve agreed upon is that, although what constitutes an "ideal" outcomes assessment program is largely dependent on the particular program and institution in which that program is to be implemented, there are at least some generalities we might make about what constitutes a meaningful program. For example, any outcomes assessment program perceived by faculty and administrators as an imposition of bureaucratic control over what they do, remote from any practical implications (McLaughlin & Muffo, 1993), would not be considered "meaningful." Meaningful programs, rather, are designed to enhance our educational missions in specific, practical, measurable ways, with the goals of improving the effectiveness of training and education in our disciplines. They also involve all of a program’s faculty and students, not just administrators or designated report writers. Furthermore, the results of meaningful assessment programs are actually used to foster real modifications in a training program.
Formative and summative outcomes
It can be helpful to distinguish between formative and summative outcomes indices. Formative outcomes measures are those that can be used to shape the experiences and learning opportunities of the very students who are being assessed. Some examples are surveys of faculty regarding current undergraduate or graduate students’ research involvement, clinical practicum performance evaluations, computer proficiency evaluations, and the kinds of classroom assessment techniques mentioned by Dr. Banta. We can take the results of such assessments and use them to characterize program or instructor strengths and weaknesses, but we can also use them to foster changes in the experiences of those very students who have been assessed.
Summative outcomes measures are those that are used to characterize programs (or college divisions, or even whole institutions) by using assessments intended to capture information about the final products of our programs. Examples are undergraduate and graduate student exit surveys, surveys of our graduates inquiring about salaries, employment, and job satisfaction, and surveys of employers of our graduates. The reason we find the distinction between these two types of assessment to be important is that, although formative assessments tend to be the ones that most interest our faculty and students and the ones that drive our daily experiences in our programs, the outcomes indices on which most of our administrators focus to monitor institutional quality are those involving summative outcomes. It is important that each of our programs strive for its own appropriate mix of both formative and summative assessments.
Cognitive/affective/performative outcome distinctions
In order to stimulate our clear articulation of the specific outcomes we are targeting within each of our programs, it is helpful to have some way of characterizing different types of outcomes. Although the exact terms vary from context to context, targeted educational outcomes are commonly characterized as belonging to one of three domains: cognitive, performative, and affective. Cognitive outcomes are those relating to intellectual mastery, or mastery of knowledge in specific topic areas. Most of our course-specific objectives relating to a specific knowledge base fall into this category. Examples are the ability to describe the phonatory process, list and define levels of language analysis, or describe forms of energy transduction in the auditory mechanism. Performative outcomes are those relating to a student's or graduate's accomplishment of a behavioral task. Examples are the administration and scoring of a particular articulation test, counseling of parents about a child's new diagnosis of hearing loss, or successful use of a search engine on the internet. Affective outcomes relate to personal qualities and values that students ideally gain from their experiences during a particular educational and training program. Examples are appreciation of various racial, ethnic, or linguistic backgrounds of individuals, awareness of biasing factors in the interpretation of assessment results, and sensitivity to ethical issues and potential conflicts of interest in clinical practice.
The distinction among these three domains of targeted educational outcomes is useful because it directs our attention to areas of learning that we often say are important but do not assess very well. Generally, we do a much better job of assessing our targeted outcomes in the cognitive domain, for example, with in-class tests and papers and with the NESPA, than we do of assessing the affective areas of multicultural sensitivity, appreciation for collaborative teamwork, and ethics. Often, our assessment of performative outcomes is focused primarily on students' clinical practicum experiences, even though our academic programs often have articulated learning goals in the performative domain that do not apply only to clinical practice.
We have found it helpful to use the cognitive/affective/performative distinction to help some of our member programs' faculty think about how the current focus on outcomes differs greatly from the "competency" models of education that usually involve the reduction of complex functional goals into lists of specific performance tasks assessed via inventories or checklists (Kameoka & Lister, 1991). The limitations of such competency-based models were acknowledged officially by the Council of Graduate Programs in 1995, when a resolution was passed not to use the term "competency-based instruction" in accreditation guidelines. Historically, our fields have focused intensely on clinical "competencies," especially at the master's level. The performance-based competency checklists that so many of our programs have generated are still considered useful by many, but it is important that they be balanced with a host of other types of assessment instruments.
Addressing faculty motivation
A critical step in developing a meaningful educational outcomes program is to address head-on some pervasive issues of faculty motivation. As Dr. Banta so aptly summarized, the primary reasons that assessment programs in most disciplines at most institutions are thus far "disappointing" involve not only a lack of faculty involvement but also resistance among faculty and administrators. That resistance is probably due in large part to the perception that outcomes assessment involves the use of educational and psychometric jargon to describe program indices that are not relevant to the everyday activities of faculty members and students. By including our faculty, and perhaps some student representatives, in discussions of what would characterize a more meaningful assessment scheme to match the missions and needs of our individual programs, and by agreeing to develop our outcomes assessment practices from the bottom up, rather than in response to top-down demands from administrators and accrediting agencies, we are apt to win the support of some of our current skeptics on our faculties.
Additional factors that might give faculty an incentive to get involved in enriching assessment practices are discussed in the sections that follow.
Clarifying our targeted outcomes
Since we value what we assess, we should assess what we value. We all know the value that we and our students assign to those things that are visibly assessed: course grades, GRE scores, and NESPA scores, for example. We also know that these are not the whole picture when it comes to professional competence. When we begin to identify desired outcomes, our natural impulse is to focus on those things that are easiest to assess, even though these may not be the outcomes we consider most important. We risk acting like the drunk who lost his watch in the alley but looks for it under the street light because the light is better there! Do we assess what is easiest to assess, and then value it because it is assessed?
The process of clarifying targeted outcomes is probably more important than the specific list of outcomes that is generated. It is an opportunity for the faculty to discuss their individual visions for their programs and identify where there is consensus or discrepancy concerning what is most important for students to know, accomplish, and experience. Involving current students, alumni, and employers at this stage ensures that the outcomes are connected to professional practice trends and needs and reflect the scientific and research basis of the professions.
In reviewing the literature, we have come across several suggestions for getting started in the process of identifying desired outcomes; a few come from the California Academic Press web site and are authored by Noreen Facione, University of California-San Francisco, and Peter Facione, Santa Clara University.
It may be helpful to use a prepared form to get participants started in identifying outcomes. A copy of an instrument that may be used to get faculty members to articulate the outcomes that they hold as important, at each level and in each major associated with your training programs, is presented in Appendix A. This form is now being used at Ohio University; you are welcome to modify it as you wish. There are three sections on the instrument. Section I pertains to cognitive outcomes, II to performative outcomes, and III to affective outcomes. Each section (I, II, and III) is divided into two parts, labeled A and B. In the parts labeled A within each section, faculty members are asked to add to the list of "general" outcomes, or ideal outcomes in each category that are not specific to a course or set of courses that they teach. In the parts labeled B within each section, they are asked to provide a list of course-specific outcomes that they think are particularly important in each of the courses and content areas in which they teach. It is important to ask faculty members to include targeted outcomes for all of the courses they teach, not just the ones they are currently teaching. Once they have added to the lists of targeted outcomes in each section, they may review the entire list and place checks within each column to indicate the training program(s) in which they think each targeted outcome is important. Once the forms are returned by all faculty members, a designated student or faculty member can compile a master list of targeted outcomes that can then be used for the next phase of developing a richer outcomes focus: assessing whether we are assessing what we say is important.
Taking an inventory of what and how we are assessing now
A master list of detailed targeted outcomes is useful for helping to complete a review of what and how we are assessing now. A similar chart format can be used, this time presenting the master list our faculty members have generated, and then asking them to check those targeted outcomes they actually assess and to state the means by which they carry out such assessments. This provides a simple mechanism for the study of where our current formative assessment practices fall short within a given program.
Adapting the curriculum and teaching methods to match targeted outcomes
Often, when we respond to feedback from employers in our field and from our former students regarding what should be changed in our training programs, we respond by focusing on curricular revisions. Although curricular modifications are frequently in order, the articulation of program goals from an outcomes perspective redirects our focus onto modifications of the ways we teach, the types of experiences our students get within our curricula, and the techniques and instruments we use to assess the efficacy of those experiences in functional terms. Brilliant revisions in curricula do not necessarily lead to great progress in terms of student outcomes if we don’t do anything to change the nature, rather than just the content, of their experiences with us. Defining in empirical terms how it is that our students are to be changed by the process of their involvement in a course (or practicum experience, seminar, externship, or even an entire training program) takes our focus away from curricula and gives us more insight into actual learning and teaching processes.
The literature on the skills and qualities employers desire in employees, in all fields, as well as our own experience, tells us that the most highly sought qualities are not cognitive competencies but rather fall into two general areas: critical thinking and problem solving, and communication. These are also often the qualities that we value most in our students, and most of us would agree that we want our graduates to be proficient in these areas. If we identify these as desired outcomes, we need to consider what opportunities students have in our programs to develop and demonstrate these competencies.
It is clear that students can best develop their abilities in problem solving, critical thinking, and communication when we use teaching methods that involve experiential or active learning on the part of students, with faculty acting as facilitators rather than lecturers (e.g., clinical practica, collaborative learning, team-based approaches, problem-based learning, simulation exercises, and case study methods). There is a rich interdisciplinary literature pertaining to the effective measurement of higher-level outcomes, and we have many vehicles within our profession that have great potential for allowing us to share this information. We will mention a few such approaches briefly, and have included references addressing different approaches in the bibliography.
Writing to learn
College writing used to be seen as a skill that is learned - or at least taught - in English composition classes, with attention focused on whether students know how to construct and punctuate sentences. Writing across the curriculum has emerged on most campuses in the last decade. This pedagogical movement recasts writing from a requirement into a tool for learning, based on the premise that students learn by being actively involved with the subject matter. As students write about what they are learning in various classes, they make it their own and, simultaneously, they learn to be better writers. This approach recognizes that writing is a learning activity as well as a communication activity.
Incorporating writing into all courses for our students gives them practice in thinking, provides them an apprenticeship in learning the discourse of the profession, and helps them learn the concepts behind the words. Students learn to think and write well by writing in localized contexts and by receiving responses from writers within those contexts. Buffalo State College has developed a Writing Across the Speech-Language Pathology Curriculum program that ensures our undergraduates have experience with writing various kinds of professional texts. All classes have writing requirements, and all writing that is evaluated is developed with feedback from the instructor. That is, comments on drafts are provided to help students learn how to improve their writing rather than merely to judge whether they know how to write when they come to us. A brief description of the requirement and a sample of a rubric that is used to give feedback are included in Appendix B.
Cooperative learning is a specific, structured, team-based pedagogical approach that may be adapted very well to complement targeted outcomes involving professional teamwork skills as well as knowledge of service delivery concepts (Hallowell, 1997). Much of the pedagogical research regarding the development, efficacy, and validity of this approach is directly applicable to training in the health care professions. In the cooperative learning process, groups of three to five intentionally selected students work together on a well-defined learning task for the primary purpose of mastery of course content. Two criteria of a true cooperative experience are: (1) positive interdependence among student learners to foster cooperation on team learning tasks, and (2) individual responsibility and accountability on the part of each student (Cottell & Millis, 1994; Jaques, 1991; Tiberius, 1990). The experiences and means of evaluation must be carefully designed to meet those criteria.
Cases have long been used in professional preparation of physicians, nurses, business administrators, teachers, and many other professionals. A teaching case is a description of a situation, an episode of practice, a selection of reality, a slice of life, a story designed and presented as study material, an exercise, a puzzle, or a problem (Barnes, Christensen and Hansen, 1994). A case can be analyzed and discussed in a single class period, over several weeks, or be the focus of an entire course. The essence of the case method is having students participate actively in the investigation and evaluation of issues involved in the case, effectively express their findings and opinions, and participate with others in reaching decisions. Discussion is the predominant form of classroom interaction. The role of the instructor shifts from imparting facts and solutions to creating dilemmas and encouraging creative and reasoned solutions. Assessing student learning becomes part of the design of cases to assure the method is effective in meeting program goals.
Service-learning is a specific pedagogical method that integrates community service into academic course work (Falbo, 1995). It is not merely volunteering service to complement the nature of a course. In true service-learning courses, several specific criteria are met, including that: (1) the service experience enhances students' learning of the subject matter of the course, and (2) students' knowledge and continued learning in the course discipline are directly related to the service being performed (Newell & Davis, 1988; Watters & Ford, 1995). Methods of reflection on the service experience, and assessment of that reflection, are vital to the service-learning approach.
Changing Roles for Instructors
Just as students are challenged by active participation in their learning, as required by analyzing cases, cooperative learning experiences, extensive writing, and service-learning, so are teachers; it is no mystery why most college professors lecture. Using discussion requires us to give up the comfortable predictability and role of authority in the classroom. We have to be willing to listen to students go through the often messy process of learning, and resist the temptation to lead students to the "right" answer instead of having them explore various options. Most campuses have teaching support centers, or groups of faculty who are engaged in these approaches; it helps to have colleagues with whom to discuss assignments that engage students, and effective ways to use them to assess students' learning. We hope that we can generate a forum for sharing ideas and experiences within speech-language pathology and audiology. If you have located or written cases or projects involving problems or situations that lend themselves to cooperative learning, or have incorporated service-learning or other forms of these approaches, we hope you will share them with us, along with your experiences using them for fostering learning and assessing identified outcomes.
A CALL FOR YOUR PARTICIPATION
The need for additional assessment instruments
The urgency of developing vehicles for sharing information about educational outcomes within our field is becoming increasingly apparent to those of us who are engaged in the enormous task of complying with new regulations from regional accrediting bodies that require the immediate design and implementation of educational outcomes plans. Some of us must also be prepared next year to meet the new ASHA standards by demonstrating our outcomes assessment activity. Our Working Group is attempting to find efficient ways to share information among member programs to facilitate progress in establishing target outcomes, identifying assessment opportunities and instruments, and instituting curricular and pedagogical innovations to address the desired outcomes.
The outcomes in the cognitive realm generally are those for which we currently have the clearest definitions and means of assessment. While such measures are often used summatively at the completion of the degree program, they may also be formative when used at entry and/or at intervals during the program. Some examples of these measures are:
National Examination in Speech-Language Pathology and Audiology
Locally developed comprehensive exams
Items embedded in course exams
Pre-post tests to assess "value added"
Rising junior exams
Student self evaluation of learning
Performative outcomes may also be assessed with summative measures, but they are often assessed throughout degree programs. Some performance measures include:
Evaluation of graduate students’ on-campus practicum performance
Evaluation of graduate students’ off-campus practicum performance
Evaluation of student preparation for clinical externship
Evaluation of graduates’ clinical fellowship experience
Accomplishment of clinical procedures expected for all graduates
Student evaluation of supervision and clinical site for externship experiences
Survey of faculty regarding student research competence
Evaluation of writing samples
Evaluation of videotapes
Evaluation of presentations
Evaluation of case analysis
Evaluation of collaborative learning and team-based approaches
Evaluation of problem-based learning
Evaluation of simulation exercises
Peer evaluations; e.g., of leadership, group participation
Assessments of affective outcomes are generally embedded within other activities. These may include:
Review of student journals
Supervisors’ evaluation of clinical interactions with diverse populations
Writing culturally-sensitive reports
Service-learning experience evaluations
Volunteer experience evaluations
Case analysis evaluations
Surveys of attitudes or satisfaction with program
Simulation exercises evaluations
Interviews with students
Peers’, supervisors’, employers’ evaluations
These lists are not exhaustive. If you have examples of these or other types of assessment instruments, we would be interested in learning about them so that we may provide other programs with models for their own instruments.
The Queen seemed to guess her thoughts, for she cried, "Faster! Don’t try to talk!"
Not that Alice had any idea of doing that. She felt as if she would never be able to talk again, she was getting so much out of breath; and still the Queen cried "Faster! Faster!" and dragged her along. "Are we nearly there?" Alice managed to pant out at last.
"Now! Now!" cried the Queen. "Faster! Faster!". And they went so fast that at last they seemed to skim through the air, hardly touching the ground with their feet, until suddenly just as Alice was getting quite exhausted, they stopped, and she found herself sitting on the ground breathless and giddy.
Alice looked around her in great surprise. "Why, I do believe we’ve been under this tree the whole time! Everything’s just as it was!"
Our students would empathize with Alice; in our programs we urge them to do more, do it sooner, and do it faster than they would ever have thought possible. The goal of outcomes assessment is to make sure that when they complete the exhausting journey, they are not still where they started, with everything as it was.
REFERENCES
American Association for Higher Education (1992). Principles of Good Practice for Assessing Student Learning. Washington, D.C.: AAHE.
Barnes, L.B., Christensen, C.R., and Hansen, A.J. (1994). Teaching and the Case Method. 3rd Ed. Boston: Harvard Business School Press.
Boyer, E.L. (1987). College: The Undergraduate Experience. Carnegie Foundation for the Advancement of Teaching. New York: Harper Collins.
Cottell, P.G. & Millis, B.J. (1994). Complex cooperative learning structures for college and university courses. In To improve the Academy: Resources for students, faculty, and institutional development. Stillwater, OK: New Forums Press.
Falbo, M. C. (1995). Serving to learn: A faculty guide to service-learning. Cleveland, OH: Center for Community Service, John Carroll University.
Hallowell, B. (1996). Innovative models of curriculum/instruction: Measuring educational outcomes. In Council of Graduate Programs in Communication Sciences and Disorders, Proceedings of the seventeenth annual conference on graduate education, 37-44.
Hallowell, B. (1997). Innovative teaching methods to enhance training in managed care issues. Journal of Allied Health, 26, 1.
Jaques, D. (1991). Learning in groups (2nd ed.). Guildford, Surrey, England: Society for Research into Higher Education.
Kameoka, V. & Lister, L. (1991). Evaluation of student learning outcomes in MSW programs. Journal of Social Work Education, 27 (3), 251-257.
Kubiszyn, T. & Borich, G. (1993). Educational testing and measurement. (4th ed.) New York: Harper Collins College Publishers.
Lund, N.J. (1993). Assessing student outcomes in speech-language pathology and audiology. Presented at the ASHA Convention, Anaheim, CA.
Mayhew, P. H. & Simmons, H. L. (1990). Assessment in the Middle States region. NCA Quarterly, 65 (2), 175-179.
McLaughlin, G., Muffo, J. & Calhoun, H. D. (1993). Assessment: A collaborative model. (EDRS Publication No. ED 360 927). Washington, DC: US Department of Education.
Newell, W. H. & Davis, A. J. (1988). Education for citizenship: The role of progressive education and interdisciplinary studies. Innovative Higher Education, 13 (1), 27-37.
Tiberius, R. G. (1990). Small group teaching: A trouble-shooting guide.
Watters, A. & Ford, M. (1995). A guide for change: Resources for implementing community service writing. New York: McGraw Hill.
Wingspread Group on Higher Education, (1993). An American Imperative: Higher Expectations for Higher Education. Racine, Wis.: Johnson Foundation.
SAMPLE INTERNET RESOURCES
www.aahe.org; American Association for Higher Education
www.colorado.edu/outcomes; University of Colorado at Boulder
www.enmu.edu/~testaa; Eastern New Mexico University
www.vag.edu; Virginia Assessment Group
HEPROC (Higher Education Processes Conference Hall); e-mail discussion lists
ASSESS-L (listserv at the University of Kentucky); assessment discussion forum