You might be beginning to construct your fall syllabi, or perhaps you already have. You likely will use the terms “assignment” and “assessment” a fair amount. Am I right? But “assessment” can mean much more than assignments. Let’s talk about it.
Once the academic year begins, “assessment” is a word that often dominates department meetings. One tension that can arise is that the word means such different things to different people, and to people at different levels of the university. Let’s think through the complexities of the word in academe and why it is hard to reach a consensus about this concept.
At the most basic level, educational assessment asks, “How can you prove that an individual or a group of people is learning?” So assessment can operate on several units of analysis: the individual student, groups of students (perhaps a class), how a set of classes fits together, and whether the entire program successfully teaches its learners the content it is designed to convey.
Assessments need to be designed differently at each level. Faculty are most comfortable with the individual level because we do this all the time when we create assignments for individual students to show us what they have learned. Often we create rubrics that assign points for showing comprehension of the critical concepts, theories, vocabulary, or formulas that are crucial for students to learn. We write test questions to do the same. Or we design application-type assessments that ask students to use concepts and theories in real-world situations. These assessments often ask students to exercise their critical thinking, writing, and language skills via specially designed writing tasks.
Fewer faculty consistently use this individual-level data to look for group-level patterns. This could mean doing item analysis of test questions to see whether any question was missed by so many students that its wording, rather than a lack of student learning, is more likely to blame. Or it might mean analyzing how students did on all the questions tied to a single reading, to give the faculty member feedback about the readings being used. Another form of item analysis is to incorporate questions about key concepts into every test and see whether students’ success rate goes up as they become more comfortable with the conceptual content. If it doesn’t, then again, that may say more about how the material was taught than about the learning of the students.
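For faculty who want to try this, the first kind of item analysis described above is easy to automate. Here is a minimal sketch in Python: given each student's right/wrong results per question, it flags questions missed by enough students that the wording, rather than student learning, may be the problem. The 50% miss-rate threshold is an illustrative assumption, not an established standard; choose a cutoff that fits your own tests.

```python
# Minimal item-analysis sketch: flag test questions with unusually
# high miss rates. Responses are coded 1 (correct) / 0 (missed).

def item_miss_rates(responses):
    """responses: list of per-student lists of 1/0 scores, one entry
    per question, all the same length. Returns the fraction of
    students who missed each question."""
    n_students = len(responses)
    n_items = len(responses[0])
    rates = []
    for q in range(n_items):
        missed = sum(1 - student[q] for student in responses)
        rates.append(missed / n_students)
    return rates

def flag_items(responses, threshold=0.5):
    """Return the indices of questions whose miss rate exceeds the
    threshold -- candidates for a wording review."""
    return [q for q, rate in enumerate(item_miss_rates(responses))
            if rate > threshold]

# Example: 4 students, 3 questions; question index 2 was missed by
# 3 of the 4 students, so it gets flagged.
results = [
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 0],
    [1, 1, 1],
]
print(flag_items(results))  # -> [2]
```

From there, you could cross-tabulate flagged questions against the readings or concepts they cover, which is exactly the second kind of analysis mentioned above.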
Program assessment examines not only the amount of content learned by students but how the courses fit together (or don’t) to meet a (hopefully) cohesive set of learning goals and objectives. Gathering data from students is often done in senior capstone courses (exit questionnaires), individual student advising sessions, and at prescribed intervals post-graduation. The latter asks graduates to look back at the program and give feedback about the cohesiveness, intentionality, usefulness in the labor market, and trouble spots in the curriculum. Students and recent graduates often perceive a curriculum differently than the faculty who designed it and teach in it.
Here’s one example from my former program. Sociological theory was a 300-level course (so typically a junior-level course), but more than 90% of students took it in their last two semesters (usually the last one) before graduation. When they completed exit surveys and post-graduation assessments, they often lamented taking theory so late, saying that if they had taken it earlier, many other classes might have been easier. In our senior capstone, we decided not only to ask students about courses, registration bottlenecks, etc., but also to ask them to mindmap how they had actually taken their courses AND to mindmap their ideal semester-by-semester schedule through the degree program. Then they explained why they constructed the latter. Consistently they placed theory earlier than when they had actually taken it. So we shifted the course earlier and made it required in the junior year instead of merely suggested. Time to graduation decreased, and the percentage of students who completed the degree increased.
The “trouble” with doing assessments is two-fold. First, it is difficult to write assessment questions that give meaningful feedback. Many assessments use Likert-scale responses, which do not offer insights into the complexities of learning or program effectiveness. But if open-ended questions are used, the time needed to code and analyze the data can be significant, and the answers often provide opportunities for venting more than solid reflection. Second, unfortunately, much assessment is done to persuade someone else, usually an outsider, that things are going fine, rather than to dig deep and learn what truly is going on. “Happy” reports are written; maybe they are read; definitely they are filed away, only for the process to be repeated over and over again.
So if you are tasked collectively with constructing an assessment instrument, spend the time to talk through what each person means by the word; ask why people feel an assessment is needed at this time; and ask whether your audience actually wants authentic data. Is your audience prepared for reality-based conclusions, or will the report be used in more of a PR manner?
Because given all that is happening in higher education now (COVID teaching stresses for a third year, many institutions unsure about requiring masks given the strength of the newest variant, and deep budget cuts), maybe writing a quick and dirty “happy talk” report is the most strategic thing to do.
But if you honestly want to know what kind of learning is happening in and outside of your classroom, be intentional with the kinds of assessments you create. Give students a mid-term evaluation where they can give you meaningful feedback and accept it as their reality. Make changes that you feel are educationally appropriate and see how the rest of the term goes.
Please visit the Pedagogical Thoughts website to contact me about helping you to strengthen your writing, be it as a faculty member or a dissertation writer.