A Philosopher's Blog

Adventures in Assessment

Posted in Universities & Colleges by Michael LaBossiere on August 22, 2017

I’ve fallen behind on my usual schedule of posting and replying to comments. The reason is, of course, my adventures in the realm of assessment. This began in 2004 when I was assigned to eternal membership on the General Education Assessment Committee. I am now a co-chair of the committee. I was also assigned to do the unit assessment for philosophy and religion. The basic idea of assessment is to assess using various direct and indirect measures. As might be imagined, this has little to nothing to do with philosophy, although I do try to sneak in the occasional philosophical bits. These are, as you might guess, typically edited out when the documents are reviewed.

My university is in the process of re-accreditation, something all schools do on a regular basis. My task is to complete a major document for a specific standard–the document is currently at 11,184 words.

I have some upcoming essays that I hope to complete tomorrow, and perhaps the assessment grind will permit me to get back to my usual writing and reply cycle. And, you know, teaching and stuff.

But, here is a look at what sort of stuff I write for assessment with some philosophy. Philosophy that will be excised in the final version, of course.

Overview of Target Levels and Measure of Success

Establishing target levels and measuring competence requires addressing two basic concerns. The first is determining what counts as competence in each assessed area. The second is setting a percentage goal for student competence.

The second is easy to address. In the United States educational system (broadly construed), 70% has been established as the minimal level of adequacy. As such, adopting the broad standard that 70% of the students assessed will perform at a level of adequate competency or better is justified by this established measure. Justification for this measure, in general, can be sought in whatever theoretical, practical and philosophical foundations were used to make this the national standard. The first is rather more challenging to address.
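As a concrete illustration of the 70% target (the scores and the competence cutoff below are invented for the example, not taken from the report), the standard amounts to a simple check on the share of assessed students at or above adequate competence:

```python
# Hypothetical sketch of the 70% adequacy target.
# The score list and cutoff are made-up illustration values.

def meets_target(scores, competence_cutoff, target_share=0.70):
    """Return True if at least target_share of students score
    at or above the competence cutoff."""
    if not scores:
        return False
    competent = sum(1 for s in scores if s >= competence_cutoff)
    return competent / len(scores) >= target_share

scores = [55, 72, 81, 68, 90, 74, 62, 77, 85, 70]
print(meets_target(scores, competence_cutoff=70))  # 7 of 10 reach 70, so True
```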

Justifying a standard of competence is difficult because of an epistemic problem raised by the ancient Greek Skeptics. If a standard is not self-justifying, it must be justified. If the justification is not self-justifying, it must be justified. Philosophically, this must lead to either a regress (infinite or circular) or a self-justifying foundation. As there seem to be no self-justifying foundations for standards, the regress problem wins the day and all standards are ultimately arbitrary. Fortunately, there is a pragmatic solution to this problem: presenting a plausible narrative for the standards that convinces the relevant authorities to accept them. This is what follows.

To measure the competence of an individual student in an assessment area, there must be an established standard of what counts as competent. To use the obvious analogy, to measure the height of a person, there must be an established and consistent means of measuring. One way to define competence in education is in terms of how the average student performs in that area. This is analogous to sorting out what is “normal” height—it is based on what is average in the relevant population.  As such, assessing the competence of Florida A&M University students required knowing the national average for comparable students in the relevant competency areas. To this end, the ETS Proficiency Profile (EPP) was utilized to set the standard—specifically the national mean. This standard is used in the areas the EPP tests: Communication, Critical Thinking, and Quantitative Reasoning. Since this method is accepted by the relevant authorities in assessment, it is justified.

While the use of standardized tests solves some of the assessment problems, it does not solve all of them. Specifically, it does not solve the problem of assessing areas that are not well-covered by standardized tests (such as Social/Ethical Responsibility) and it does not solve the problem of assessing individual artifacts, such as philosophy papers. Fortunately, there is an established solution to this problem, namely the use of rubrics. The main challenge with a rubric is developing it so that it properly and consistently sorts students into the specified levels of competence. While all rubrics are flawed in some manner, Florida A&M University began in 2004 with established rubrics from other universities and refined them over the years in accord with both national and local findings to ensure that best practices were being used. Since these rubrics are accepted by the experts in the field of assessment, they are justified as means of assessment.
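To make the rubric idea concrete, here is a minimal sketch of what "sorting students into specified levels of competence" looks like in practice. The level names and cutoff bands are hypothetical, not FAMU's actual rubric:

```python
# Hypothetical rubric: map a 0-100 rubric score to a competence level.
# Cutoffs and level names are invented for illustration.

LEVELS = [
    (90, "exemplary"),
    (80, "proficient"),
    (70, "adequate"),
    (0,  "developing"),
]

def competence_level(score):
    """Sort a rubric score into the first level whose cutoff it meets."""
    for cutoff, level in LEVELS:
        if score >= cutoff:
            return level
    raise ValueError("score must be non-negative")

print(competence_level(75))  # adequate
```

The consistency challenge the paragraph describes shows up here as keeping the cutoffs stable across raters and years.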

Other methods of assessment, such as focus groups and surveys, are also established as accepted methods by the relevant experts in the field of assessment. These methods are, of course, crafted and deployed in accord with the best-practices as established by the relevant experts in the field. Thus, these methods are also justified.


11 Responses


  1. Fabio Escobar said, on August 22, 2017 at 8:36 pm

    Thank you for sharing this thoughtful assessment report, Michael. I too am a philosopher (SUNY Buffalo 2006) and also work in assessment. My sense here is that you are in fact violating one of the central tenets of assessment in asserting that an assessment is acceptable if it is accepted by the “relevant authorities in assessment.” I would actually argue that the relevant authorities are you and your colleagues, not anyone else. Further, I would argue that your mode of presenting the report in this case will in fact raise a red flag to most accreditors. What they look for is not just that people are conducting assessment, but that assessment results are in fact useful to practitioners and actively being used to improve instruction or improve student performance. I think you’ll fail by that standard, so as a piece of professional advice from someone who has been through my share of accreditation experiences, I recommend that you reconsider the approach.

    However, I do think you raise a very important epistemological point here, and this is a point that has to be discussed not just as a philosophical curio, but as a question of how we can justify our shared epistemological foundations. When you point out that the entire exercise of setting standards raises an infinite regress problem, you are giving voice to a problem in the academy that just about every philosopher will notice. The act of setting arbitrary standards to cut off the regress seems anathema to most of us, and since most of us hopefully seek to be careful and serious philosophers we are loath to take arbitrary steps.

    But perhaps there’s another approach here. Why can’t the standards be set by appropriate conversations with our colleagues? After all, when you assess Ancient and Medieval Philosophy you have a course outline and a set of course outcomes upon which you rely. One of those outcomes is presumably something like “Students shall develop an understanding of Aristotle’s Metaphysics.” While that’s not exactly something like a numerical goal, it does set you up for establishing numerical targets that you and your colleagues can determine. How would you do so? Not primarily, I would argue, on the basis of what some external expert might recommend (were there any such experts who set such targets for us!), but instead based on what you think the market will bear. Let me explain.

    I will assume, for the sake of this conversation, that you teach this course and that you have a couple of other colleagues who teach this course from time to time. Let’s also say that you have all agreed to teach Aristotle’s Metaphysics, just as the course outline says that you should. So, teach it you do, and each of you evaluates students in his or her own way. Perhaps you all use tests, but maybe you use different tests. Or, you use papers, but with different prompts and formats. This diversity, of course, is perfectly OK in the academy and is in fact a healthy diversity. It does create a challenge when we conduct assessment, but we wouldn’t want to give up the diversity solely to meet an assessment need. So what do we do?

    May I suggest that the appropriate response to this is not to look for an external standard or to look for a “blessed” rubric, but instead to converse with your colleagues about the few things that all of you agree to measure about your students’ understanding of Aristotle? Once you agree you’ve essentially created your own rubric (or test, if that’s your favored approach) and you can collect data. From here, the script writes itself: data are collected, you get together once more to discuss the results, and then wonder (a) whether you did a good job in building the assessment protocol; and (b) whether, if you are satisfied with the protocol, your students learned all that you hoped they would learn. You take minutes at this meeting and perhaps agree to some next steps, such as a stepped up focus on substance or being-qua-being. You leave, having drafted the minutes and deposited them in an archive, and agree to meet again after you’ve all taught the class once more over the next couple of cycles.

    Now, what I have described above is collegial, data-driven, can be done over coffee, and relatively simple. Your local institutional research office can probably even do the data collection for you (if you have a good IR office, at least) and you can focus on what matters in this whole thing: encouraging collegial exchanges with your peers that drive the improvement of instruction and ensure that students are learning what we hope they will learn.

    I take the time to write all this not because I consider myself a proselytizer for assessment, mind you, but because I believe what I have just described is just a really neat way of being an academic philosopher! Surely there are a few jobs better than philosophy in the academy, but I haven’t come across more than just a couple. And one of the coolest things about the field is that we get to talk to each other and try to pass on ideas (I hesitate to write “knowledge,” here, being cognizant of the aforementioned epistemological circle).

    Anyhow, it does seem to me that everything I’ve described is really wholly within the sphere of philosophy performed alongside our peers. I mostly wanted to try to demonstrate that it’s possible to be a philosopher and see oneself as acting as a philosopher precisely when we are in fact doing assessment. After all, isn’t that the moment in our lives when we are most self-critical, and isn’t that at the heart of those famous Greek dicta to know ourselves and examine our lives? It seems to me that the spirit of Socrates is alive and well in assessment.

    • WTP said, on August 23, 2017 at 7:06 am


      My sense here is that you are in fact violating one of the central tenets of assessment in asserting that an assessment is acceptable if it is accepted by the “relevant authorities in assessment.” I would actually argue that the relevant authorities are you and your colleagues, not anyone else. Further, I would argue that your mode of presenting the report in this case will in fact raise a red flag to most accreditors. What they look for is not just that people are conducting assessment, but that assessment results are in fact useful to practitioners and actively being used to improve instruction or improve student performance.

      If I may…and forgive me as I don’t have much time to totally flesh this out as I have to go to work to generate the tax revenue on which all of this rides…I would submit that this approach may possibly be worse than Mike’s. The idea that the authority on a subject would be the very people deciding what the authoritative material should be will lead to the very inbreeding of ideas that has brought our academic institutions to the absurd condition that they are in today.

      I will assume, for the sake of this conversation, that you teach this course and that you have a couple of other colleagues who teach this course from time to time. Let’s also say that you have all agreed to teach Aristotle’s Metaphysics, just as the course outline says that you should.

      Herein lies your problem, right from the start. The course outline being generated by authorities who all agree with each other. Here’s a suggestion. Go out in the world and find people with completely different thoughts on Aristotle. Find those who disagree with him. Biologists, feminists (God help you there), Catholics, Jews, evangelical Christians, etc. Obviously you can’t absorb all of the criticisms but there is a broad philosophical disagreement on every subject. And yes, some perspectives are ridiculous and not worthy of one’s time. And some that are not ridiculous yet it may be hard to find supporters who can maintain a rational, honest discussion. But try. Especially with the latter group as you may find out more about your home perspective than you might initially believe. But make an active, conscious effort to do so. Use this information to baseline/outline your courses. Otherwise you’re just making copies of copies of copies. Even the best copying machines don’t do that well so why should we expect human institutions to work that way?

      Philosophy being quite a difficult subject to test in the real world without it getting redefined as something that is not philosophy, my guess is this is the best you can do. This is the best I can do right now as I have work to do.

    • Michael LaBossiere said, on August 24, 2017 at 6:44 pm

      Not to worry, I edited out all the philosophy stuff in the actual report. The full document contains the relevant paragraphs about how the results are used to close the assessment loop and so on. I’ve been through accreditation several times and know what I should and should not say. But, thanks for the friendly warning.

      • WTP said, on August 24, 2017 at 7:49 pm

        Stunning. Absolutely stunning. Someone criticizes Mike for saying something wrong and then says something wronger. I then point out that Mike ain’t so wrong, that the criticism itself is wronger. Then Mike jumps in to say, no, no, no. No worries, he’s actually hip with the wronger.

        God help me TJ, I’m not long for this world.

  2. TJB said, on August 23, 2017 at 3:10 pm

    Trying to “assess” philosophy and religion is nonsense. The rational thing to do is to follow the path of least resistance and waste as little time as possible.

    • WTP said, on August 23, 2017 at 5:50 pm

      That’s some corporate experience talking there, that is.

      • TJB said, on August 24, 2017 at 1:09 am

        37 years of working in big organizations…

        • WTP said, on August 24, 2017 at 9:15 am

          37. Navy included, I take it?

          • TJB said, on August 24, 2017 at 7:52 pm

            Yes, definitely. Learned many lessons in USN.

  3. WTP said, on August 25, 2017 at 8:47 am

    Wonder where these guys would fall on the assessment scale. Might even be FAMU students. Might even be Mike’s students.

    https://mysterytacklebox.com/blog/lets-debate-are-fish-really-wet-while-theyre-underwater/

    I believe this is what life was like before the Hindus invented the zero. Every day. This argument. And then one day…

    • TJB said, on August 25, 2017 at 9:13 am

      Math happened.

