A Philosopher's Blog

Student Evaluations of Faculty

Posted in Philosophy, Universities & Colleges by Michael LaBossiere on January 9, 2015

While college students have been completing evaluations of faculty since the 1960s, these evaluations have taken on considerable importance in recent years. There are various reasons for this. One is a conceptual shift towards the idea that a college is primarily a business and that students are customers. On this model, student evaluations of faculty are part of the customer-satisfaction survey process. A second is an ideological shift in regards to education: education is increasingly seen as a private good and as something that needs to be properly quantified. This is also tied into the notion that the education system is, like a forest or an oilfield, a resource to be exploited for profit. Student evaluations provide a cheap method of assessing the value provided by faculty and, best of all, provide numbers (numbers usually based on subjective assessments, but pay that no mind).

Obviously enough, I agree with the need to assess performance. As a gamer and runner, I have a well-developed obsession with measuring my athletic and gaming performances, and I am comfortable with letting that obsession spread freely into my professional life. I want to know if my teaching is effective, what is working, what is not, and what impact I am having on the students. Of course, I want to be confident that the methods of assessment that I am using are actually useful. Having been in education for quite some time, I do have some concerns about the usefulness of student evaluations of faculty.

The first and most obvious concern is that students are, almost by definition, not experts in regards to assessing education. While they obviously take classes and observe (when not Facebooking) faculty, they typically lack any formal training in assessment, and one might suspect that having students evaluate faculty is on par with having sports fans assess coaching. While fans and students often have strong opinions, this does not really qualify them to provide meaningful professional assessment.

Using the sports analogy, this can be countered by pointing out that while a fan might not be a professional in regards to coaching, a fan usually knows good or bad coaching when she sees it. Likewise, a student who is not an expert at education can still recognize good or bad teaching.

A second concern is the self-selection problem. While students have access to the evaluation forms and can easily go to Rate My Professors, students who take the time to show up and fully complete the forms or go to the website will tend to have stronger feelings about the professor. These feelings will tend to bias the results so that they are more positive or more negative than they should be.

The counter to this is that the creation of such strong feelings is relevant to the assessment of the professor. A practical way to counter the bias is to ensure that most (if not all) students in a course complete the evaluations.
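
To make the self-selection worry concrete, here is a minimal sketch in Python. Every number in it is invented for illustration: a hypothetical class-wide distribution of opinions on a 1-5 scale, and assumed response rates that are much higher for students with strong feelings (in this made-up case, especially the dissatisfied ones).

```python
# A made-up illustration of the self-selection worry. All numbers are
# hypothetical: a class-wide distribution of opinions on a 1-5 scale and
# assumed response rates that are higher for students with strong feelings.

# How many of the 100 enrolled students hold each opinion (hypothetical).
class_distribution = {1: 5, 2: 10, 3: 35, 4: 35, 5: 15}

# Assumed probability that a student with a given opinion completes the form:
# in this sketch, strongly dissatisfied students respond most often and
# moderate students respond least often.
response_rate = {1: 0.9, 2: 0.9, 3: 0.15, 4: 0.15, 5: 0.5}

def average(counts):
    # Weighted average rating over a {rating: count} table.
    total = sum(counts.values())
    return sum(rating * n for rating, n in counts.items()) / total

# Expected number of respondents at each rating under self-selection.
respondents = {r: n * response_rate[r] for r, n in class_distribution.items()}

print("Class-wide average (everyone responds):  %.2f" % average(class_distribution))
print("Average among self-selected respondents: %.2f" % average(respondents))
```

Under these invented assumptions the reported average drops well below the class-wide average; if the satisfied students were instead the eager responders, it would drift upward. Either way, the sketch also shows why pushing completion toward all students, as suggested above, shrinks the gap.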

Third, people often base their assessments on irrelevant factors about the professor. These include such things as age, gender, appearance, and personality. The concern is that these factors make evaluations a form of popularity contest: professors who are liked will be evaluated better than professors who are not as likeable. There is also the concern that students tend to give younger professors and female professors worse evaluations than older professors and male professors, and these sorts of gender and age biases lower the credibility of such evaluations.

A stock reply to this is that these factors do not influence students as strongly as critics might claim. So, for example, a professor might be well-liked, yet still get poor evaluations in regards to certain aspects of the course. There are also those who question the impact of alleged age and gender bias.

Fourth, people often base assessments on irrelevant factors about the course, such as how easy it is, the specific grade received, or whether they like the subject. Not surprisingly, it is commonly held that students give better evaluations to professors whom they regard as easy and downgrade those they see as hard.

Given that people generally base assessments on irrelevant factors (a standard problem in critical thinking), this does seem to be a real concern. Anecdotally, my own experience indicates that student assessments can vary a great deal based on irrelevant factors that students explicitly mention. I have a 4.0 on Rate My Professors, but there is quite a mix in regards to the review content. What is striking, at least to me, is the inconsistency between evaluations. Some students claim that my classes are incredibly easy (“he is so easy”), while others claim they are incredibly hard (“the hardest class I have ever taken”). I am also described as being very boring and very interesting, helpful and unhelpful, and so on. This sort of inconsistency in evaluations is not uncommon and does raise the obvious concern about the usefulness of such evaluations.

A counter to this is that the information is still useful. Another counter is that the appropriate methods of statistical analysis can be used to address this concern. Those who defend evaluations point out that students tend to be generally consistent in their assessments. Of course, consistency in evaluations does not entail accuracy.
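
The last point, that consistency does not entail accuracy, can be illustrated with a toy example. All of the numbers below are invented, including the "benchmark" score, which stands in for some independent method of assessing teaching quality that, as discussed below, we do not actually have.

```python
# Invented numbers illustrating that consistency does not entail accuracy.
from statistics import mean, stdev

# Twelve students rate a course dimension on a 1-5 scale; the ratings agree
# closely with one another, so by a consistency standard they look reliable.
student_ratings = [2, 2, 2, 3, 2, 2, 3, 2, 2, 2, 3, 2]

# A stipulated benchmark score from some other method of assessment.
# (Purely hypothetical; the post's point is that no such settled method exists.)
benchmark = 4.0

print("Mean rating:        %.2f" % mean(student_ratings))
print("Spread (stdev):     %.2f  (low spread = highly consistent)" % stdev(student_ratings))
print("Gap from benchmark: %.2f  (consistent, yet far from the benchmark)"
      % (benchmark - mean(student_ratings)))
```

The ratings cluster tightly around one another and so pass a consistency test, yet they could still all be far from whatever the right answer is; agreement among raters measures reliability, not correctness.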

To close, there are two final general concerns about evaluations of faculty. One is the concern about values. That is, what is it that makes a good educator? This is a matter of determining what it is that we are supposed to assess and to use as the standard of assessment. The second is the concern about how well the method of assessment works.

In the case of student evaluations of faculty, we do not seem to be entirely clear about what it is that we are trying to assess, nor do we seem to be entirely clear about what counts as being a good educator. In the case of the efficacy of the evaluations, to know whether or not they measure well, we would need some other means of determining whether a professor is good or not. But if there were such a method, then student evaluations would seem unnecessary; we could just use those methods. To use an analogy, when it comes to football we do not need to have the fans fill out evaluation forms to determine who is a good or bad athlete: there are clear, objective standards in regards to performance.

4 Responses

  1. ajmacdonaldjr said, on January 9, 2015 at 2:30 pm

    Experience is always subjective. A student’s experience of taking classes with a professor is always subjective. I would take any such evaluations with a grain of salt.

  2. nailheadtom said, on January 11, 2015 at 1:36 pm

    “To use an analogy, when it comes to football we do not need to have the fans fill out evaluation forms to determine who is a good or bad athlete: there are clear, objective standards in regards to performance.”

    Are we comparing the professor to a coach or an athlete? It doesn’t matter, really; a team sport is a bad analogy. Coaches not only educate their players in the techniques and nuances of the game, they also attempt to control their play during the game itself. And coaches are notoriously bad judges of performance, as shown by how quickly they have adopted newer forms of evaluation such as Sabermetrics.

    Realistically, the goal of the college student is to graduate quickly with high grades at minimal effort and move on with life. Instructors that facilitate that goal are rated highly; those that impede it, even if they dispense valuable knowledge, aren’t rated as highly. Certainly there are other factors involved, but that’s the nut. If students could get a degree through a surgical implant, there wouldn’t be any more classes.

    • Michael LaBossiere said, on January 12, 2015 at 2:38 pm

      True: implanted skills would probably end most schools. Why spend all that time and money learning when you could just pop in a mod or download a skillset?

  3. […] of problems with typical student evaluations of professors (see here and here, for example, and this post by Michael LaBossiere at A Philosopher’s Blog), but they are probably here to stay. They can […]

