Wednesday, August 6, 2014

Grade Inflation: Evidence from Two Policies

Grade inflation in U.S. higher education is a disturbing phenomenon. Here's a distribution of grades over time as compiled by Stuart Rojstaczer and Christopher Healy, and published a couple of years ago in "Where A Is Ordinary: The Evolution of American College and University Grading, 1940-2009," in the Teachers College Record (vol. 114, July 2012, pp. 1-23).




They offer a discussion of causes and consequences of grade inflation. The causes include the desire of colleges to boost the post-graduate prospects of their students and the desire of faculty members to avoid the stress of arguing over grades. The consequence is that high grades carry less informational value, which affects decisions by students on how much to work, by faculty on how hard to prepare, and by future employers and graduate schools on how to evaluate students. Here's a link to a November 2011 post of my own on "Grade Inflation and Choice of Major." Rojstaczer and Healy write:
Even if grades were to instantly and uniformly stop rising, colleges and universities are, as a result of five decades of mostly rising grades, already grading in a way that is well divorced from actual student performance, and not just in an average nationwide sense. A is the most common grade at 125 of the 135 schools for which we have data on recent (2006–2009) grades. At those schools, A’s are more common than B’s by an average of 10 percentage points. Colleges and universities are currently grading, on average, about the same way that Cornell, Duke, and Princeton graded in 1985. Essentially, the grades being given today assume that the academic performance of the average college student in America is the same as the performance of an Ivy League graduate of the 1980s.
For the sake of the argument, let's assume that you think grade inflation is a problem. What can be done? There are basically two policies that can be implemented at the level of a college or university. One is for the college to adopt a rule clamping down on grades in some way. The other is for the college to provide more information about the context of grades: for example, by reporting on the student's transcript both the student's own grade for a course and the average grade for that course. Wellesley College has tried the first approach, while Cornell University has tried the second.

Kristin F. Butcher, Patrick J. McEwan, and Akila Weerapana discuss "The Effects of an Anti-Grade-Inflation Policy at Wellesley College," in the Summer 2014 issue of the Journal of Economic Perspectives. (Full disclosure: I've been Managing Editor of the JEP since the inception of the journal in 1987.) They write: "Thus, the College implemented the following policy in Fall 2004: average grades in courses at the introductory (100) level and intermediate (200) level with at least 10 students should not exceed a 3.33, or a B+. The rule has some latitude. If a professor feels that the students in a given section were particularly meritorious, that professor can write a letter to the administration explaining the reasons for the average grade exceeding the cap."
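To make the cap concrete, here is a hypothetical illustration of the arithmetic (the roster and grades are invented for this example; the grade values are the usual 4-point scale, with A = 4.0, A- = 3.7, B+ = 3.3):

\[
\bar{g} \;=\; \frac{1}{n}\sum_{i=1}^{n} g_i, \qquad \text{e.g.}\quad \bar{g} \;=\; \frac{5(4.0) + 4(3.7) + 3(3.3)}{12} \;\approx\; 3.73 \;>\; 3.33,
\]

so this hypothetical 12-student section would exceed the cap, and under the policy the professor would either need to assign lower grades or write a letter justifying the average.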

Here's a figure showing the distribution of grades in the relevant classes across majors, relative to the 3.33 standard, in the years before the policy was enacted (Fall 1998 to Spring 2003). As the authors point out, the higher-grading and lower-grading departments tend to be much the same across colleges and universities.

After the policy was put in place, here is how the path of grades evolved, where the "treated" departments are those that had earlier been above the 3.33 standard, and the "untreated" departments are those that were already below it. Overall, the higher-grading departments remained higher-grading, but the gaps across departments were no longer as large.

In the aftermath of the change, they find that students were less likely to take courses in the high-grading departments or to major in those departments. For example, economics gained enrollments at the expense of other social science departments. In addition, student evaluations of teachers dropped in the previously high-grading departments.

Talia Bar, Vrinda Kadiyali, and Asaf Zussman discuss "Grade Information and Grade Inflation: The Cornell Experiment," in the Summer 2009 issue of the Journal of Economic Perspectives. As they write: "In the mid-1990s, Cornell University’s Faculty Senate had a number of discussions about grade inflation and what might be done about it. In April 1996, the Faculty Senate voted to adopt a new grade reporting policy which had two parts: 1) the publication of course median grades on the Internet; and 2) the reporting of course median grades in students’ transcripts. ... Curbing grade inflation was not explicitly stated as a goal of this policy. Instead, the stated rationale was that 'students will get a more accurate idea of their performance, and they will be assured that users of the transcript will also have this knowledge.'"

For a sense of the effect of the policy, here are average grades at Cornell before and after the policy took effect. The policy doesn't seem to have held down average grades. Indeed, if you squint at the line a bit, it almost appears that grade inflation increased in the aftermath of the change.



What seems to have happened at Cornell is that when median grades for courses were publicly available, students took more of the courses where median grades were higher. They write: "Our analysis finds that the provision of grade information online induced students to select leniently graded courses—or in other words, to opt out of courses they would have selected absent considerations of grades. We also find that the tendency to select leniently graded courses was weaker for high-ability students. Finally, our analysis demonstrates that a significant share of the acceleration in grade inflation since the policy was adopted can be attributed to this change in students’ course choice behavior."

The implication of these two studies is that if an institution wants to reduce grade inflation, it needs to do more than just make information about average grades available. Indeed, making that information available seems to induce students, especially students of lower ability, to choose more easy-grading courses. But as the Wellesley researchers point out, unilateral disarmament in the grading wars is a tricky step. In a world where grade point average is a quick-and-dirty statistic for summarizing academic performance, any school that acts on its own to reduce grade inflation may find that its students are on average receiving lower grades than their peers in the same departments at other institutions--and that potential future employers and graduate schools may not spend much time thinking about the reasons why.


Note: For the record, I should note that there are no comprehensive data on grades over time. The Rojstaczer and Healy data are based on their own research, and involve fewer schools in the past and a shift in the mix of schools over time. They write: "For the early part of the 1960s, there are 11–13 schools represented by our annual averages. By the early part of the 1970s, the data become more plentiful, and 29–30 schools are averaged. Data quantity increases dramatically by the early 2000s with 82–83 schools included in our data set. Because our time series do not include the same schools every year, we smooth our annual estimates with a moving centered three-year average." Their current estimates include data from 135 schools, covering 1.5 million students. In the article, they compare their data to other available sources, and make a strong argument that their estimates are a fair representation.
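For readers who want the smoothing step spelled out: assuming the standard definition of a centered three-year moving average, the smoothed series they plot is simply

\[
\tilde{G}_t \;=\; \frac{G_{t-1} + G_t + G_{t+1}}{3},
\]

where \(G_t\) is the average GPA across the schools reporting in year \(t\) and \(\tilde{G}_t\) is the smoothed value shown for that year. The smoothing evens out year-to-year jumps that come from schools entering or leaving the sample rather than from genuine changes in grading.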