The latest book to take a critical look at American universities is Academically Adrift: Limited Learning on College Campuses by Richard Arum and Josipa Roksa (University of Chicago Press, 2011). In addition to reading the book, I attended a lecture when the Smith School invited Professor Roksa to present her findings to our faculty.
The authors followed a large group of college students over time. Much of their data came from the administration of the Collegiate Learning Assessment (CLA), which attempts to measure critical thinking skills rather than factual knowledge. This assessment is very much in tune with the teaching in business schools, where we encourage students to think about issues and engage in problem solving. We also make extensive use of case studies, and the CLA presents the student with a problem to solve in the form of a short narrative, much like the longer cases in our classes.
I won’t summarize the book here, but the general tone is that less learning has been going on in colleges since the 1960s. Professor Roksa mentioned one finding that highlights this decline: a number of “B” students reported that they needed to study only an hour a night. Of course, there was wide variance; students in selective colleges study more. And college is about more than learning facts or performing well on a single test. But I think a lot of us feel that the study has provided evidence that the rigor of college education in the U.S. has, in general, been declining (with the exception of some of our top universities?).
There are undoubtedly many reasons for this decline, and everyone has their favorites ranging from the quality of K-12 education to television and the Internet distracting students starting at an early age.
Thinking about my own career, I can add two causes to the list of reasons for the findings in Academically Adrift. When I was an undergraduate, I was never asked to evaluate a class or a professor or to complete a survey about teaching. When I went to graduate school, the story was the same. I asked our sons, who graduated from college in the 90s, and by that time things had changed; at the same undergraduate school I attended, they completed course evaluations for their classes.
Because it is easy, we tend to choose one or two numbers from course evaluations, often the average of several items, and use them as indicators of faculty teaching quality. This simple measure and its importance motivate a lot of behavior that may not be best for educational outcomes. For example, an instructor has to be careful not to demand too much work, because if the workload is too far above average, students will be critical on the evaluations. A faculty member has to worry about how entertaining she is as well as about the material being presented. And grading too harshly will also elicit a poor response from students on the course evaluation survey. I suspect that for many of us who have taught for several decades, the demands and rigor of our courses have declined over time.
The other trend that might contribute to academic drift is the pressure at research universities to publish in “A” journals. Publishing has become more and more difficult as the journals have become more demanding. In my own field of Information Systems, peer reviewers spend most of their time looking for reasons to reject a paper rather than figuring out how to help the author improve it. Time devoted to research has to come from someplace; if it comes from committee work, no problem. But if the time comes from teaching, then students and education will suffer.
Is there a solution that mitigates these two problems? One would be to change the course evaluation process to try to measure learning rather than the entertainment value of the instructor. Deans could also have faculty members known for the rigor of their courses evaluate course syllabi and teaching innovations.
Solving the research problem is more difficult because research universities obviously have to emphasize research. One strategy would be to allow a faculty member, post tenure, to change the weights given to research and teaching in her annual evaluation. Many universities have teaching-only faculty who are not on the tenure track; a school could increase the status and remuneration of these positions so that a teaching position is seen as being as valuable as a faculty position that involves research. Some schools also have research associates who work only on research and have no teaching responsibilities. Adding these positions to the mix would provide more flexibility for faculty who want to emphasize their teaching.
When you are adrift on a sailboat, the captain can raise the sails or start the engine and set a direction. We need the captains of academia to choose a new heading and get U.S. universities back on course.