I can't resist commenting on the op-ed in today's Times by Richard Arum and Josipa Roksa, the authors of a recent higher ed best seller, "Academically Adrift." Their data, based primarily on the Collegiate Learning Assessment (CLA), a widely used standardized test of general knowledge for college students, show that at many colleges and universities students aren't learning anything at all. They found, for instance, that nearly forty percent of the college students taking the test showed no gains whatsoever from their freshman to their senior year. That is an absolutely shocking conclusion, and one that, when you think about it, defies logic.
While I am sure it is true that college students are not being challenged as they once were, it is a bit much to expect us to believe that after four years of education, including at some very fine colleges, significant numbers of students have learned nothing. Maturity alone would assure them of some increase in knowledge.
So let's say, for the sake of argument, that the data are deeply flawed, and that the CLA is not getting a true measure of what students have learned, in part because there is no incentive for them to do well on the test. There has been occasional commentary on the limits of this test and the way it is administered, but not nearly enough has been said about its inadequacies and about what it aims to measure.
Here are the areas of critical thinking, data analysis, and writing ability that the CLA assesses:
How well does the student assess the quality and relevance of evidence?
How well does the student analyze and synthesize data and information?
How well does the student form a conclusion from their [sic] analysis?
How well does the student consider other options and acknowledge that their [sic] answer is not the only perspective?
How clear and concise is the argument?
How effective is the structure?
How well does the student defend the argument?
What is the quality of the student’s writing?
How well does the student maintain the reader’s interest?
The authors of the CLA are very proud of the fact that this test measures general knowledge, the sort of knowledge that so many colleges profess to be concerned with. But that is not what I have seen in practice. Most colleges, and certainly most college teachers, want students to acquire knowledge in a single academic discipline more than they want them to be proficient in the realm of general knowledge. The reason for this is obvious: that is what professors value, because that is how they themselves were trained.
So when authors like Arum and Roksa tell us about the problems of higher education - too many adjuncts, not enough rigor, overreliance on student evaluations, etc. - we should take them seriously while also remembering that there is another cause embedded deep in the culture of higher education: our love of the specialist and our contempt for the generalist. Future studies should spend far more time on what higher education does best: preparing students to think, analyze, and write in their academic major. Now, whether that is a good goal for higher education is a different question entirely. But, to be fair, it is by far what colleges have come to care about most, and it should be given its due.