Wednesday, January 19, 2011

All my biases confirmed: Education Issue

I'm sure all of you have heard about the new book, Academically Adrift. Drawing on a large longitudinal study, the authors found that 45% of students made no gains on the Collegiate Learning Assessment (CLA) during their first two years in college, and 36% made no gains over four years. That basically means colleges are not reaching a little over a third of our students (the ones who don't drop out) at all. But the study also points out exactly whom we are not reaching:
After controlling for demographics, parental education, SAT scores, and myriad other factors, students who were assigned more books to read and more papers to write learned more. Students who spent more hours studying alone learned more. Students taught by approachable faculty who enforced high expectations learned more. "What students do in higher education matters," the authors note. "But what faculty members do matters too." The study also found significant differences by field of study. Students majoring in the humanities, social sciences, hard sciences, and math—again, controlling for their background—did relatively well. Students majoring in business, education, and social work did not. Our future teachers aren't learning much in college, apparently, which goes a long way toward explaining why students arrive in college unprepared in the first place. Financial aid also matters. The study found that students whose financial aid came primarily in the form of grants learned more than those who were paying mostly with loans. Debt burdens can be psychological and temporal as well as financial, with students substituting work for education in order to manage their future obligations. Learning was also negatively correlated with—surprise—time spent in fraternities and sororities.


Well, glad to know there is now hard data to back up all my biases. I am also with Tom: we need to stop cutting programs that teach critical thinking.

As for the suggestion that the federal government tie loan money to how well schools do on the CLA, which is what Kevin Carey (the first link) seems to be proposing: from what I can tell from discussions with colleagues at schools that have implemented internal CLA testing in the past, the stats are pretty easy to juke. For example, you can open the initial assessment to the general population but test only students enrolled in your honors program in their senior year. You can require the test but offer no incentives (positive or negative) the first time, which frequently produces students hurrying to get through it, and then give strong incentives for taking it the last time, creating a climate where students take it more seriously. You can teach classes to the test. Indeed, you can make it common for professors to give in-class evaluations that mimic the methodology of the test itself (questions in which students examine in-class pieces of evidence, then explain and evaluate the arguments those pieces make).

I'm not saying that the CLA is a bad metric in itself. It can be misused, but at first blush, the way it was used here seems to have been honest and up-front. I am saying that the CLA is a bad metric for figuring out which schools should get money. Unless I am wrong (and I could be; I would love to hear from readers with experience dealing with the CLA), it seems pretty easily manipulable once schools have a strong financial incentive, as the quick sketch below illustrates.
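To make the selection-bias trick concrete, here is a minimal sketch using made-up numbers (a hypothetical CLA-like score scale, not actual CLA data). It simulates a cohort where nobody learns anything at all, tests everyone at entry, and retests only the honors students at exit. The "gain" that falls out is pure selection:

```python
import random
import statistics

# Hypothetical CLA-like scores for a freshman cohort (made-up scale).
# We assume zero real learning, so any measured "gain" is selection alone.
random.seed(42)
cohort = [random.gauss(1100, 150) for _ in range(2000)]

# Year 1: the whole entering class sits for the initial assessment.
entry_mean = statistics.mean(cohort)

# Year 4: only honors students (say, the top 10% of entrants) are retested.
# Their scores have not changed one point since freshman year.
honors = sorted(cohort)[-200:]
exit_mean = statistics.mean(honors)

print(f"Entry mean (everyone tested):   {entry_mean:.0f}")
print(f"Exit mean (honors only tested): {exit_mean:.0f}")
print(f"Apparent 'gain' from selection: {exit_mean - entry_mean:+.0f}")
</parameter>
```

Running this prints an apparent gain of a couple hundred points, even though the simulation never changes a single student's score. The incentive trick works the same way: depress effort (and scores) on the first sitting and boost it on the last, and the school-level "value added" goes up without anyone learning more.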