Understanding the Data on Co-Requisite Remediation
We need to change how we handle remediation at the college level, because the traditional system is based on weak premises … and the most common implementations are designed to fail for most students. Where we have had three and even four remedial courses, we need to look at one course for most students.
Because of that baseline, the fanatical supporters of “co-requisite remediation” are having a very easy time selling their concepts to policy makers and institutional leaders. The Complete College America (CCA) website has an interactive report on this (http://completecollege.org/spanningthedivide/#remediation-as-a-corequisite-not-a-prerequisite) where you can see “22%” given as the national rate at which students starting in remediation complete a college math course. Alongside that is a list of four states that have adopted co-requisite models … all of which show 61% to 64%.
One obvious problem with the communication at the CCA site is that the original data sources are well hidden. Where does the ‘22%’ value come from? Is this all remediation relative to all college math? The co-requisite structures almost always focus on non-algebraic math courses (statistics, quantitative reasoning). One could argue that this issue is relatively trivial in the discussion; more on this later.
What is non-trivial is the source of the “61% to 64%”.
One of the community colleges from a co-requisite remediation state came to our campus and shared their detailed data … which makes it possible to explore what the data actually means. Here are their actual success rates in the co-requisite model they are using:
Math for Liberal Arts: 52%
Statistics: 41%
These are pass rates for students in both the college math course and the remediation course in the same semester. Another point in this data is that ‘success’ is considered to be a D or better.
For comparison, here are similar results from a college using prerequisite remediation, showing the rate of completing the college math course for those placing at the beginning algebra level.
Quantitative Reasoning: 53%
Statistics: 52%
In other words, if 100 students placed at the beginning algebra level in the fall … then about 52 of them passed their college math course in the spring. Furthermore, this college considers ‘success’ to be a 2.0 (C) or better. The prerequisite model here has higher standards and equal (or higher) results.
The problem with the data on co-requisite remediation is that only high-level summaries (aggregations) are shared. Maybe the state average really is “61%” even though the visiting college’s own combined rate is about 45% (they enroll more students in statistics than in Liberal Arts). Or perhaps the data is being summarized for all students in the college course, without separating out those in the co-requisite sections. One hopes that the supporters are being honest and ethical in their communication.
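The arithmetic behind that “about 45%” estimate is just an enrollment-weighted average of the two course pass rates shared above. A minimal sketch — the 40/60 enrollment split is my assumption, chosen only to reflect “more in statistics than Liberal Arts”:

```python
# Enrollment-weighted average of the two reported co-requisite pass rates.
# The pass rates are the visiting college's figures; the enrollment split
# is a hypothetical illustration (statistics-heavy, as the text describes).
pass_rates = {"Math for Liberal Arts": 0.52, "Statistics": 0.41}
enrollment = {"Math for Liberal Arts": 600, "Statistics": 900}  # assumed split

total = sum(enrollment.values())
weighted = sum(pass_rates[c] * enrollment[c] for c in pass_rates) / total
print(f"Combined pass rate: {weighted:.1%}")  # about 45% with this split
```

Any statistics-heavy mix lands in the mid-40s — nowhere near the “61% to 64%” headline.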
I suspect that the skewing of the data comes more from the “22%”. The source for this number usually tracks all levels of remediation into any college math course (including pre-calculus). The co-requisite data is a different measurement, because the college courses involved are limited (statistics, quantitative reasoning).
Another interesting thing about the data that was shared from the co-requisite remediation college is this statement:
Only about 20 students out of 1500 in co-requisite remediation had an ACT Math score at 15 or below.
At my institution, about 20% of our students have ACT Math scores at 15 or below. Nationally, the ACT Math score of 15 is at the 15th percentile. Why does one institution have about 1% in this range? Is co-requisite remediation being used to create selective admission community colleges? [Not by policy, obviously … but due to coincidental impacts of the co-requisite system.]
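The share in question is easy to check: 20 students out of 1500 is far below both the national percentile and my institution’s proportion. A quick comparison, using only the figures quoted above:

```python
# Compare the co-requisite college's share of low ACT Math scores (<= 15)
# against the national percentile and my institution's share, as quoted.
corequisite_share = 20 / 1500          # roughly 1.3% of their co-requisite students
national_share = 0.15                  # ACT Math 15 sits near the 15th percentile
my_institution_share = 0.20            # about 20% of our students

print(f"Co-requisite college: {corequisite_share:.1%}")
print(f"National:             {national_share:.0%}")
print(f"My institution:       {my_institution_share:.0%}")
```

A population that should include roughly one student in six (or one in five) at this score level instead includes about one in seventy-five — which is what raises the selectivity question.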
Sometimes I hear the phrase “a more nuanced understanding” relative to current issues in mathematics education. I suppose that would be nice. First, though, we need to start with a shared basic understanding. We cannot have that basic understanding as long as the data being thrown at us consists of ill-defined aggregate results lacking basic statistical validity.
Perhaps the co-requisite remediation data has statistical validity. I tend to doubt that: we use a peer review process to judge statistical validity, and we know that has not happened for the co-requisite remediation data we are generally seeing (especially from the CCA). The quality of their data is so poor that a student doing work of that quality would earn a failing grade in most introductory statistics courses. It is discouraging to see policy leaders and administrators profoundly swayed by statistics of such low quality.
Reducing ‘remediation’ to one measure is an obviously bad idea.