The Majority of Community Colleges Use Multiple Measures, or “Using Statistics to Say Nothing Impressively”
My college has a team working on implementing multiple measures placement for our credit math courses. We are early in the process, so we are primarily collecting information. One of my colleagues found an organization with both internal resources and links to external resources. One of those external resources led me to a “CAPR” report (more on the acronym in a moment) with a good example of bad statistics.
So, here’s the question:
What proportion of American community colleges (defined as associate degree-granting institutions) use multiple measures to place students in mathematics?
A place with an ‘answer’ to this good question is the Center for the Analysis of Postsecondary Readiness (CAPR), with its report “Early Findings from a National Survey of Developmental Education Practices” (see https://postsecondaryreadiness.org/wp-content/uploads/2018/02/early-findings-national-survey-developmental-education.pdf). Using data from two national surveys, the report shows the graph below:
[Figure 1 from the CAPR report: percentage of public two-year institutions using multiple measures for math placement, 2011 versus 2016]
So, what is the probability of a given community college using multiple measures placement? It’s not 57%, that’s for sure. In general, this work is being done by states. If your community college is in California, the probability of using multiple measures is pretty much 100%. On the other hand, if your community college is in Michigan, the probability is somewhere around 5% to 10%. Is the probability rising over time?
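To make that distinction concrete, here is a minimal sketch in Python, assuming hypothetical college counts (only the roughly-100% California share and the 5% to 10% Michigan share come from the estimates above), of how a national rate is just a count-weighted average of very different state rates:

```python
# Minimal sketch: a national adoption rate is a college-count-weighted average of
# state rates, so it need not describe any individual college's situation.
# Only the California (~100%) and Michigan (~5-10%) shares follow the post;
# the college counts and the "Other" bucket are hypothetical.

# state -> (number of public two-year colleges, share using multiple measures)
states = {
    "California": (115, 1.00),   # statewide mandate: essentially all colleges
    "Michigan":   (28,  0.07),   # no mandate: roughly 5-10%
    "Other":      (800, 0.55),   # placeholder for the rest of the country
}

total_colleges = sum(n for n, _ in states.values())
national_rate = sum(n * rate for n, rate in states.values()) / total_colleges

print(f"National rate: {national_rate:.0%}")
for name, (n, rate) in states.items():
    print(f"  {name}: {rate:.0%} of {n} colleges")

# The national figure lands nowhere near either California's or Michigan's rate;
# the marginal (national) probability says little about the conditional
# probability for a college in any particular state.
```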
Here is what the report says to ‘interpret’ the graph:
This argument [other indicators of college readiness provide a more accurate measure of college success] has gained traction in recent years among public two-year institutions. In a 2011 survey, all public two-year institutions reported using a standardized mathematics test to place students into college-level math courses; as shown in Figure 1, only 27 percent reported using at least one other criterion, such as high school grade point average or other high school outcomes. Just five years later, 57 percent of public two-year institutions reported using multiple measures for math placement.
Clearly, the implication is that community colleges are choosing to join this ‘movement’. Of course, some community colleges are making that choice (as mine is). However, a large portion of that 57% in 2016 reflects states with mandated multiple measures (California, North Carolina, Georgia, Florida, and probably others). A national figure like that has no particular meaning for any given location or college in the country. An actual movement could only be measured in states without a mandate, along the lines of the sketch below.
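As a rough illustration of that comparison, here is a sketch with entirely invented counts (chosen only so the overall rates land near the report’s 27% and 57%), splitting the adoption rate by whether a college sits in a mandate state:

```python
# Rough sketch with invented counts: compute the adoption rate separately for
# colleges in states WITH a placement mandate and in states WITHOUT one,
# instead of quoting a single national percentage that mixes the two.

# (year, in_mandate_state) -> (colleges using multiple measures, total colleges)
survey = {
    (2011, True):  (90,  100),    # few states mandated multiple measures in 2011
    (2011, False): (250, 1150),
    (2016, True):  (490, 500),    # more mandate states by 2016; near-universal use there
    (2016, False): (225, 750),    # voluntary adoption grows only modestly
}

def rate(year, mandate=None):
    """Adoption rate for a survey year, optionally restricted by mandate status."""
    cells = [v for (y, m), v in survey.items()
             if y == year and (mandate is None or m == mandate)]
    used = sum(u for u, _ in cells)
    total = sum(t for _, t in cells)
    return used / total

for year in (2011, 2016):
    print(f"{year}: overall {rate(year):.0%}, "
          f"non-mandate states only {rate(year, mandate=False):.0%}")

# With these invented numbers:
#   2011: overall 27%, non-mandate states only 22%
#   2016: overall 57%, non-mandate states only 30%
# The headline rate more than doubles, but most of that jump comes from mandates,
# not from colleges choosing multiple measures on their own.
```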
Essentially, the authors are using statistics to say absolutely nothing in an impressive manner. Multiple measures is clearly a ‘good thing’ because more colleges are doing it, so the logic goes. Unfortunately, the data does not mean anything like that — multiple measures are most commonly imposed by non-experts who have the authority to mandate a policy. [By ‘non-expert’, I mean people whose profession does not involve the actual work of getting students correctly placed in mathematics … politicians, chancellors, etc.]