Completion Agenda and Change in Mathematics Education

Today, I am at a state conference on Student Success, and there is the usual conversation about the ‘completion agenda’, with a strong focus on mathematics in community colleges.  We cannot hide from the completion agenda, so perhaps we should better understand how we can use this opportunity to achieve some of our own basic goals.

As you probably know, the completion agenda is primarily driven by philanthropy working through foundations and grants.  We share a goal with these stakeholders: getting more (many more) students to achieve their academic goals and find employment.  However, we have some tensions and areas of disagreement.

Organizations within the completion agenda have released reports about developmental education, and developmental mathematics in particular.  Some reports are research studies focused on analysis of data, with a slight bias toward interpreting that data from the completion standpoint.  A few reports have been dramatic dismissals of any value in developmental courses.

Like many of us, I have a strong reaction to the dismissive reports.  I try to remember the audience those reports were written for: the high-level policy makers who need to support changes.  Perhaps the actors in the completion agenda believe that something dramatic is needed to get that audience’s attention.  An unintended consequence of these dismissive reports is that they get quoted and cited by popular media to the point that a broad spectrum of people believes the conclusions.  Our best approach might be to smile and nod: acknowledge the reports without arguing about their conclusions.

Maybe we can look at the situation this way:  pressures are being applied in order to create change, and the forces are now strong enough that “not changing” is not an option (even if we wanted it to be).  The foundations and the funding process tend to reward certain approaches more than others, in particular the integration of technology in some way.  We do not have to agree that these are the best approaches.  And, because of the forces on our profession at this time, it is relatively easy to implement our own ideas of a better solution (or solutions).

I’m reminded of a story.  One person in a community is viewed with suspicion by a few, so an investigation is begun.  The investigation uncovers some prior falsehoods by that person (minor ones at that), but no direct evidence of wrongdoing.  Nevertheless, the investigation continues.  Friends are questioned about the person’s loyalty.  Implications are made, even though no evidence is found to support them.  These implications are repeated by a small but active group over a period of days … until the community comes to believe that the person is a traitor.  In fact, the person is loyal; the falsehoods involved nothing more than statements on forms.  Repetition of a statement becomes ‘truth’.

We see reports saying that “acceleration works well”, so developmental math is limited to one semester.  We see reports saying that “placement tests don’t work”, so all students are placed into college-level courses.  Initial evidence of positive results does nothing to prove the validity of a methodology.  The completion agenda is not using the scientific method, even though the process is ‘data driven’; the completion agenda works more like a business plan than like science.

Our role is to keep the science in mathematics education.  “What works” needs to be understood within the context of the work (its purposes) and needs to be consistent with other knowledge (such as cognitive psychology).  Piloting a methodology does not, by itself, prove that we have a sound solution.

Let’s articulate what WE see as the problem being solved.  For me, it’s not about completion directly; it’s more about mathematics (each course having good mathematics) and about not wasting students’ time.  Our view of the problem will only be heard if we communicate it.

Let’s gather and share data on our pilot programs whenever possible.  Just as importantly, when something good happens, we need to keep reporting those results until more pilots are done … and until we understand how the pilot works (or doesn’t).  [The completion agenda has forgotten the value of experiments that fail.]

We cannot match the communication capabilities of foundations, though we can come close.  The people working on ‘completion’ want to improve education, but they need the profession to be involved with solutions.  We can keep all voices in the conversation.  We can provide additional results that might help explain patterns in the data.  Collectively, we need to understand the problem and why a ‘solution’ works in order to bring progress to our profession.

We each have a role to play.  What’s yours?
