Category: Research connected to practice

Walking the STEM Path III: Data on Intermediate Algebra

I have been getting ready for a presentation at AMATYC on the ‘bridge to somewhere’ … Algebraic Literacy.  A recent post described how to identify Algebraic Literacy, compared to Intermediate Algebra.  This post will look at some nice research on how effective intermediate algebra is at preparing students for the typical courses that follow … college algebra or pre-calculus.  #bridgesomewhere #AlgebraicLit #DevMath

ACT routinely does research on issues related to higher education.  In 2013, ACT published a report called “A Study of the Effectiveness of Developmental Courses for Improving Success in College” (see http://www.act.org/research/researchers/reports/pdf/ACT_RR2013-1.pdf ).  The data come from 75 different institutions, representing well over 100,000 students.  I was very interested in their results relating to intermediate algebra and college algebra.

Their methodology involves calculating the conditional probability of passing college algebra, using the ACT Math score as the input variable; this was done for two groups … those who took intermediate algebra and those who did not.  Their work used a cutoff score of 19 for placing into college algebra (which seems low, but I trust that it was accurate).  Due to waivers and institutional flexibility, they had enough results below the cutoff to calculate the conditional probabilities for both groups; above the cutoff, there was only enough data for the group that did not take intermediate algebra.

As an example, for ACT math score of 18: the probability of passing college algebra was .64 for those without intermediate algebra … .66 for those with intermediate algebra.  For that score, taking intermediate algebra resulted in a 2 percentage point gain in the probability of passing college algebra.  The report also calculates the probability of getting a B or better in college algebra for the two groups (as opposed to C).
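
The comparison in the report boils down to conditional probabilities computed separately for the two groups.  Here is a minimal sketch of that calculation; the counts below are hypothetical, chosen only to reproduce the .64 / .66 example for an ACT math score of 18 (they are not the report's underlying data, which is not published here):

```python
# Conditional probability of passing college algebra at a given ACT math
# score, for students with and without intermediate algebra.
# Counts are hypothetical illustrations, NOT the ACT report's data.

def pass_rate(passed, total):
    """P(pass | group) as a proportion."""
    return passed / total

without_dev = pass_rate(64, 100)   # did not take intermediate algebra
with_dev = pass_rate(66, 100)      # took intermediate algebra

gain_in_points = (with_dev - without_dev) * 100
print(f"P(pass | no dev course) = {without_dev:.2f}")
print(f"P(pass | dev course)    = {with_dev:.2f}")
print(f"Gain: {gain_in_points:.0f} percentage points")
```

The same calculation, repeated at each ACT score for each group, produces the curves in the report's graph.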

Here is the overall graph:

[Image: ACT intermediate algebra vs. college algebra, August 2015]

The upper set (blue) shows the probability of passing (C or better), with the dashed line representing those who took the developmental course (intermediate algebra).  For all scores (14 to 18), the gap between the dashed and solid lines is 5 percentage points … or less.  In other words, the effectiveness of the intermediate algebra course approaches the trivial level.

The report further breaks down this data by the grade the student received in intermediate algebra; the results are not what we would like.   Receiving a C grade in intermediate algebra produces a DECREASED probability of passing college algebra (compared to not taking intermediate algebra at all).  Only those receiving an A in intermediate algebra have an increased probability of passing college algebra. [Getting a “B” is a null result … no gain.]

Our intermediate algebra course is both artificially difficult and disconnected from good preparation.  That’s what I will be talking about at the New Orleans AMATYC conference.

The results for intermediate algebra echo what the MAA calculus project found for college algebra/pre-calculus:  ‘below average’ students have a decreased probability of passing calculus after taking the prerequisite (when accounting for other factors).

Our current STEM path (intermediate algebra –> college algebra –> calculus) is a bramble patch.  The courses do not work, because we never did a deliberate design for any of them.  Intermediate algebra is a descendant of high school algebra II, and college algebra is a descendant of an old university course for non-math majors.

Fortunately, we have sufficient information about the needs of the STEM path to do better.  The content of the Algebraic Literacy course is engineered to meet the needs of a STEM path (as well as other needs).

 Join Dev Math Revival on Facebook:

Meaningful Mathematics … Learning Mathematics

Among the pushes from policy makers is ‘meaningful math’ … make it applicable to student interests.  Of course, this is not a new idea for us in mathematics; we’ve been using ‘real world applications’ to guide some reform efforts for decades. #realmath #collegemath #learningmath

Some current work in reforming mathematics education in college is based on a heavy use of context, where every new idea is introduced with a situation that students can understand.  We know that appropriate contexts with meaning to students help their motivation; do they help their learning?

Before I share what I know of the theory and research on these approaches, I would like you to envision two types of textbooks or instructional materials (print, online, or whatever).

  1. Opening the start of a lesson or section, repeatedly and at random, almost always shows a short verbal introduction followed by formal mathematical symbolism related to the new idea(s).
  2. Opening the start of a lesson or section, repeatedly and at random, shows a variety of contextual situations along with mention of a mathematical idea or tool that might be used.

Many traditional textbooks are type (1), while a lot of current reform textbooks are type (2).  Much of the change to (2) is based on instructor preferences, and I am guessing that much of the resistance to change is based on instructor preferences for (1).

Let’s take as a given that contexts that are accessible to students improve their motivation, and that we have a goal to improve student motivation; further, let us assume that we share a goal of having students learn mathematics (though that phrase means different things to different people).  There seem to be a number of questions to answer dealing with how a context-intensive course impacts student learning of mathematics.

  • How uniform is the impact of context on different learners?  (Is there an “ADA-type” issue?)
  • Do students learn both the context and the mathematics?
  • Is learning from a context more or less likely to be used and transferred to new situations?

I know the answer to the first question, based on research and experience: the impact is not uniform.  You probably understand that there are language issues for quite a few students, perhaps based on a class taught in English when the primary language is something else.  However, most of the contexts have a strong cultural factor.  For example, a common context for mathematics work is “the car”; there are local cultures where cars are not a personal possession, as well as cultures outside the USA where cars are either generally absent or relatively new (and, therefore, people know little about them).  The cultural problems can be overcome with sufficient scaffolding; is that how we want to spend time … does it limit the mathematical learning?  There are also ADA concerns with context: a sizable group of students have difficulties processing elements of ‘a story’ … leading to problems unpacking the context into the quantitative components we think are ‘obvious’.

The second question deals with how the human brain processes different types of information.  A context is a type of narrative, a story; stories activate isolated memories and create isolated memories.  To understand that, think about this context:

You are standing on a corner, and notice a car approaching the intersection.  When the light turns red, the car applies the brakes so that it stops in about 2 seconds.  You estimate that the distance during the stopping process is about 100 feet, and the speed limit is 30 miles per hour.  Let’s look at the rate of change in speed, assuming that this rate is constant through the 2 second interval.
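
The arithmetic the story asks for can be sketched directly; assuming constant deceleration over the 2-second interval (as the story specifies):

```python
# Rate of change of speed for the 'car stopping at an intersection' story,
# assuming the deceleration is constant over the 2-second interval.

stop_time = 2.0        # seconds
stop_distance = 100.0  # feet

# With constant deceleration, average speed = initial speed / 2,
# so initial speed v0 = 2 * distance / time.
v0_ft_per_s = 2 * stop_distance / stop_time      # 100 ft/s

# Rate of change of speed (the deceleration) = v0 / time.
rate_ft_per_s2 = v0_ft_per_s / stop_time         # 50 ft/s^2

# Convert the implied initial speed to mph (5280 ft/mile, 3600 s/hour).
v0_mph = v0_ft_per_s * 3600 / 5280               # about 68 mph

print(f"initial speed: {v0_ft_per_s:.0f} ft/s ({v0_mph:.1f} mph)")
print(f"rate of change of speed: {rate_ft_per_s2:.0f} ft/s^2")
```

Notice that the implied initial speed (about 68 mph) is well above the stated 30 mph limit — the kind of internal detail in a story that can distract a student from the mathematics.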

The technical name for a story in memory is “episodic memory”.  This particular story might not activate any episodic memories for a given student; that depends on the episodes they have stored and the sensory activators that trigger recall.  Some students will respond strongly and negatively to a particular story, and this does not have to depend upon a prior trauma.  More of a concern are students who have some level of survival struggle (food, shelter, etc.); many contexts will activate a survival mode, thereby severely limiting the learning.  Take a look at a report I wrote on ‘stories’: http://jackrotman.devmathrevival.net/sabbatical2006/2%20Here%27s%20a%20story,%20Ignore%20the%20Story.pdf

What happens to the mathematics accompanying the story?  If we never go past the episodic memory stage, the mathematics learned is not connected to other mathematics; it’s still a story.  In the ‘car stopping at an intersection’ story, the human brain might store the rate of change concepts with the rest of this specific story, instead of disassociating the knowledge so it can be used either in general or in new ‘stories’ (context).  Disassociating knowledge is another learning step; many context-based materials ignore this process, and that results in my biggest concern about ‘problem based learning’.

Using knowledge (Transfer … question 3) depends upon the brain receiving sensory input that activates the knowledge.  This is the fundamental problem with learning mathematics … our students do not see the same signals we see, ones that activate the appropriate information.  For this purpose, traditional symbolic forms and contextual forms have the same magnitude of difficulty: the building up of appropriate triggers to use information, as well as creating chunks of information that work together.  We need to be willing to “teach less mathematics” so that we can focus more on “becoming more like an expert with what we know”.  For more information on learning mathematics based on theoretical (and research-based) points of view, see http://jackrotman.devmathrevival.net/sabbatical2006/9%20Situated%20Learning.pdf and http://jackrotman.devmathrevival.net/sabbatical2006/6%20Learning%20Theories%20Overview.pdf 

I can’t leave this post without mentioning a companion issue: contextual learning is often done by ‘discovery’.  Some reform materials have an extreme aversion to ‘telling’, while traditional materials have an extreme aversion to ‘playing around’.  From what I know of learning (theory and research), I think it is safer to take the traditional approach … telling does not provide the best learning, but relying on discovery often results in even more incomplete and/or erroneous learning.  Just for fun, take a look at http://jackrotman.devmathrevival.net/sabbatical2006/8%20Telling,%20Explaining,%20and%20Learning.pdf

Looking for a brief summary of all of this stuff?

  • Contextualized learning comes with significant risks; use it with caution and a plan to overcome those risks.
  • Mathematics is, by its nature, a practical field; all math courses need significant context used in the process of learning mathematics.

These ideas about context are related to the efforts of foundations and policy influencing agents (like CCA).  We need to keep the responsibility for appropriate instruction … including context.


Co-requisite Remediation: When it’s likely to work

A recent post dealt with the “CCA” (Complete College America) obsession with ‘corequisite remediation’.  In case you are not familiar with what the method involves, here is my synopsis:

Co-requisite remediation involves the student enrolling in both a credit course and a course that provides remediation at the same time.  The method could be called ‘simultaneous’ remediation, since students deal with the credit course and the remedial work concurrently.

The co-requisite models are a reaction to the sequential remediation done in the traditional models.  For mathematics, some colleges have from two to five remedial courses in front of the typical college course (college algebra, pre-calculus, or similar level).  The logic of exponential attrition points out the flaws in a long sequence (see https://www.devmathrevival.net/?p=1685 for a story on that).
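
The exponential attrition argument is simple arithmetic: if students must pass every course in a sequence, completion rates multiply.  A minimal sketch (the 70% pass rate and sequence lengths below are illustrative, not data from any particular college):

```python
# Exponential attrition in a course sequence: if each course has pass rate p
# and students must pass every course, the fraction finishing n courses is
# p ** n.  The 0.70 pass rate below is an illustrative assumption.

def completion_rate(pass_rate, n_courses):
    """Fraction of starters who pass all n_courses in the sequence."""
    return pass_rate ** n_courses

p = 0.70
for n in range(1, 6):
    print(f"{n} course(s): {completion_rate(p, n):.0%} of starters finish")
```

Even ignoring re-takes and students who stop out between semesters, three remedial courses plus the college course (n = 4) leaves roughly a quarter of starters finishing — the order of magnitude is the point.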

The co-requisite models in use vary in the details, especially in terms of the degree of effort in remediation … some involve 1 credit (1 hour per week) in remedial work, others do more.  Some models involve adding this class time to the course by creating special sections that meet 5 or 6 hours per week instead of 4.

I do not have a basic disagreement with the idea of co-requisite remediation.  Our work in the New Life Project included these ideas from the start; we called it ‘just-in-time remediation’; this emphasis resulted in us not including any course before the Mathematical Literacy course.

The problem is the presumption that co-requisite remediation can serve all or almost all students.  For open-door institutions such as community colleges, we are entrusted with the goal of supporting upward mobility for people who might otherwise be blocked … including the portion needing remediation.  The issue is this:

For what levels of ‘remediation need’ is the co-requisite model appropriate?

No research exists on this question, nor am I aware of anybody working on it.  The CCA, like NCAT (the National Center for Academic Transformation), does not generally conduct research on its models.  NCAT actually did some, though the authors tended to be NCAT employees.  The CCA is taking anecdotal information about a new method and distributing it as ‘evidence’ that something works; I see that as a very dangerous tool, one we must resist.

However, there is no doubt that co-requisite remediation has the potential to be a very effective solution for some students in some situations.  Here is my attempt at defining the work space for the research question:  Which students benefit from co-requisite remediation?

Matching students to remediation model:

[Image: Matching students to remediation model]

Here is the same information as text (in case you can’t read the image above):

Portion of prerequisite material | Never learned it   | Misunderstands it  | Forgotten it
Small portion (5% to 25%)        | Co-requisite model | Co-requisite model | Co-requisite model
Medium portion (30% to 60%)      | Remedial course    | Remedial course    | Co-requisite model
Large portion (65% to 100%)      | Remedial course(s) | Remedial course(s) | Remedial course

The 3-by-3 grid is the problem space; within each cell, I have placed my hypothesis about the best remediation model (with the goal of minimizing the number of remedial courses for each student).
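
The grid can be encoded as a simple lookup table — this merely restates the hypotheses above in executable form (the placements are my conjectures, not research findings):

```python
# The 3-by-3 hypothesis grid as a lookup table:
# (portion of prerequisite material needing remediation, nature of the gap)
#   -> suggested remediation model.
# These placements are hypotheses to be tested, not research findings.

GRID = {
    ("small",  "never learned"):  "co-requisite model",
    ("small",  "misunderstands"): "co-requisite model",
    ("small",  "forgotten"):      "co-requisite model",
    ("medium", "never learned"):  "remedial course",
    ("medium", "misunderstands"): "remedial course",
    ("medium", "forgotten"):      "co-requisite model",
    ("large",  "never learned"):  "remedial course(s)",
    ("large",  "misunderstands"): "remedial course(s)",
    ("large",  "forgotten"):      "remedial course",
}

def suggested_model(portion, gap_type):
    """Return the hypothesized best model for one of the 9 student types."""
    return GRID[(portion, gap_type)]

print(suggested_model("medium", "forgotten"))   # co-requisite model
```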

As you probably know, advocates like CCA have been very effective … some states have adopted policies that force extensive use of co-requisite remediation “based on the data”.  Of course the data shows positive outcomes; that happens with almost all reasonably good ideas, just because there is a good chance of the right students being there, and because of halo and placebo effects.

What we need is some direct research on whether co-requisite remediation works for each type of student (like the 9 types I describe above).  We need science to guide our work, not politics directing it.


Data – Statistics – The CCA Mythologies (or Lies?)

Decision makers (and policy makers) are being heavily influenced by some groups with an agenda.  One of those groups (Complete College America, or CCA) states goals which most of us would support … higher rates of completion, a better educated workforce, etc.

A recent blog post by the CCA was entitled “The results are in. Corequisite remediation works.” [See http://completecollege.org/the-results-are-in-corequisite-remediation-works/]  CCA sponsored a convening in Minneapolis on corequisite remediation, where presenters shared data on their results.  Curiously, all of the presenters cited in the blog post had positive ‘results’ for their co-requisite model; clearly, this means that the idea works!  [Or, perhaps not.]

The textbook we use for our Quantitative Reasoning course includes a description of ‘results’ that were very (VERY) wrong.  A survey was done using telephone number lists and country club membership lists, back in the 1930s when few people had telephones.  The survey showed a clear majority favoring one candidate, based on over 2 million responses.  When the actual election was held, the other candidate won … by a larger margin than the survey had predicted for the leader.
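
That sampling failure is easy to simulate: a huge sample drawn from an unrepresentative subgroup still gives the wrong answer.  A minimal sketch (all percentages below are illustrative assumptions, not the actual 1930s figures):

```python
# Simulating a 1930s-style sampling failure: surveying only phone owners,
# who lean toward candidate A, when the overall population favors B.
# All percentages are illustrative assumptions, not historical data.
import random

random.seed(0)

def voter():
    """Return (has_phone, favors_a) for one simulated voter."""
    has_phone = random.random() < 0.35          # 35% own phones (assumed)
    if has_phone:
        favors_a = random.random() < 0.70       # phone owners lean A
    else:
        favors_a = random.random() < 0.30       # everyone else leans B
    return has_phone, favors_a

population = [voter() for _ in range(100_000)]

true_support_a = sum(f for _, f in population) / len(population)
phone_sample = [f for has_phone, f in population if has_phone]
survey_support_a = sum(phone_sample) / len(phone_sample)

print(f"true support for A:   {true_support_a:.0%}")     # about 44%
print(f"phone-survey support: {survey_support_a:.0%}")   # about 70%
```

The survey is enormous, yet it confidently predicts the loser — sample size cannot rescue a biased sampling frame.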

The lesson from that story is “we need a representative sample” before using the data in any way.  The CCA results fail this standard badly; their ‘results’ are anecdotes.  These anecdotes do, in fact, suggest a connection between co-requisite approaches and better student outcomes.  College leaders, including my own provost, report that they “have seen the numbers” showing that this method is promising.

Based on that exposure to questionable data, we are running an experiment this summer … students assessed at the basic arithmetic level are taking an intermediate algebra course, with extended class time and a supplemental instruction leader.  Currently, we still allow students to use an intermediate algebra course to meet a general education requirement, so this experiment has the potential to save those 11 students two (or more) semesters of mathematics.  I have not heard how this experiment is going.

Curiously, data on other ‘results’ would not be used to justify a change by decision makers and policy experts.  People have used data to support extreme and wrong notions in the past, and are still doing so today.  The difference with the CCA is that its preferred methodologies are treated as automatically valid, with the co-requisite remediation models at the top of the list.

Scientists and statisticians would never reach any conclusions based on one set of data.  We develop hypotheses, we build on prior research, we refine theories … all with the goal of better understanding the mechanisms involved in a situation.  Co-requisite remediation might very well work great — but not for all students, because very little ‘works’ for all students.  The question is which students, which situations, and what conditions will result in the benefits we seek.

The CCA claim that “corequisite remediation works” is a lie, communicated with a handful of numbers which support the claim, and this lie is then repeated & repeated again as a propaganda method to advance an agenda.  We need to be aware of this set of lies, and do our best to help decision makers and policy experts understand the scientific method to solving problems.

Instead of using absolute statements, which will not be true but are convenient, we need to remain true to the calling of helping ALL students succeed.  Some students’ needs can be met by concurrent remediation; we need to understand which students and find tools to match the treatment to them.  Our professionalism demands that we resist absolutes, point out lies, and engage in the challenging process of using statistics to improve what we do.
