The Rigor Unicorn

How would you define (or describe) “rigor” in college mathematics classes?  Can you define or describe “rigor” without using the words “difficult” or “challenging”?  I will share a recent definition, and counter with my own definition.

Before anything else, however, we need to recognize the lack of equivalence between rigor and “difficult” (and between rigor and “challenging”).  The basic problem with those concepts is that they are relational — a specific thing is difficult or challenging based on how it interacts with a person or a group of people.  Difficult and challenging are relative judgments, not properties of the object being described.  I found the calculus of trig functions difficult, not because there is any rigor involved — it was difficult for me because of the heavy role of memorizing formulae in that work, in that particular class with that particular professor.  Other learners find this same work easy.

A recent definition of “rigor” comes from the Dana Center (DC):

We conclude that rigor in mathematics is a set of skills that centers on the communication and use of mathematical language. Specifically, students must be able to communicate their ideas and reasoning with clarity and precision by using the appropriate mathematical symbols and terminology.

See http://dcmathpathways.org/resources/what-is-rigor-in-mathematics-really

This definition avoids both ‘banned’ words (difficult, challenging), and that is good.  This definition focuses on communication of ideas and reasoning, and that seems good.  When my department discussed this definition recently at a meeting, the question was naturally raised:

Does rigor exist in the absence of communication?

The problem I have with the “DC” definition of rigor is that it suggests that rigor only exists when there is communication taking place.  In other words, rigor does not describe the learning taking place … rigor describes the communication about that learning.  Obviously, communication about mathematics is critical to all levels of learning — whether there is lots of rigor or none.  I don’t think we can equate rigor with communication.  Such an equivalence tempts us to equate rigor with how we measure rigor.

As I think about rigor, I always return to concepts relating to the strength of the learning.  I’d rather have an equivalence between rigor and strength, as that makes conceptual sense.  The rigor exists even in the absence of communication.  Rigor describes the concepts and understanding being developed within the learner, not the object being learned.

My definition:

Rigor in mathematics refers to the accuracy and strength of learning, and specifically to the completeness of the cognitive schema within the learner, including appropriate connections between related or dependent ideas.

In some ways, this definition of rigor suggests that “rigor” and “like an expert” might be equivalent concepts.  I am suggesting that rigor describes the quality of learning compared to complete and perfect learning.  Rigor is not an on-off switch, rather it is a continuum of striving towards the state of being an expert about a set of concepts.

One of the reasons I approach the definition ‘differently’ is that rigor should exist in appropriate ways in every math course — from remedial through basic college mathematics, through calculus, and up to graduate-level and research work.  Rigor is not a destination, where we can declare “this student has rigor”.  Rigor is a quality comparison between the unseen learning and the state of an expert in that particular set of content.  When we teach basic linear functions, I seek to develop rigor in which students have qualities of learning like an expert would have, concerning connections and reasoning.  When we teach calculus of trig functions, I hope we seek to develop qualities of learning similar to an expert’s.

I believe the development of rigor is a fundamental ingredient to making mathematics innately easier for the learner.  When the knowledge is more complete (like an expert) the use of that knowledge becomes more efficient … and the learning of further mathematics requires less energy (just like an expert).  Rigor is the core ingredient in the recipe to make mathematicians from the students who arrive in our classrooms.

Rigor does not start in college algebra, nor in calculus.  Rigor is not the same as ‘difficult’.  Rigor can exist when there is no communication about the learning.  Rigor is the fundamental goal of all learning, at all levels … rigor is a way to measure the quality of learning.  Rigor is the goal of developmental mathematics … the goal of quantitative reasoning … the goal of pre-calculus … the goal of calculus … the goal of statistics.

The “rigor unicorn” is within each of us, and within each of our students.

 

Can We Even Say “Developmental” Anymore?

Some of us say “remedial mathematics”, others say “developmental mathematics”.  Do you feel like you can’t say either one now?

You may have heard that “NADE” changed its name from National Association for Developmental Education to “NOSS” … National Organization for Student Success.  You can understand why this was done, with the recent attacks on all things developmental.  Being understandable, however, does not make this type of thing “right”.  As far as NADE/NOSS is concerned, I think the name change will make it difficult for the organization to articulate a clear identity … since ‘student success’ is an over-arching concept, suggesting that the group will focus on the universe of higher education.  Who will speak on behalf of students who need advocates for overcoming weak preparation?

Clearly, this avoidance of the word developmental is a systemic problem — a symptom of massive denial — a denial being offered as a “solution”.  Obviously, remedial education (aka developmental education) has had significant problems in the past, with our focus on too many courses that did not provide enough benefit.  However, multiple measures and co-requisite courses will also fail to cope with the gaps in preparation that our students bring to us.  We could debate whether a high-school graduate SHOULD need coursework in college before being able to succeed in mathematics; ‘should’ is a very weak design principle for an educational system.  We must succeed in the real world.  Why should we penalize students by pretending that we have some magic that will somehow enable students with an SAT Math of 420 to succeed in a college curriculum with only added support to their ‘college math’ course?

If leaders don’t want to use labels like ‘developmental’, I encourage them to use the new replacement phrase “black magic”.  It would take some serious black magic to help students succeed in their college program with serious deficiencies in mathematics without doing some direct (prolonged) work on the problem.  In some cases, what is being done to avoid developmental math courses comes across as smoke & mirrors.  People implement grand plans, which (according to them) produce great results for all kinds of students.  Sign them up for “America’s Got Talent!” 🙂

I think we are better off using an accurate word like “remedial” and then having an honest discussion about identifying students who need one or two courses in order to be ready for success in their college program.  We need to think more about the whole college program, and less about passing a particular ‘college’ math course.  Opportunities for second chances and upward mobility are at the center of a stable democracy.

Language is important.  Not using a word (like “developmental”) does not solve the set of problems we face.  There is no magic in education; progress is made by applying deep understanding and critical thinking across a broad community committed to helping ALL students achieve their dreams.

 

Does the HS GPA Mean Anything?

In the context of succeeding in college-level mathematics, does the HS GPA have any meaning?  Specifically, does a high school grade point average over some arbitrary value (such as 3.2) indicate that a given student can succeed in college algebra or quantitative reasoning with appropriate support?

Statistically, the HS GPA should not exist.  The reason is that the original measures (semester grades on a scale from 0 to 4 with 0.5 increments) are ordinal; higher values are greater than smaller values, and that is all.  A mean of a measure depends upon a presumption of “interval measures” — that the difference between 0 and 1 is the same as the difference between 3 and 4.  The existence of the GPA (whether HS or college) is based on convenient ignorance of statistics.
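One way to see the problem, using invented grades: the mean is not invariant under order-preserving relabelings of an ordinal scale, even though such a relabeling encodes exactly the same ordinal information.

```python
# Grades are ordinal: we know 4 > 3 > 2, but not that the "distance"
# from 3 to 4 equals the distance from 0 to 1.  Any order-preserving
# relabeling is an equally valid encoding of the same ordinal data.
grades_a = [4, 1]   # hypothetical student A
grades_b = [3, 3]   # hypothetical student B

mean = lambda xs: sum(xs) / len(xs)

# Under the usual 0-4 labels, B has the higher "GPA" ...
assert mean(grades_a) < mean(grades_b)   # 2.5 < 3.0

# ... but relabel the top grade (4 -> 10), preserving order,
# and the ranking of the two means flips.
relabel = {0: 0, 1: 1, 2: 2, 3: 3, 4: 10}
assert mean([relabel[g] for g in grades_a]) > mean([relabel[g] for g in grades_b])   # 5.5 > 3.0
```

If the mean can be reversed by a relabeling that any ordinal scale permits, the mean is not a property of the ordinal data itself.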

Given the long-standing existence of the HS GPA, one can not hope for leaders to suddenly recognize the basic error.  Therefore, let’s assume that the HS GPA is a statistically valid measure of SOMETHING.  What is that something?  Is there a connection between that something and readiness for college mathematics?

The structure of the data used for the HS GPA varies considerably by region and state.  In some areas, the HS GPA is the mean of 48 values … 6 courses at 2 semesters per year for 4 years.  If the school schedule allows for 7 classes, then there are 56 values; that type of variation is probably not very significant for our discussion.  The meaning of the HS GPA is more impacted by the nature of the 6 (or 7) classes each semester.  How many of these courses are mathematical in nature?  In most situations, at the current time, we might see 8 of the 48 grades coming from a mathematics course with another 4 to 8 coming from a science course.  Although most students take “algebra II” in high school, a smaller portion take a mathematically intense science course (such as physics).

In other words, we have a measure which has approximately a 20% basis in mathematics alone.  The other 80% represents “English”, social science, foreign language, technology, and various electives.  Would we expect this “20% weighting” to produce useful connections between HS GPA and readiness for college mathematics?  If these connections exist, we should see meaningful relationships between HS GPA and accepted measures of readiness.

So, I have spent some time looking at our local data.  We have only been collecting HS GPA data for a short time (less than one year), and this data can be compared to other measures.  Here are the correlation coefficients for the sample (n>600 for all combinations):

  • HS GPA with SAT Math: r = 0.377
  • HS GPA with Accuplacer College Level Math: r = 0.164
  • HS GPA with Accuplacer Algebra: r = 0.338

Compare this with the correlations of the math measures:

  • SAT Math with Accuplacer College Level Math: r = 0.560
  • SAT Math with Accuplacer Algebra: r = 0.627
  • Accuplacer College Level Math with Accuplacer Algebra: r = 0.526
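For anyone who wants to run this kind of check on their own placement data, the Pearson coefficient is easy to compute directly; here is a minimal Python sketch (the paired values below are invented for illustration, not our sample):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (HS GPA, SAT Math) pairs -- NOT the data discussed above.
pairs = [(2.4, 430), (2.9, 480), (3.1, 500), (3.3, 470),
         (3.5, 560), (3.7, 520), (3.9, 610), (4.0, 580)]
gpas, sats = zip(*pairs)
print(round(pearson_r(gpas, sats), 3))
```

With real institutional data, the same function reproduces the coefficients listed above; the invented pairs here simply show the mechanics.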

Of course, correlation coefficients are crude measures of association.  In some cases, the measures can have a useful association.  Here is a scatterplot of SAT Math by HS GPA:

[Scatterplot: SAT Math by HS GPA]
The horizontal lines represent our cut scores for college level mathematics (550 for college algebra, 520 for quantitative reasoning). As you can see from this graph, the HS GPA is a very poor predictor of SAT Math.  We have, of course, examined the validity of the traditional measures of readiness for our college math courses.  The overall ranking, starting with the most valid, is:

  1. Accuplacer Algebra
  2. Accuplacer College Level Math
  3. SAT Math

The order of the first two differs depending on whether the context is college algebra or quantitative reasoning.  In all cases, the measures show consistent validity for predicting student success.

Here is a display of related data, this time relative to ACT Math and HS GPA.  The curves represent the probability of passing college algebra for scores on ACT Math.

[Figure: probability of passing college algebra by ACT Math score, one curve per HS GPA band]
[Source:  http://www.act.org/content/act/en/research/act-scores-and-high-school-gpa-for-predicting-success-at-community-colleges.html ]

For math, this graph is saying that basing a student’s chance of success just on the HS GPA is a very risky proposition.  Even at the extreme (a 4.0 HS GPA), the probability of passing college algebra ranges from about 20% to about 80%.  The ACT Math score, by itself, is a better predictor. The data suggests, in fact, that the use of the HS GPA should be limited to predicting who is not going to pass college algebra in spite of their ACT Math score … ACT Math 25 with HS GPA below 3.0 means “needs more support”.
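Curves of this shape are typically modeled with a logistic function; here is a sketch of how such a curve is evaluated (the intercept and slope are invented placeholders, not values fit to the ACT data):

```python
from math import exp

def pass_probability(act_math, intercept=-6.0, slope=0.3):
    """Logistic curve: predicted probability of passing college algebra
    given an ACT Math score.

    The intercept and slope here are hypothetical placeholders; a real
    model would fit them to institutional pass/fail records, likely with
    HS GPA as a second predictor.
    """
    return 1.0 / (1.0 + exp(-(intercept + slope * act_math)))

# The curve is monotone: higher ACT Math, higher predicted pass rate.
for score in (16, 20, 25):
    print(score, round(pass_probability(score), 2))
```

Fitting one such curve per HS GPA band reproduces the kind of display in the ACT report, and makes the spread between bands at a fixed score easy to quantify.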

So, back to the basic question: What does the HS GPA mean? Well, if one ignores the statistical violation, the HS GPA has considerable meaning — just not for mathematics.  The HS GPA has long been used as the primary predictor of “first year success in college” (often measured by 1st-year GPA, another mistake).  Clearly, there is an element of “academic maturity or lack thereof” in the HS GPA measure.  A HS GPA below 3.0 seems to indicate insufficient academic maturity to succeed in a traditional college algebra course (see the graph above).

We know that mathematics forms a minor portion of the HS GPA for most students.  Although a small portion of students might have 50% of their HS GPA based on mathematically intense courses, the mode is probably closer to 20%.  Therefore, it is not surprising that the HS GPA is not a strong indicator of readiness for a given college mathematics course.

My college has recently implemented a policy to allow students with a HS GPA 2.6 or higher to enroll in our quantitative reasoning course, regardless of any math measures.  The first semester of data indicates that there may be a problem with this … about a third of these students passed the course, compared to the overall pass rate of about 75%.

I suggest that the meaning of the HS GPA is that the measure can identify students at risk, who perhaps should not be placed in college level math courses even if their test scores qualify them. In some cases, “co-requisite remediation” might be appropriate; in others, stand-alone developmental mathematics courses are more appropriate.  My conjecture is that this scheme would support student success:

  • Qualifying test score with HS GPA > 3.00, place into college mathematics
  • Qualifying test score with 2.6 ≤ HS GPA < 3.0, place into co-requisite if available, developmental if not
  • Qualifying test score with HS GPA < 2.6, place into developmental math
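Stated as code, the conjecture looks like this; note that the boundary case of a GPA of exactly 3.0, and the handling of a non-qualifying test score, are my assumptions, since the bullets above leave them unstated:

```python
def place_student(qualifying_score: bool, hs_gpa: float) -> str:
    """Placement per the conjectured scheme above.

    Assumptions (not spelled out in the bullets): a GPA of exactly 3.0
    falls in the top band, and a non-qualifying test score places into
    developmental math regardless of GPA.
    """
    if not qualifying_score:
        return "developmental math"
    if hs_gpa >= 3.0:
        return "college mathematics"
    if hs_gpa >= 2.6:
        return "co-requisite (developmental if unavailable)"
    return "developmental math"

print(place_student(True, 3.4))   # college mathematics
print(place_student(True, 2.8))   # co-requisite (developmental if unavailable)
print(place_student(True, 2.2))   # developmental math
```

Writing the rules down this way also exposes the policy questions (boundary values, missing data) that a placement office would have to settle before implementation.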

This, of course, is not what the “policy influencers” want to hear (i.e., Complete College America and related groups).  They suggest that we can solve a problem by both ignoring prior learning of mathematics and applying bad statistics.  Our responsibility, as professionals, is to articulate a clear assessment based on evidence to support the success of our students in their college program.

 

Why Do Students Have to Take Math in College?

The multiple-measures and co-requisite trends (fads, if you will) continue to gain share in the market.  Results are generally positive, and more laws are passed limiting (or eliminating) remedial mathematics in colleges.  Given the talk on these issues, I have to wonder … why do we require students to take a mathematics course in college?

Clearly, I am not raising this question relative to STEM or STEM-ish programs that some students follow; their need for mathematics is clearly logical (though that experience needs to be more modern than it usually is).  These students normally proceed through some sequence of mathematics, whether 2 courses or 10.  No, the question is relative to programs or institutions which require one math course, usually a general education course.

Those general education math courses are often very close in rigor to high school courses common in the United States at this time; I’ll provide a specific rubric for that statement below.  “College Algebra”, the disaster that it is, happens to be pretty close to the algebra expectations in the Common Core standards; the details differ, but the level of expectation is very similar.  “Statistics”, at the intro level, is again similar to those expectations; some of the intro stat outcomes are even in the Common Core.  Liberal Arts math has topics not normally found in K-12 mathematics, but the level of rigor is generally quite low.  Quantitative Reasoning (QR) has some potential for exceeding the high school level, but most of our QR implementations are very low on rigor.  See https://dcmathpathways.org/resources/what-is-rigor-in-mathematics-really for a good discussion of ‘rigor’ as I use the word in this post.

Do we require a math course in college as a means to remediate the K-12 mathematics students “should have had”?  Or, do we require a math course in college in order to advance the student’s education beyond high school?

Those questions seem central to the process of considering those current trends.  The high school GPA, the cornerstone of most multiple measures, has a trivial correlation with mathematical abilities but a meaningful correlation to college success; if the college math course is essentially at the high school level, then using the GPA for placement is reasonable.  Co-requisite remediation can address missing skills but not a lack of rigor (in general); if the college math course is essentially at the high school level, there is little risk involved from using co-requisite remediation.

On the other hand, if we require a math course in order to extend the student’s education beyond high school, neither multiple measures nor co-requisite remediation will dramatically decrease the need for stand-alone remediation.  K-12 education does not work that effectively; prohibiting stand-alone remediation in college will punish students for a system failure.  Our ‘traditional’ math remediation involving three or more levels is also a punishment for students, and can not be justified.

I would like to believe that we are committed to a college education, not just a college credential.

Before we conclude that multiple-measures and/or co-requisite remediation “work”, we need to validate the rationale for requiring a math course in college for non-STEM students.  A key part of this rationale, in my view, is our community developing a deeper appreciation of the quantitative needs of all disciplines.  Few disciplines have been exempt from the radical increase in the use of quantitative methods, and this is a starting point for ‘why’ require a college math course — as well as the design of such courses.  Most of our current courses fail to meet the needs of our partner disciplines, which means getting more students to complete their math course will have a trivial impact on college success and on occupational success for our students.

If it is important to extend a student’s education beyond the K-12 level, then the ‘rigor’ of the learning is more important than the quantity of topics squeezed into a given course.  The discussion of rigor cited above is helpful but a bit vague.  Take a look at this taxonomy of learning outcomes:

[Table: learning-outcome grid, knowledge dimension (rows) by cognitive process dimension (columns)]
This grid is adapted from a document at “CELT” (Iowa State University; http://www.celt.iastate.edu/teaching/effective-teaching-practices/revised-blooms-taxonomy/), and is based on the “revised Bloom taxonomy”.  The revised taxonomy is a significant update published in 2001; one of the authors (Krathwohl) has an article explaining the update (see https://www.depauw.edu/files/resources/krathwohl.pdf ).  The verbs in each cell are meant to provide a basic understanding of what is intended.  [Note that the word “differentiate” is not the mathematical term :).]

Within the learning taxonomy, the columns represent process (as opposed to knowledge).  Those 6 categories are frequently clustered into “Low” (Remember, Understand, Apply) and “High level” (Analyze, Evaluate, Create); the order of abstraction is clear.  For the knowledge dimension (rows), the sequence is not as clear — though we know that ‘metacognitive’ is higher than the others, and ‘factual’ is the lowest.

In both K-12 mathematics and the college math courses listed above, most learning is clustered in the first 3 columns, with an emphasis on “interpret” and “calculate”.  A direct measure of rigor (“education”) is the proportion of learning outcomes in the high-level columns, with possible bonus points for outcomes in Metacognitive.  Too often, we have mistaken “problem complexity” for “rigor”; surviving 20 steps in a problem does not mean that the level of learning is any higher than for simple problems.  We need to focus on a system to ‘measure’ rigor, one that can justify the requirement of passing a math course in college.
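Once each outcome is tagged with a process category, the proposed measure (the proportion of outcomes in the high-level columns) is trivial to compute; the sample outcome list below is invented for illustration:

```python
# High-level process columns of the revised Bloom taxonomy,
# vs. the low-level cluster (Remember, Understand, Apply).
HIGH_LEVEL = {"Analyze", "Evaluate", "Create"}

def rigor_index(outcomes):
    """Proportion of learning outcomes classified at a high-level
    process category.  Input: (outcome text, Bloom category) pairs."""
    return sum(1 for _, level in outcomes if level in HIGH_LEVEL) / len(outcomes)

# Hypothetical outcome list for a QR course, tagged by process category.
qr_outcomes = [
    ("interpret a graph of a linear model", "Understand"),
    ("calculate a percent change", "Apply"),
    ("compare two financing options and justify a choice", "Evaluate"),
    ("critique a statistical claim in a news article", "Analyze"),
]
print(rigor_index(qr_outcomes))  # 0.5
```

The hard work, of course, is the classification itself, not the arithmetic; the index only makes the resulting judgment visible and comparable across courses.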

 
