Math – Applications for Living VII

Our class, Math119 (Math – Applications for Living), is in the middle of its work on statistics.  The last class included finding a margin of error and a confidence interval for a poll … like those pesky political polls we are constantly hearing about.

So, here is the situation.  This month’s poll showed 63% of respondents supporting one candidate, based on results from 384 people; last month, the same poll reported 58% support for that candidate.  The article stated that the candidate is enjoying increased support … is that a valid conclusion?

As you know, this relates to two issues.  First, the standard error for a proportion like this is found with the statistical formula:

Standard error = √( p(1 − p) / n ) = √( (p − p²) / n )

Tests of significance are then based on z values for a normal distribution; the most common reference is z = 1.96 … creating a margin of error representing a 95% confidence interval.

In our class (Math119), we use a quick rule of thumb that combines these two ideas into one statement using only the sample size, and this rule of thumb works pretty well for the types of proportions normally seen in polls (p values between 10% and 90%).  The rule of thumb for the margin of error is just the reciprocal of the square root of the sample size:

Margin of error ≈ 1/√n

For the poll data, the sample sizes are both about 400.  The rule of thumb gives an estimate of 5%, which is very close to the actual value (approximately 4.7%).  In our class, we mention the more accurate formula, but we use only this rule of thumb.
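As a quick check, the rule of thumb can be compared against the more accurate formula.  This is a sketch in Python using the poll numbers quoted above; small differences from the 4.7% cited come from rounding and from the exact p used:

```python
import math

def exact_margin_of_error(p, n, z=1.96):
    """95% margin of error: z times the standard error of a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

def rule_of_thumb_margin(n):
    """Classroom rule of thumb: the reciprocal of the square root of n."""
    return 1 / math.sqrt(n)

p, n = 0.63, 384
print(round(exact_margin_of_error(p, n), 3))  # about 0.048
print(round(rule_of_thumb_margin(n), 3))      # about 0.051
```

For sample sizes near 400, the two values differ by only a few tenths of a percent, which is why the rule of thumb is good enough for classroom work.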

In this poll example, we create the confidence intervals … and conclude that there is no significant difference between the polls.  The confidence intervals overlap; even though the new poll shows a larger proportion, the increase is not large enough to be significant (with this sample size).

We also have talked about selection bias and other potential problems with polls, and have begun the process of thinking about the impact of sample sizes on things being ‘significant’ (whether they are meaningful or not).

 
Join Dev Math Revival on Facebook:

Education as Transformation

Much is made these days of ‘value-added’, including the use of student ‘gains’ on standardized tests in the evaluation of teachers.  In colleges, we have defined courses in terms of student learning outcomes … which might reflect a comparable view of higher education (similar to K-12 and its emphasis on ‘skills’).

“It must be remembered that the purpose of education is not to fill the minds of students with facts…it is to teach them to think.”  [Robert M. Hutchins]

What is the primary mission of colleges?  We all want our students to get better jobs, and would also like them to have a better quality of life.  Can these goals be achieved by the accumulation of discrete skills and learning outcomes?

Education is what remains after one has forgotten what one has learned in school. [Albert Einstein]  

Community colleges tend to serve the less-empowered segments of society.  People often cite mathematics as a key enabler of upward mobility, with some demographic studies to support this position.  However, these correlational studies can create a false impression of the processes involved.  The motto is not ‘algebra for all’ … the motto is ‘building capacity to learn and function’.

Education… has produced a vast population able to read but unable to distinguish what is worth reading. [G.M. Trevelyan]

Education should be a transformative experience.  Independent thinking, reasoning with a variety of methodologies (including quantitative), and clear communication should be evidence of this transformation.  In a community college, we cannot strive for the same level of transformation as a university or liberal arts college education; however, we provide the critical first steps for students along this path.

Education is the ability to listen to almost anything without losing your temper or your self-confidence. [Robert Frost]

In developmental mathematics, we have too often been content to provide little snippets of essentially useless knowledge — procedures to deal with a variety of calculations.  Even though it is not easy, and there is always a discomfort involved, our students are capable of much more.  Without reasoning and clear communication, these procedures will not benefit students (beyond a data bit that says they ‘passed math’).

Education is not filling a pail but the lighting of a fire. [William Butler Yeats]

As we work together to build a better model for developmental mathematics, we need to appreciate our place in the education of our students.  A good mathematics course produces a qualitative change in students.  We can measure some aspects of this process by examining the reasoning and communication processes that students use.  However, there is no sure-fire, objective measure that says a student has made progress.  We will develop better tools for this, including some focused on quantitative literacy and reasoning.  The challenges of measurement should not prevent us from keeping our proper focus; we need to work to make the important measurable.

Education is the key to unlock the golden door of freedom. [George Washington Carver]

The pre-algebra/introductory algebra/intermediate algebra model of developmental mathematics needs to be re-made into a valid curriculum.  We can include mathematics that is practical, and that is an improvement; however, it is not sufficient.  A central goal of developmental mathematics needs to be the improvement of quantitative reasoning and communication … a contribution which will enable our students to be educated, free people in a world facing diverse challenges.

 

Math – Applications for Living VI

Statistics in the news!!

The report (from the BBC) cites a study which says “55% of Syrians think President Assad should stay”.  The source is http://www.bbc.co.uk/news/magazine-17155349

This is a case where the journalist actually did a good job with the statistics.  The quote above comes from an online survey done with about 1000 respondents in the Middle East and North Africa.  The article describes several statistical problems with the conclusion.  Among them:

  • Syrians (generally) live in Syria … the report did not state how many respondents were actually Syrian or lived there.  One reference in the report allows an estimate of about 100.
  • One thousand is a modest number for a survey — this one covers an entire region.
  • One hundred is statistically insufficient to measure the opinions of a country.
  • Few Syrians have internet access; since it was an internet survey, the Syrians who did respond are not likely to be representative of all Syrians.

Coincidentally, our Math — Applications for Living class (Math119) covered a ‘rule of thumb’ yesterday for the margin of error for polls like this.  For most polls, the quick little formula 1/√n (the reciprocal of the square root of the sample size) is surprisingly accurate.  For the 100 Syrians actually included in the results, the margin of error is 10% … the true population parameter is most likely between 45% and 65%.
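In code, that arithmetic looks like this (a sketch; the 100 respondents is the estimate cited above):

```python
import math

n = 100     # estimated number of Syrian respondents
p = 0.55    # reported proportion saying Assad should stay
moe = 1 / math.sqrt(n)                # rule of thumb gives 0.10
low, high = p - moe, p + moe
print(round(low, 2), round(high, 2))  # 0.45 0.65
```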

The sad part of this story is that the original story on this survey did not provide this more complete context for the results.  Take a look at the BBC report for more information on that.  One can only hope that the bad use of statistics does not contribute to an already bad situation.


Placement Tests – Valid?

Almost all community colleges use a placement test to identify students who need a developmental course.  Are these tests sufficiently valid for this high-stakes usage?

A recent publication from the Community College Research Center (CCRC) at Columbia University reports on a research study examining this validity; the report is “Predicting Success in College: The Importance of Placement Tests and High School Transcripts” (CCRC Working Paper No. 42) and is available at  http://ccrc.tc.columbia.edu/DefaultFiles/SendFileToPublic.asp?ft=pdf&FilePath=c:\Websites\ccrc_tc_columbia_edu_documents\332_1030.pdf&fid=332_1030&aid=47&RID=1030&pf=Publication.asp?UID=1030

I’ve spent a little time looking through this study.  One data bit is creating quite a bit of interest … a statement that the two major tests (Compass, Accuplacer) have ‘severe error rates’ of 15% to 28%.  By ‘severe error’, they mean either of these situations:  (1) the placement test directs a student to a developmental course when the prediction is that they would actually pass the college-level course, or (2) the placement test directs a student to the college-level course when the prediction is that they would fail.

The methodology in the study begins with the assumption that the placement test score measures degrees of readiness, not just a ‘yes’ or ‘no’ (binary) result.  Using data from a state-wide community college system, the authors correlate the placement test scores with whether students actually passed the math course (either a developmental course or college level) to create a probability value.  Since the colleges involved did not generally allow students with scores below a cutoff to take the college level course, they extrapolated to estimate the probability below the cutoff; a similar approach was done for a probability of passing the developmental course for scores above the cutoff.  For each placement test, the study includes between 300 and 800 students.

Using these models for the probabilities, the authors then calculate the severe error rate cited above.  The values shown were for mathematics; the ‘English’ rates for severe errors were slightly higher (27% to 33%).

Separate from that severe error rate, the study showed the ‘accuracy rate’ for each test for the courses; these accuracy rates reflect the pass rates for those above the cutoff and the failure rate for those below the cutoff.  These values range from 50% to 60% in math (for receiving a C grade or better).

The research also examined the relationship of high school performance to both this placement question and to general college success, and they conclude that the high school GPA is the single best predictor — even for predicting who needs a developmental course.

Several things occur to me relative to this study.  First of all, any measurement has a standard error; in the case of Accuplacer, this standard error varies with the score, and for middle-value scores (like 60 to 80) it is about 10.  If a student scores 69 when the cutoff is 76, there is some chance that the score is on the ‘wrong side’ of the cutoff purely due to the standard error of the measure.  In my experience, this standard error accounts for something like 10% of what the authors call ‘severe error’.  The main way to minimize this source of error is to have repeated measures, such as having students take the placement test twice.
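This can be made concrete with a short sketch.  It is my own illustration, not a calculation from the study: assuming the measurement error is roughly normal with a standard error of 10, a student observing a 69 against a cutoff of 76 still has a noticeable chance of a true score at or above the cutoff:

```python
from statistics import NormalDist

observed, cutoff, sem = 69, 76, 10  # illustrative score, cutoff, standard error
# Treat the true score as roughly normal, centered on the observed score.
p_above_cutoff = 1 - NormalDist(mu=observed, sigma=sem).cdf(cutoff)
print(round(p_above_cutoff, 2))  # about 0.24
```

Under these assumptions, roughly a quarter of such students are misplaced by measurement error alone, which is consistent with the ~10% overall contribution suggested above once scores far from the cutoff are included.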

Another thought … the report does not identify the math courses involved, nor the cutoffs used.  Most results are given for “math 1” and “math 2”; the predictability of readiness is not uniformly distributed, and it is more difficult when the two levels of courses carry different expectations (reasoning, abstraction, problem solving, etc.).  Since the report does not identify which type of severe error contributes the most to the rate, it is possible that the cutoff itself contributes to the portion of the severe error rate that is beyond the standard error.

Though I doubt many of us would adopt it, the use of high school GPA as a placement measure seems both awkward and risky.  We would need to replicate this study in other settings (other states and regions) to see whether the same pattern exists.  Even if that result is validated, the use of a composite measure of prior learning raises issues of equity and fairness; applying it to individual students may produce varying results by student characteristics (even more than the placement tests do).

The other thought is that a hidden benefit in this report is a comparison of the two primary tests (Accuplacer, Compass) for various measures of validity.  For example, the Accuplacer accuracy rates in math were somewhat higher than those for Compass.

Overall, I do not see this study raising basic questions about our use of placement tests. 

 
