OPINION

BRENDA LOOPER: Polls apart

Not all created equal

I'm sure some people must think I spend the bulk of my time hanging out on Internet comment boards. I'll admit I spend more time checking them out than I should for the sake of my blood pressure, but it doesn't take long to get a read on what people are ticked off about.

What often has my eyes rolling is the schizophrenic treatment of polls by people who have very little understanding of what polls actually are and can do--if the numbers are positive for their guy, the pollsters can do no wrong, but if they're negative, all pollsters should be hanged by their pinky fingers.

True, some polling outfits are terrible, and perhaps pinky nooses should be prepared. Those would mostly be polls that, for example, use a sample that's too small or nonrepresentative, employ online-only opt-in polling that lets people weigh in multiple times, or use questions designed to lead to predetermined results. But the majority of old hands in the polling game are responsible and transparent, and perform a valuable service.

Yet so many pollsters are castigated for not reflecting what hyperpartisans think they should.

There's a reason Gallup dropped out of the prediction part of election polling in 2015, choosing instead to focus on how voters felt about issues. As Time's Daniel White wrote at the time: "When it comes to election polling, it's the best of times and the worst of times. On the positive side, there is more polling than ever from private universities, news media and small independent shops. Sites like HuffPost Pollster, RealClearPolitics and FiveThirtyEight also provide sophisticated analysis of what the polls mean. On the negative side, the glut of polls often doesn't add up to much, while problems with getting accurate results are starting to hurt the polling industry's reputation."

When your no-account brother-in-law starts a poll by talking to his beer buddies, of course it's going to make all polling look bad.

What many people get wrong about the polls in the last election is that most established polls were accurate within the margin of error on the popular vote, and the popular vote--not the Electoral College result--is what those national polls measure. To gauge the Electoral College count, Frank Newport of Gallup said, you would need to rely more on state-level polling in swing states, but that polling can have its own accuracy issues (sample size, quality, etc.). Trying to predict how people will vote can also be brought low by unexpected Election Day turnout, or by people who don't know or won't say who they're voting for.

In a close race like this last one, especially with two such unlikable candidates (yet still more likable than Congress or Vladimir Putin), you have to remember that polls, which capture how respondents feel at a particular moment in time, don't deal in certainties, but rather probabilities. As Bill Whalen, a research fellow at Stanford's Hoover Institution, said after the election, "Ultimately, pollsters are not Nostradamus."

Yeah, I know, hard to believe. Maybe that's why outfits like RealClearPolitics and FiveThirtyEight aggregate and average polls, and generally can be a bit more accurate. Of course, if you don't care about accuracy ... well, you're probably the people annoying me on those comment boards. My boy is glaring at you from cat heaven right now.

So how can you tell if a poll is good or bad? There's too much to be covered in this space, but most good polls have some things in common, including being transparent on methodology and questions when reporting results.

Writing on the Post Calvin blog, Ryan Struyk (a fellow nerd, and a data reporter for CNN) said the building blocks of good opinion polls include how participants are chosen: whether the poll randomly selects them (the preferred method) or they select themselves. Self-selection typically happens with online opt-in polls and is more likely to skew results. Whether interviews are live or automated is also a consideration: it's easier to lie to a machine, and because it's illegal in most cases to robo-dial cell phones, anyone who has only a cell phone wouldn't be able to participate.
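For the numerically inclined, here's a toy simulation of why that matters. It's my sketch, not Struyk's; the population size and opt-in rates are invented purely to illustrate the mechanism.

```python
import random

random.seed(0)

# Made-up population: exactly half support candidate A, but opponents
# are assumed (arbitrarily) to be three times as likely to opt in.
population = [
    {"supports_A": i < 50_000,
     "optin_propensity": 0.01 if i < 50_000 else 0.03}
    for i in range(100_000)
]

# Random selection: every person has an equal chance of being polled.
random_sample = random.sample(population, 1000)
print("random sample:", sum(p["supports_A"] for p in random_sample) / 1000)

# Self-selection: only the people motivated enough to opt in get counted.
optin = [p for p in population if random.random() < p["optin_propensity"]]
print("opt-in sample:", sum(p["supports_A"] for p in optin) / len(optin))
```

The random sample lands near the true 50 percent; the opt-in sample lands near 25 percent, because the more motivated side simply shows up more.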

One should also consider how phone numbers for the poll are picked--the best coverage comes, Struyk wrote, from random-digit dialing to blocks of known residential numbers. Polls that use only numbers from voter registration are more problematic; as we've seen from voter rolls in Arkansas and elsewhere, clearing out old and incorrect information can be a massive task.
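If you're curious what dialing to known blocks looks like, here's a minimal sketch. The prefixes are placeholders, not real exchange data; an actual random-digit-dial design draws its blocks from telephone exchange databases.

```python
import random

# Hypothetical residential blocks (area code plus exchange prefix).
KNOWN_BLOCKS = ["501-555", "479-555", "870-555"]

def rdd_sample(n_numbers, seed=None):
    """Append random four-digit suffixes to known residential blocks."""
    rng = random.Random(seed)
    return [
        f"{rng.choice(KNOWN_BLOCKS)}-{rng.randrange(10000):04d}"
        for _ in range(n_numbers)
    ]

print(rdd_sample(5, seed=42))
```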

Weighting of data is also sometimes necessary to correct for differences between the sample and census demographics. Struyk noted that really good polls use an "iterative weighting model" to weight individual participants, perhaps by age and gender. He cautioned against weighting by political partisanship.
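That kind of iterative weighting is often called raking. Here's a toy version of the idea; the four respondents and the census-style targets are made up, and a real poll would use far more people and categories. Each pass nudges everyone's weight until the sample's age and gender margins match the targets.

```python
# Invented mini-sample and targets, for illustration only.
respondents = [
    {"age": "18-44", "gender": "F"},
    {"age": "18-44", "gender": "M"},
    {"age": "45+",   "gender": "F"},
    {"age": "45+",   "gender": "F"},
]
targets = {
    "age":    {"18-44": 0.5, "45+": 0.5},
    "gender": {"F": 0.5, "M": 0.5},
}

weights = [1.0] * len(respondents)
for _ in range(50):                       # iterate until margins settle
    for dim, shares in targets.items():
        # Current weighted share of each category on this dimension.
        totals = {}
        for r, w in zip(respondents, weights):
            totals[r[dim]] = totals.get(r[dim], 0.0) + w
        total = sum(weights)
        # Scale each person's weight toward the target share.
        for i, r in enumerate(respondents):
            weights[i] *= shares[r[dim]] / (totals[r[dim]] / total)

print([round(w, 3) for w in weights])
```

The lone man in the sample ends up with a bigger weight, because men are underrepresented relative to the target.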

And about that margin of error, Struyk wrote: "You just need a few hundred people to get a pretty good picture of what the whole country looks like if you have good sampling--and that's probably why you've never been called for a poll. But the more people you ask, the more exact your answer is going to be. So the margin of error says, hey, we know we are pretty close."
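The arithmetic behind that quote is the standard margin-of-error formula for a simple random sample: roughly z times the square root of p(1-p)/n, with p = 0.5 as the worst case and z = 1.96 for 95 percent confidence. A quick check:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95 percent margin of error for a random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (400, 1000, 2500):
    print(f"n={n}: +/- {margin_of_error(n) * 100:.1f} points")
```

Four hundred respondents already gets you within about 5 points of the truth, and 1,000 gets you to about 3 points, which is why a few hundred people really can sketch the whole country.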

So the next time someone complains about a poll and says he's never been called, you'll know he has no idea how polls are done. Just save the eye-rolling until you get away.


Assistant Editor Brenda Looper is editor of the Voices page. Read her blog at blooper0223.wordpress.com. Email her at blooper@arkansasonline.com.

Editorial on 10/11/2017
