Curious about what went wrong with the polls this election season? On Tuesday evening, Bwog Staff Writer Lexie Lehmann attended Columbia Data and Society Task Force’s event, “Data, Polling, the Media and Democracy: A Panel Discussion of Election 2016”. Here are the highlights from the forum.
On Tuesday, Columbia’s Data and Society Task Force, sponsored by the Data Science Institute, presented a panel discussion of this year’s election in the Rotunda of Low Library. In the days leading up to the event, my Facebook buzzed with posts by people trying to snag tickets and weasel their way in. For those who, like myself, kept up obsessively and painstakingly with this year’s election, the event boasted a big name: Nate Silver, founder and Editor in Chief of FiveThirtyEight. If you haven’t heard of it, FiveThirtyEight is a website that provides data-driven and polling-based coverage of politics, sports, economics, and other popular topics. Since correctly predicting the 2008 presidential election results in 49 of the 50 states, FiveThirtyEight has been lauded as one of the most accurate and comprehensive polling resources available. Yet on that fateful day of November 8, 2016, Nate’s estimate that Hillary Clinton had a 71.4% chance of winning let the entire country down. The polls, it seemed, had failed us.
To say that I was eager to hear what Nate had to say would be an understatement, and I don’t think I was the only person in the Rotunda with those emotions. I turned up to the 5:30pm event 25 minutes early (to get a good spot!!), and the room was already half full. Everyone was bustling in their seats; my friends and my equally eager mother texted me demanding updates. Personally, I was anxious to hear answers. In the days leading up to the election, I refreshed Nate’s estimates every morning and every night. I relied on FiveThirtyEight’s insight to give me a broader perspective on the nation’s leanings, beyond my Facebook feed and the Columbia bubble. I was devastated when his projections failed.
The panel also boasted other impressive speakers. Emily Bell of Columbia’s Journalism School is a leading thinker, commentator, and strategist in digital journalism. In the past, she has served as the editor-in-chief of Guardian News and Media as well as their director of digital content. Robert Shapiro of Columbia’s Political Science department is an expert in American politics. He has completed groundbreaking research on public opinion, policy making, political leadership, and the applications of statistical methods. Truly, Nate Silver was not in poor company. The discussion was moderated by Ester Fuchs, director of the Urban and Social Policy Program at Columbia’s SIPA.
The program began with opening remarks from David Madigan, Executive Vice President for Arts and Sciences and a Professor of Statistics at Columbia. Afterward, Fuchs asked the question looming on everyone’s minds: So what happened with the election? Why did the polls seem so wrong? Nate answered first, strongly on the defensive. He explained that with any poll, the predicted outcome still carries a large range of uncertainty. From the beginning, FiveThirtyEight saw the campaign differently: it wasn’t a matter of making bets and correctly picking a winner, but rather of correctly interpreting the available polling information to accurately assess the odds. By that measure, FiveThirtyEight succeeded; the actual results were only narrowly off its estimates, yet unfortunately the points it missed occurred in the swing states where Hillary Clinton had the most to lose. Ultimately, according to Silver, these errors were the product of relatively technical problems and a slightly overconfident model.
Instead, Silver redirected the question: Why did people think the polls were so certain? He criticized both camps for being too decisive with their rhetoric too early on, referencing a New York Times article he read that referred to Clinton’s campaign team as her “administration-in-waiting”. He also faulted the media for not reporting the polls accurately, arguing that the general public requires a basic literacy in statistics to understand what a 71.4% chance of winning does and does not mean. Without that context, the narrative of Hillary being a clear favorite may have obscured the reality that there was still a lot of uncertainty and room for change. Moreover, the electoral college never favored Hillary Clinton; despite the gains she may have made among certain demographics in states like Texas and Pennsylvania, those leaps were not enough to swing electoral votes. In addition, Silver believes that the public and media may have mistaken the sheer volume of polling evidence for precision. “A two-point lead, even if called by 10 polls, is still only a two-point lead”, he stated.
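Silver’s point about volume versus precision can be illustrated with a toy simulation (my own sketch, with made-up numbers, not FiveThirtyEight’s actual model): averaging many polls shrinks each poll’s random sampling noise, but a systematic error shared by every poll survives the averaging untouched.

```python
import random
import statistics

random.seed(42)

TRUE_MARGIN = 2.0   # the candidate's "real" lead, in points (hypothetical)
SHARED_BIAS = -3.0  # systematic error common to every poll (hypothetical)
SAMPLING_SD = 3.0   # per-poll random sampling noise, in points
N_POLLS = 10

# Each poll = true margin + shared bias + independent sampling noise.
polls = [TRUE_MARGIN + SHARED_BIAS + random.gauss(0, SAMPLING_SD)
         for _ in range(N_POLLS)]

# Averaging 10 polls cuts the sampling noise by a factor of sqrt(10),
# but the shared bias is untouched: the average hovers near
# TRUE_MARGIN + SHARED_BIAS, not near TRUE_MARGIN.
average = statistics.mean(polls)
print(f"poll average: {average:.1f} points")
```

Under these invented numbers, ten polls agreeing on a lead tells you little more than one poll does about the error they all share — which is one way a consistent two-point lead can still evaporate on election night.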
Emily Bell was next to respond, answering the questions: How was this election different in terms of its media coverage? Do pollsters bear some responsibility for the election results, or is the media to blame? Bell was also quick to point out the “fake news” narrative of Hillary Clinton being the election’s presumed winner, yet she made sure to compliment how well, on the whole, she felt the election was covered. Nothing was spared from a story; all aspects of both candidates were reported on and discussed in full. In addition, Bell noted some extraordinary feats of investigative journalism achieved in this election cycle. She referenced one story she read that included personal interviews with West Virginia voters on why they voted for Barack Obama in 2012 but planned on voting for Trump this year. Nevertheless, Bell placed blame on news aggregators like Facebook and Twitter for disrupting the path between story and audience. “We need to learn to adapt to these new platforms”, she cautioned.
To Shapiro, Fuchs directed the question: What went wrong in terms of political science predictions and theories? And more specifically, what happened with white women? (referring to the large demographic of white women who chose Donald Trump over Hillary Clinton). Shapiro faulted state poll reporting, noting that while the national polls were reasonably accurate and frequently updated, certain state polls (such as Wisconsin’s) were few and far between. As a solution, Shapiro suggested increasing funding for polling, and more thoroughly examining where poll data comes from and how it is collected.
To the entire panel, Fuchs next presented the idea that the polls may have influenced the outcome of the election. Silver, once again, was quick to defend his data science. He argued that the polls themselves are not flawed, but that people crave certainty. People, he claimed, should embrace more uncertainty, a counterintuitive notion that he believes is only compounded by a lack of trust in journalism. The polls, he asserted, are an easy but incorrect target to blame.
Bell countered that a lack of trust in journalism is nothing new; rather, due to the multiplicity of news sources and the variety of biases affecting them, people now have more opportunity to place trust in the press they like and to distrust the press they do not. Therefore, she believes, journalists need to critically examine the ways in which they present and communicate data. Facebook, Twitter, and other social media platforms will need to play a key part in this discussion.
Next, Shapiro was asked, in a question posed by Fuchs from an undergraduate student, whether he felt that this election was an assault on the discipline of political science. The question referenced the Princeton University professor who gave Clinton a 99% chance of winning. Now it was Shapiro’s turn to be defensive, reminding the crowd that for all that political scientists got wrong in this election, they got several things right as well. It was well accepted that 2016 would be a good year for the Republican Party, as evidenced by the abundance of GOP candidates who emerged during the primaries and the increased availability of campaign funds in the wake of Citizens United. Silver added that there was a great number of undecided voters who were up in the air from the beginning, as was clear from the primaries. Accordingly, the stable methods that correctly predicted the 2008 and 2012 elections were not a sure-fire way of predicting 2016’s more volatile campaign season.
As the event wrapped up, each panelist made their final statements. Ultimately, my takeaway was that less data coverage is never the answer: the polls are not broken, and neither is journalism. Instead, the pre-existing conventions of election science, and the tools for analyzing it, have fundamentally changed. While polls provide essential insight into what the general public is thinking at any given time, there is still a margin of error, a realm of uncertainty. In response, journalists should report the polls more responsibly; rather than boasting numbers to promote a misleading narrative, the media should provide appropriate context for the data.
Despite leaving the event with more questions than answers, I still felt rather satisfied. I was comforted by a thought from Bell, who added that this election has resulted in an unprecedented, albeit necessary, period of critical self-examination. I agree. As much as we may seek answers, there may just as well be no answers. Thus, in our process of moving forward, we must use this time of self-examination to stay informed, to stay involved, and to continue to engage with these important thoughts.
image via bangordailynews.com