Twitter vote-monitoring effort mainly sound and fury

The ad hoc volunteer election-monitoring effort undertaken on Twitter last week had problems with imprecise methodology and random trolls, so much so that one researcher calls the compiled data "a mess."

A three-page paper from Bob Conrad, a Nevada-based PR person, blogger and doctoral student, looks at last Tuesday's Twitter Vote Report -- a volunteer effort to give voters a place to describe what was happening at the polls on election day. The project allowed observers to send reports in via Twitter, text or phone, with special apps available for the iPhone and Android.

The Twitter Vote Report coordination effort was pretty impressive, especially for a project that started about a month before the election itself, and the theory that crowd-sourcing works was powerfully proven Tuesday night by poll analysis site FiveThirtyEight.com. But a lot depends on how you work that crowd, and Conrad's report (PDF available here) indicates that the gatekeepers in charge of deciding which reports were to be publicly posted were acting, for want of a better word, whimsically.

The gatekeepers decided which tweets were posted to the Vote Report site. If necessary, they could add a hashtag that clarified the intent of a post -- #wait or #registration, for instance. They could also "dismiss" posts so they wouldn't show up in the live stream of comments, especially if they were off-topic -- though deciding what constituted an off-topic post was largely a gatekeeper judgment call. (Which explains how posts like "@DwayneH dude I love egg salad sandwiches, but liquor store is scary. downtown scarier, even. best of luck. #votereport" made it to the service.) "Dismissing" data is perhaps useful in a public free-for-all system such as the project was conceived to be, but the thought does tend to make grown statisticians weepy.
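Conrad's paper doesn't describe the project's actual moderation tooling, but the workflow it describes amounts to a manual filter. A minimal Python sketch of that triage logic might look like the following; the off-topic keyword heuristic is an assumption for illustration, not the project's real criteria:

# Illustrative sketch only -- not the Twitter Vote Report's actual tools.
# The clarifying hashtags (#wait, #registration) come from the article;
# the off-topic keywords are assumed purely for demonstration.

RELEVANT_TAGS = {"#wait", "#registration"}
OFF_TOPIC_HINTS = ("sandwich", "liquor")  # assumed heuristic

def triage(tweet: str) -> str:
    """Return 'dismiss', 'approve', or 'approve-with-tag' for an incoming report."""
    text = tweet.lower()
    if any(hint in text for hint in OFF_TOPIC_HINTS):
        return "dismiss"            # keep off-topic posts out of the live stream
    if any(tag in text for tag in RELEVANT_TAGS):
        return "approve"            # already carries a clarifying hashtag
    return "approve-with-tag"       # gatekeeper adds e.g. #wait before posting

if __name__ == "__main__":
    for t in ("Line around the block at my precinct #votereport #wait",
              "dude I love egg salad sandwiches #votereport"):
        print(triage(t), "->", t)

The point of the sketch is simply that each step -- dismiss, approve, or approve and re-tag -- was a human judgment call, which is where Conrad's complaints about consistency come in.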

Conrad's evaluation of the project found not only that participation was far too low (about 11,000 voters out of a potential 110 million-plus, roughly one in ten thousand) to provide robust data, but also that the ratio of relevant to irrelevant approved tweets was so poor (only 36.1% were relevant under the project's stated rules) that the only conclusion Conrad could draw was that "the data, and their criteria for acceptance, are essentially a mess."

And the users themselves, when they weren't improperly tagging their tweets, were as whimsical as the gatekeepers. Conrad found that in his home state of Nevada, which had 72 participants, just three users accounted for over two-thirds of the published tweets.

Some participants weren't playing well at all; Twitter user Nico laughed that "Haha, @DwayneH got caught on trolling the #votereport." Add to that the variation in reportage -- the 0-to-100 rating system for "quality of experience" comes to mind -- and the stage was set for something very different from the precise, sabermetrics-inspired breakdowns of FiveThirtyEight.com.

"Had the Twitter Voter Report followed clearly specified criteria for
posting reports and worked with a stronger chain of command in its
organizational structure, it is likely many of these problems would not have occurred," Conrad said in an e-mailed statement.

It's unclear whether the project (or, for that matter, Twitter itself) will be a going concern by the time the 2010 and 2012 elections roll around. But on the whole, political junkies have to be pleased that the election itself went off more smoothly than did the Twitter-based effort to monitor it.

