Seminar

Text Analytics for Social Data Using DiscoverText & Sifter

Date
Tuesday, January 17, 2017, 12:00 a.m.
Event Sponsor
Stanford Institute for Research in the Social Sciences, Stanford University Libraries

Presenter: Stuart W. Shulman

Registration will close the day of the event, 1/17/17.

Three sessions will be offered: 

  • Session A: 10-11:30 a.m.
  • Session B: 1:30-2:50 p.m.
  • Session C: 3:15-4:45 p.m.

Location:

  • All sessions will be held in Green Library, Bing Wing, Social Science Resource Center (SSRC) Seminar Room #121A.

(Note: previously, two locations were listed for these sessions.)

About

Participate in this workshop to learn how to build custom machine classifiers for sifting social media data. The topics covered include how to:

  • construct precise social data fetch queries,
  • use Boolean search on resulting archives,
  • filter on metadata or other project attributes,
  • count and set aside duplicates and cluster near-duplicates,
  • crowdsource human coding,
  • measure inter-rater reliability,
  • adjudicate coder disagreements, and
  • build high-quality word-sense and topic disambiguation engines.
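One common measure of inter-rater reliability is Cohen's kappa, which corrects raw agreement for the agreement two coders would reach by chance. The workshop does not specify which statistic DiscoverText uses, so the sketch below is only a generic illustration with made-up labels:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labeling the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of items labeled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: expected overlap of the two label distributions.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of six tweets by two coders.
a = ["pos", "pos", "neg", "neg", "pos", "neg"]
b = ["pos", "neg", "neg", "neg", "pos", "neg"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

A kappa near 1 indicates strong agreement beyond chance; values near 0 suggest the coding scheme or instructions need refinement before training a classifier on the labels.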

DiscoverText is designed specifically for collecting and cleaning up messy Twitter and other text data streams. Participants will use basic research measurement tools to improve human and machine performance in classifying data over time. The workshop also covers how to reach and substantiate inferences using a theoretical and applied model informed by a decade of interdisciplinary, National Science Foundation-funded research on the text classification problem.

Participants will learn how to apply “CoderRank” in machine learning. Just as Google recognized that not all web pages are created equal, with links from some pages ranking higher than others, Dr. Shulman argues that not all human coders are created equal: on any task, the observations of some coders are invariably more accurate than those of others. The central idea of the workshop is that when training machines for text analysis, greater reliance should be placed on the input of the humans most likely to produce a valid observation. Texifter proposed a unique way to recursively validate, measure, and rank humans on trust and knowledge vectors, and called it CoderRank.
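The actual CoderRank method in DiscoverText is Texifter's own, but the underlying intuition of trusting some coders more than others can be sketched as trust-weighted adjudication: each coder's vote on a label counts in proportion to a trust score earned on prior validation tasks. The names and scores below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical trust scores (0-1) from each coder's past validated work.
trust = {"ann": 0.95, "bob": 0.40, "cat": 0.50}

# Each coder's label for a single document.
votes = {"ann": "relevant", "bob": "irrelevant", "cat": "irrelevant"}

def weighted_label(votes, trust):
    """Pick the label whose supporters carry the most cumulative trust."""
    totals = defaultdict(float)
    for coder, label in votes.items():
        totals[label] += trust[coder]
    return max(totals, key=totals.get)

print(weighted_label(votes, trust))  # → relevant
```

Here the single highly trusted coder (0.95) outweighs two weaker coders (0.40 + 0.50 = 0.90), whereas a simple majority vote would have gone the other way; that reversal is the point of weighting training input by coder reliability.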

Bio

Dr. Stuart W. Shulman is founder & CEO of Texifter. He was a Research Associate Professor of Political Science at the University of Massachusetts Amherst and the founding Director of the Qualitative Data Analysis Program (QDAP) at the University of Pittsburgh and at UMass Amherst. Dr. Shulman is Editor Emeritus of the Journal of Information Technology & Politics, the official journal of the Information Technology & Politics section of the American Political Science Association.