Applied Scaling & Classification Techniques in Political Science using text data
(academic year 2022/23)
Syllabus
Course aims and objectives
Students will learn how to employ some widely discussed methods advanced in the literature to analyze political texts and to extract from them useful information for testing their own theories.
First Lecture
2/12/22 Theory: An introduction to text analytics
Reference texts (1; 2; 3):
- Grimmer, Justin, and Stewart, Brandon M. 2013. Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts. Political Analysis, 21(3): 267-297
- Benoit, Kenneth (2020). Text as data: An overview. In Luigi Curini and Robert Franzese (eds.), SAGE Handbook of Research Methods in Political Science & International Relations, London, Sage, chapter 26
- Grossman, Jonathan, and Ami Pedahzur (2020). Political Science and Big Data: Structured Data, Unstructured Data, and How to Use Them. Political Science Quarterly, 135(2): 225-257
2/12/22 Lab class: An introduction to the Quanteda package (a) packages to install for Lab 1; scripts: Lab 1 script; datasets: a) Boston tweets sample; b) Inaugural US Presidential speeches sample (to open this file, please use the data compression tool WinRAR); EXTRA: a1) An explanation of cosine similarity; b1) An explanation of the chi-squared statistic; c1) how to deal with Japanese and Chinese languages; d1) sample of Japanese legislative speeches (to open these files, please use the data compression tool WinRAR)
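Cosine similarity, covered in the Lab 1 extras, measures how close two documents are by the angle between their term-frequency vectors, ignoring document length. A minimal sketch in pure Python (the vocabulary and counts below are invented for illustration; the lab itself uses quanteda's implementation):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two term-frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy document-feature rows over the vocabulary ["economy", "tax", "war"]
doc1 = [4, 2, 0]
doc2 = [2, 1, 0]   # same direction as doc1, just half as long
doc3 = [0, 0, 5]   # no shared terms with doc1

print(cosine_similarity(doc1, doc2))  # → 1.0 (up to floating point)
print(cosine_similarity(doc1, doc3))  # → 0.0
```

Because only the direction of the vectors matters, doc2 (a document using the same words in the same proportions) scores 1 even though it is shorter, which is exactly why cosine similarity is preferred over raw overlap for texts of different lengths.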
Second Lecture
9/12/22 Theory: From words to positions: supervised scaling models
Reference texts (1; 2; 3; 4):
- Laver, Michael, Kenneth Benoit, and John Garry. 2003. Extracting Policy Positions from Political Texts Using Words as Data. American Political Science Review, 97(2): 311-331
- Egerod, Benjamin C.K., and Robert Klemmensen (2020). Scaling Political Positions from Text: Assumptions, Methods and Pitfalls. In Luigi Curini and Robert Franzese (eds.), SAGE Handbook of Research Methods in Political Science & International Relations, London, Sage, chapter 27
- Martin, Lanny W., and Georg Vanberg. 2008. A robust transformation procedure for interpreting political text. Political Analysis, 16: 93-100
- Bräuninger, Thomas, and Nathalie Giger. 2018. Strategic Ambiguity of Party Positions in Multi-Party Competition. Political Science Research and Methods, 6(3): 527-548
9/12/22 Lab class: How to implement the Wordscores algorithm (a) packages to install for Lab 2; b) Lab 2 script: Wordscores; c) dataset for the first part of the Lab; EXTRA: a1) How Wordscores works)
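The core Wordscores logic (Laver, Benoit and Garry 2003) fits in a few lines: each word gets a score equal to its expected position given the reference texts, and a new ("virgin") text gets the frequency-weighted mean of its word scores. A pure-Python sketch, with invented word counts and reference positions (the lab uses quanteda's textmodel implementation instead):

```python
# Reference texts with known positions A_r and word counts F_wr
ref_positions = {"left": -1.0, "right": 1.0}
ref_counts = {
    "left":  {"welfare": 8, "tax": 2},
    "right": {"welfare": 2, "tax": 8},
}

# Step 1: score each word as pi_w = sum_r P(r | w) * A_r,
# where P(r | w) comes from the word's relative frequencies.
vocab = {w for counts in ref_counts.values() for w in counts}
word_scores = {}
for w in vocab:
    rel = {r: ref_counts[r].get(w, 0) / sum(ref_counts[r].values())
           for r in ref_counts}
    total = sum(rel.values())
    word_scores[w] = sum(rel[r] / total * ref_positions[r] for r in rel)

# Step 2: score a virgin text as the frequency-weighted mean word score.
virgin = {"welfare": 5, "tax": 5}
n = sum(virgin.values())
virgin_score = sum(virgin[w] / n * word_scores[w] for w in virgin)
print(round(virgin_score, 3))  # balanced vocabulary → 0.0, midway between refs
```

Here "welfare" scores -0.6 (it leans toward the left reference) and "tax" scores +0.6, so a text using both equally lands at 0.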
First assignment (due: 16 December 2022) (dataset for Assignment 1. To open this file, please use the data compression tool WinRAR)
Third Lecture
16/12/22 Theory: From words to positions: Unsupervised scaling models
Reference texts (1; 2; 3):
- Proksch, Sven-Oliver, and Slapin, Jonathan B. 2008. A Scaling Model for Estimating Time-Series Party Positions from Texts. American Journal of Political Science, 52(3): 705-722.
- Proksch, Sven-Oliver, and Slapin, Jonathan B. 2009. How to Avoid Pitfalls in Statistical Analysis of Political Texts: The Case of Germany. German Politics, 18(3): 323-344.
- Curini, Luigi, Airo Hino, and Atsushi Osaki. 2020. Intensity of government–opposition divide as measured through legislative speeches and what we can learn from it: Analyses of Japanese parliamentary debates, 1953–2013. Government and Opposition, 55(2): 184-201
16/12/22 Lab class: How to implement the Wordfish algorithm (scripts: a) packages to install for Lab 3; b) Lab 3 scripts (part I: Wordfish; part II: Twitter REST API); EXTRA: a1) estimating bootstrap confidence intervals in Wordfish; b1) slides about Wordshoal; c1) estimating Wordshoal)
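Wordfish (Proksch and Slapin 2008) models the count of word j in document i as Poisson with rate exp(alpha_i + psi_j + beta_j * theta_i), where theta_i is the latent document position, alpha_i a document length effect, psi_j a word frequency effect, and beta_j the word's discrimination. A minimal sketch simulating counts from that data-generating process (all parameter values are invented; estimation, which inverts this process to recover theta, is what the lab's quanteda routine does):

```python
import math
import random

random.seed(42)

def rpois(lam):
    """Draw one Poisson variate (Knuth's multiplication method)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Invented parameters for 3 documents and 2 words
theta = [-1.0, 0.0, 1.0]   # latent document positions
alpha = [0.0, 0.1, -0.1]   # document fixed effects (length)
psi   = [1.0, 1.0]         # word fixed effects (baseline frequency)
beta  = [1.5, -1.5]        # word 0 signals one pole, word 1 the other

# Simulated document-feature matrix of word counts
counts = [[rpois(math.exp(alpha[i] + psi[j] + beta[j] * theta[i]))
           for j in range(len(psi))]
          for i in range(len(theta))]
```

In expectation, word 0 is most frequent in the theta = 1 document and word 1 in the theta = -1 document; it is this systematic separation that lets Wordfish estimate positions without any reference texts.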
Second assignment (due: 23 December 2022)
Fourth Lecture
23/12/22 Theory: From words to issues: unsupervised classification models
Reference text (1):
- Roberts, Margaret E., Brandon M. Stewart, Dustin Tingley, Christopher Lucas, Jetson Leder-Luis, Shana Kushner Gadarian, Bethany Albertson, and David G. Rand. 2014. Structural Topic Models for Open-Ended Survey Responses. American Journal of Political Science, 58(4): 1064-1082
23/12/22 Lab class: How to implement a Topic Model (scripts: a) packages to install; b) Lab 4 script (topic model); c) Twitter streaming API; d) Lab slides; e) dataset to use for Lab 4 - Topic Model: Guardian 2016; geo-data; EXTRA: a1) estimating a cluster model)
Third assignment (due: 6 January 2023)
Fifth Lecture
6/1/23 Theory: (Part 1): From words to issues: structural topic models; (Part 2): Dictionary models
Reference texts (1, 2):
- Roberts, Margaret E., Brandon M. Stewart, Dustin Tingley. 2014. STM: R Package for Structural Topic Models. Journal of Statistical Software
- Grimmer, Justin, and Stewart, Brandon M. 2013. Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts. Political Analysis, 21(3): 267-297
6/1/23 Lab class: How to implement a Structural Topic Model and dictionary models (scripts: a) packages to install; b) Lab 5 scripts (part I: STM; part II: dictionaries; part III: dictionaries and Twitter); datasets: a) NYT; b) data for topical content analysis; EXTRA: a1) converting an external dictionary to Quanteda; b1) split-half reliability test)
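At their core, dictionary models classify a text by counting matches against pre-defined category word lists. A minimal sketch in pure Python (the dictionary entries and example sentence are invented; the lab works with quanteda's dictionary tools and established dictionaries instead):

```python
# A tiny, invented sentiment dictionary: category -> word list
dictionary = {
    "positive": {"good", "great", "excellent"},
    "negative": {"bad", "poor", "terrible"},
}

def dictionary_score(text, dictionary):
    """Return per-category match counts for a whitespace-tokenized text."""
    tokens = text.lower().split()
    return {cat: sum(t in words for t in tokens)
            for cat, words in dictionary.items()}

scores = dictionary_score("A great speech with a terrible ending", dictionary)
print(scores)  # → {'positive': 1, 'negative': 1}
```

The method's strength and weakness are the same thing: everything hinges on the word lists, which is why the lab's extras cover converting external dictionaries into Quanteda format and checking reliability with a split-half test.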
Fourth Assignment (due: 13 January 2023)
Sixth Lecture
13/1/23 Theory: (Part 1): From words to issues: semi-supervised classification models; (Part 2): An introduction to supervised classification models
Reference texts (1, 2, 3, 4):
- Watanabe, Kohei, and Yuan Zhou (2020). Theory-Driven Analysis of Large Corpora: Semisupervised Topic Classification of the UN Speeches. Social Science Computer Review, DOI: 10.1177/0894439320907027
- Eshima, Shusei, Kosuke Imai, and Tomoya Sasaki (2020). Keyword Assisted Topic Models. arXiv:2004.05964v1
- Grimmer, Justin, and Stewart, Brandon M. 2013. Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts. Political Analysis, 21(3): 267-297
- Curini, Luigi, and Robert Fahey. 2020. Sentiment Analysis. In: Luigi Curini and Robert Franzese (eds.), Sage Handbook of Research Methods in Political Science and International Relations, London: Sage, chapter 29
13/1/23 Lab class: How to implement a semi-supervised classification model (scripts: a) packages to install; b) Lab 6 script; EXTRA: a1) computing coherence and exclusivity with keyATM)
Fifth Assignment (due: 20 January 2023)
Seventh Lecture
20/1/23 Theory: From words to issues: supervised classification models
Reference text (1):
- Olivella, Santiago, and Kelsey Shoub (2020). Machine Learning in Political Science: Supervised Learning Models. In Luigi Curini and Robert Franzese (eds.), SAGE Handbook of Research Methods in Political Science & International Relations, London, Sage, chapter 56
20/1/23 Lab class: How to implement supervised classification models (scripts: a) packages to install; b) Lab 7 script; datasets: 1) disaster training-set; 2) disaster test-set; EXTRA: slides about the meaning of a compressed sparse matrix)
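The supervised workflow of Lab 7 (fit a classifier on a labelled training set, then predict labels for a test set) can be illustrated with a bare-bones multinomial Naive Bayes classifier. The training sentences and labels below are invented, loosely echoing the lab's disaster datasets; real labs would use an R machine-learning library rather than this hand-rolled sketch:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Fit multinomial Naive Bayes: count classes and words per class."""
    class_counts = Counter(labels)
    word_counts = defaultdict(Counter)
    vocab = set()
    for doc, y in zip(docs, labels):
        tokens = doc.lower().split()
        word_counts[y].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def predict_nb(doc, class_counts, word_counts, vocab):
    """Return the most probable class, with Laplace (+1) smoothing."""
    n = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for y, cy in class_counts.items():
        lp = math.log(cy / n)  # log prior
        denom = sum(word_counts[y].values()) + len(vocab)
        for t in doc.lower().split():
            lp += math.log((word_counts[y][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = y, lp
    return best

train_docs = ["flood damage in the city", "storm hits the coast",
              "parliament passes the budget", "minister defends the budget"]
train_labels = ["disaster", "disaster", "politics", "politics"]

model = train_nb(train_docs, train_labels)
print(predict_nb("flood hits the city", *model))  # → disaster
```

Laplace smoothing matters here: without the +1, any test word unseen in a class would zero out that class's probability entirely.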
Sixth Assignment (due: 27 January 2023) (datasets for Assignment 6: a) UK training set; b) UK test set)
Eighth Lecture
27/1/23 Theory: (Part 1): Cross validation; (Part 2): The importance of the training set
Reference texts (1; 2; 3):
- Olivella, Santiago, and Kelsey Shoub (2020). Machine Learning in Political Science: Supervised Learning Models. In Luigi Curini and Robert Franzese (eds.), SAGE Handbook of Research Methods in Political Science & International Relations, London, Sage, chapter 56
- Barberá, Pablo et al. (2020). Automated Text Classification of News Articles: A Practical Guide. Political Analysis, DOI: 10.1017/pan.2020
- Curini, Luigi, and Robert Fahey. 2020. Sentiment Analysis. In: Luigi Curini and Robert Franzese (eds.), Sage Handbook of Research Methods in Political Science and International Relations, London: Sage, chapter 29
27/1/23 Lab class: How to apply k-fold cross validation (scripts: a) package to install; b) Lab 8 script (part A); c) Lab 8 script (part B); d) Lab 8 script (part C); e) Lab 8 script (part D); f) first training-set for the lab; g) second training-set for the lab)
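The k-fold logic of Lab 8 needs no modelling library to demonstrate: partition the data into k folds, hold each fold out once as a test set, train on the rest, and average a score across the k rounds. A pure-Python sketch (the "majority label" scorer below is an invented stand-in for a real classifier):

```python
def kfold_indices(n, k):
    """Partition indices 0..n-1 into k contiguous, near-equal folds."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds

def cross_validate(data, k, score_fn):
    """Hold out each fold in turn; average score_fn(train, test)."""
    folds = kfold_indices(len(data), k)
    scores = []
    for i, test_idx in enumerate(folds):
        test = [data[j] for j in test_idx]
        train = [data[j] for f in folds[:i] + folds[i + 1:] for j in f]
        scores.append(score_fn(train, test))
    return sum(scores) / k

# Stand-in scorer: fraction of test items matching the majority train label.
def majority_accuracy(train, test):
    majority = max(set(train), key=train.count)
    return sum(x == majority for x in test) / len(test)

labels = ["a", "a", "a", "b", "a", "a", "b", "a", "a", "a"]
print(cross_validate(labels, 5, majority_accuracy))  # → 0.8
```

Real applications usually shuffle (or stratify) before splitting so that each fold resembles the full dataset; the contiguous split above keeps the sketch short.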
Seventh Assignment (due: 6 February 2023)
Seminar
30/1/23 Theory: A practical introduction to word embedding techniques - and why social scientists should be interested in them
Reference texts (1, 2):
- Rodriguez, Pedro L., and Arthur Spirling (2022). Word Embeddings: What Works, What Doesn't, and How to Tell the Difference for Applied Research. Journal of Politics, 84(1): 101-115
- Wankmüller, S. (2022). Introduction to Neural Transfer Learning With Transformers for Social Science Text Analysis. Sociological Methods & Research https://doi.org/10.1177/00491241221134527