---
title: "Example: textual data visualization"
author: Kenneth Benoit, Adam Obeng, and Stefan Müller
output:
  html_document:
    toc: true
---
```{r, message=FALSE}
library("quanteda")
library("quanteda.textplots")
```
This vignette walks through various plot options available in **quanteda.textplots** through the `textplot_*` functions.
# Wordcloud
The frequency of features can be plotted as a wordcloud using `textplot_wordcloud()`.
```{r eval=TRUE, fig.width=8, fig.height=8}
dfm_inaug <- corpus_subset(data_corpus_inaugural, Year <= 1826) %>%
    tokens(remove_punct = TRUE) %>%
    tokens_remove(stopwords("english")) %>%
    dfm() %>%
    dfm_trim(min_termfreq = 10, verbose = FALSE)
set.seed(100)
textplot_wordcloud(dfm_inaug)
```
You can also plot a "comparison cloud", but this can only be done with fewer than eight documents:
```{r eval=TRUE, fig.width=10, fig.height=8}
corpus_subset(data_corpus_inaugural,
              President %in% c("Washington", "Jefferson", "Madison")) %>%
    tokens(remove_punct = TRUE) %>%
    tokens_remove(stopwords("english")) %>%
    dfm() %>%
    dfm_group(groups = President) %>%
    dfm_trim(min_termfreq = 5, verbose = FALSE) %>%
    textplot_wordcloud(comparison = TRUE)
```
Additional arguments, such as a minimum count or a colour palette, can be passed to `textplot_wordcloud()` to adjust the plot:
```{r eval=TRUE, fig.width=8, fig.height=8}
textplot_wordcloud(dfm_inaug, min_count = 10,
                   color = c("red", "pink", "green", "purple", "orange", "blue"))
```
# Lexical dispersion plot
Plotting a `kwic` object produces a lexical dispersion plot which allows us to visualize the occurrences of particular terms throughout the text. We call these "x-ray" plots due to their similarity to the data produced by [Amazon's "x-ray" feature for Kindle books](https://en.wikipedia.org/wiki/X-Ray_(Amazon_Kindle)).
```{r eval=TRUE, fig.width=8, fig.height=4}
data_corpus_inaugural_subset <-
    corpus_subset(data_corpus_inaugural, Year > 1949)
kwic(tokens(data_corpus_inaugural_subset), pattern = "american") %>%
    textplot_xray()
```
You can also pass multiple `kwic` objects to `textplot_xray()` to compare the dispersion of different terms:
```{r eval=TRUE, fig.width=8, fig.height=4}
textplot_xray(
    kwic(tokens(data_corpus_inaugural_subset), pattern = "american"),
    kwic(tokens(data_corpus_inaugural_subset), pattern = "people"),
    kwic(tokens(data_corpus_inaugural_subset), pattern = "communist")
)
```
If you're only plotting a single document, but with multiple keywords, then the keywords are displayed one below the other rather than side-by-side.
```{r eval=TRUE, fig.width=8, fig.height=1.25}
library("readtext")
data_char_mobydick <- as.character(readtext("http://www.gutenberg.org/cache/epub/2701/pg2701.txt"))
names(data_char_mobydick) <- "Moby Dick"
textplot_xray(
    kwic(tokens(data_char_mobydick), pattern = "whale"),
    kwic(tokens(data_char_mobydick), pattern = "ahab")
)
```
You might also have noticed that the x-axis scale is the absolute token index for single texts and relative token index when multiple texts are being compared. If you prefer, you can specify that you want an absolute scale:
```{r eval=TRUE, fig.width=8, fig.height=4}
toks <- tokens(data_corpus_inaugural_subset)
textplot_xray(
    kwic(toks, pattern = "american"),
    kwic(toks, pattern = "people"),
    kwic(toks, pattern = "communist"),
    scale = "absolute"
)
```
In this case, the texts may not have the same length, so the tokens that don't exist in a particular text are shaded in grey.
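As a quick check, you can compare the document lengths that drive these differences; a minimal sketch using quanteda's `ntoken()` on the tokens object created above:
```{r eval=TRUE}
# Token counts per speech; longer speeches extend further on the absolute scale
ntoken(toks)
```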
## Modifying lexical dispersion plots
The object returned is a ggplot object, which can be modified like any other **ggplot2** plot:
```{r eval=TRUE, fig.width=8, fig.height=4}
library("ggplot2")
theme_set(theme_bw())
g <- textplot_xray(
    kwic(toks, pattern = "american"),
    kwic(toks, pattern = "people"),
    kwic(toks, pattern = "communist")
)
g + aes(color = keyword) +
    scale_color_manual(values = c("blue", "red", "green")) +
    theme(legend.position = "none")
```
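Because the result is a standard ggplot object, it can also be written to a file; a minimal sketch using `ggplot2::ggsave()` (the file name here is just an example):
```{r eval=FALSE}
# Save the modified dispersion plot to a PNG file (example file name)
ggsave("xray_plot.png", g + aes(color = keyword), width = 8, height = 4)
```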
# Frequency plots
You can plot the frequency of the top features in a text using `textstat_frequency()` from the **quanteda.textstats** package.
```{r eval=TRUE, fig.width=8, fig.height=4}
library("quanteda.textstats")
features_dfm_inaug <- textstat_frequency(dfm_inaug, n = 100)

# Sort by reverse frequency order
features_dfm_inaug$feature <- with(features_dfm_inaug, reorder(feature, -frequency))

ggplot(features_dfm_inaug, aes(x = feature, y = frequency)) +
    geom_point() +
    theme(axis.text.x = element_text(angle = 90, hjust = 1))
```
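If you only need the raw counts rather than a full frequency table, quanteda's `topfeatures()` offers a quicker route; a minimal sketch plotting its output with **ggplot2**:
```{r eval=TRUE, fig.width=8, fig.height=4}
# topfeatures() returns a named numeric vector of the most frequent features
tf <- topfeatures(dfm_inaug, n = 20)
ggplot(data.frame(feature = reorder(names(tf), -tf), frequency = tf),
       aes(x = feature, y = frequency)) +
    geom_point() +
    theme(axis.text.x = element_text(angle = 90, hjust = 1))
```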
If you want to compare the frequency of a single term across different texts, you can also use `textstat_frequency()`, group the frequencies by speech, and extract the term.
```{r eval=TRUE, fig.width=8, fig.height=4}
# Get frequency grouped by president
freq_grouped <- textstat_frequency(dfm(tokens(data_corpus_inaugural_subset)),
                                   groups = data_corpus_inaugural_subset$President)

# Filter the term "american"
freq_american <- subset(freq_grouped, freq_grouped$feature %in% "american")

ggplot(freq_american, aes(x = group, y = frequency)) +
    geom_point() +
    scale_y_continuous(limits = c(0, 14), breaks = c(seq(0, 14, 2))) +
    xlab(NULL) +
    ylab("Frequency") +
    theme(axis.text.x = element_text(angle = 90, hjust = 1))
```
The above plots are raw frequency plots. For relative frequency plots (word count divided by the total number of words in each document), we need to weight the document-feature matrix first. To obtain the expected word frequency per 100 words, we multiply by 100.
```{r eval=TRUE, fig.width=8, fig.height=4}
dfm_rel_freq <- dfm_weight(dfm(tokens(data_corpus_inaugural_subset)), scheme = "prop") * 100
head(dfm_rel_freq)

rel_freq <- textstat_frequency(dfm_rel_freq, groups = dfm_rel_freq$President)

# Filter the term "american"
rel_freq_american <- subset(rel_freq, feature %in% "american")

ggplot(rel_freq_american, aes(x = group, y = frequency)) +
    geom_point() +
    scale_y_continuous(limits = c(0, 0.7), breaks = c(seq(0, 0.7, 0.1))) +
    xlab(NULL) +
    ylab("Relative frequency") +
    theme(axis.text.x = element_text(angle = 90, hjust = 1))
```
Finally, `textstat_frequency()` allows you to plot the most frequent words by group in terms of relative frequency.
```{r eval=TRUE, fig.width=8, fig.height=6}
dfm_weight_pres <- data_corpus_inaugural %>%
    corpus_subset(Year > 2000) %>%
    tokens(remove_punct = TRUE) %>%
    tokens_remove(stopwords("english")) %>%
    dfm() %>%
    dfm_weight(scheme = "prop")

# Calculate relative frequency by president
freq_weight <- textstat_frequency(dfm_weight_pres, n = 10,
                                  groups = dfm_weight_pres$President)

ggplot(data = freq_weight, aes(x = nrow(freq_weight):1, y = frequency)) +
    geom_point() +
    facet_wrap(~ group, scales = "free") +
    coord_flip() +
    scale_x_continuous(breaks = nrow(freq_weight):1,
                       labels = freq_weight$feature) +
    labs(x = NULL, y = "Relative frequency")
```
# Plot "keyness" in a target and reference group
If you want to compare the differential associations of keywords in a target and a reference group, you can calculate "keyness" with `textstat_keyness()` and plot the result with `textplot_keyness()`.
In this example, we compare the inaugural speech by Donald Trump with the speeches by Barack Obama.
```{r fig.width=8, fig.height=6}
# Only select speeches by Obama and Trump
pres_corpus <- corpus_subset(data_corpus_inaugural,
                             President %in% c("Obama", "Trump"))

# Create a dfm grouped by president
pres_dfm <- tokens(pres_corpus, remove_punct = TRUE) %>%
    tokens_remove(stopwords("english")) %>%
    tokens_group(groups = President) %>%
    dfm()

# Calculate keyness and set Trump as the target group
result_keyness <- textstat_keyness(pres_dfm, target = "Trump")

# Plot estimated word keyness
textplot_keyness(result_keyness)

# Plot without the reference text (in this case Obama)
textplot_keyness(result_keyness, show_reference = FALSE)
```
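The number of words shown and the colours used can also be adjusted; a minimal sketch using the `n` and `color` arguments of `textplot_keyness()` (see `?textplot_keyness` for the full argument list):
```{r fig.width=8, fig.height=6}
# Show only the 10 most "key" words per group and use a custom colour pair
textplot_keyness(result_keyness, n = 10, color = c("darkred", "grey70"))
```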
# Plot fitted scaling models
You can also plot fitted Wordscores (Laver et al., 2003) or Wordfish (Slapin and Proksch, 2008) scaling models.
## Wordscores
Wordscores is a scaling procedure for estimating policy positions or scores (Laver et al., 2003). Known scores are assigned to so-called reference texts in order to infer the positions of new documents ("virgin texts"). You can plot the positions of words (features) against the logged term frequency, or the positions of the documents.
```{r eval=TRUE, fig.width=8, fig.height=6}
library("quanteda.textmodels")
# Transform corpus to dfm
data(data_corpus_irishbudget2010, package = "quanteda.textmodels")
ie_dfm <- dfm(tokens(data_corpus_irishbudget2010))
# Set reference scores
refscores <- c(rep(NA, 4), 1, -1, rep(NA, 8))
# Predict Wordscores model
ws <- textmodel_wordscores(ie_dfm, y = refscores, smooth = 1)
# Plot estimated word positions (highlight words and print them in red)
textplot_scale1d(ws,
highlighted = c("minister", "have", "our", "budget"),
highlighted_color = "red")
# Get predictions
pred <- predict(ws, se.fit = TRUE)
# Plot estimated document positions and group by "party" variable
textplot_scale1d(pred, margin = "documents",
groups = docvars(data_corpus_irishbudget2010, "party"))
# Plot estimated document positions using the LBG transformation and group by "party" variable
pred_lbg <- predict(ws, se.fit = TRUE, rescaling = "lbg")
textplot_scale1d(pred_lbg, margin = "documents",
groups = docvars(data_corpus_irishbudget2010, "party"))
```
## Wordfish
Wordfish is a Poisson scaling model that estimates one-dimensional document positions using maximum likelihood (Slapin and Proksch, 2008). Both the estimated positions of words and the positions of the documents can be plotted.
```{r eval=TRUE, fig.width=8, fig.height=6}
# Estimate the Wordfish model
wf <- textmodel_wordfish(dfm(tokens(data_corpus_irishbudget2010)), dir = c(6, 5))

# Plot estimated word positions
textplot_scale1d(wf, margin = "features",
                 highlighted = c("government", "global", "children",
                                 "bank", "economy", "the", "citizenship",
                                 "productivity", "deficit"),
                 highlighted_color = "red")

# Plot estimated document positions
textplot_scale1d(wf, groups = data_corpus_irishbudget2010$party)
```
## Correspondence Analysis
You can also plot the estimated document positions from a correspondence analysis (Nenadic and Greenacre 2007).
```{r eval=TRUE, fig.width=8, fig.height=6}
# Transform corpus to dfm
ie_dfm <- dfm(tokens(data_corpus_irishbudget2010))

# Run correspondence analysis on the dfm
ca <- textmodel_ca(ie_dfm)

# Plot estimated positions and group by party
textplot_scale1d(ca, margin = "documents",
                 groups = docvars(data_corpus_irishbudget2010, "party"))
```
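`textplot_scale1d()` shows only the first dimension. If you want to inspect two dimensions at once, one option is to extract the document coordinates from the fitted model and plot them with **ggplot2**; a minimal sketch, assuming the fitted object stores the document coordinates in `ca$rowcoord` (as objects returned by `textmodel_ca()` typically do):
```{r eval=TRUE, fig.width=8, fig.height=6}
# Extract the first two document dimensions from the fitted CA model
ca_docs <- data.frame(dim1 = ca$rowcoord[, 1],
                      dim2 = ca$rowcoord[, 2],
                      party = docvars(data_corpus_irishbudget2010, "party"))

ggplot(ca_docs, aes(x = dim1, y = dim2, color = party)) +
    geom_point() +
    labs(x = "Dimension 1", y = "Dimension 2")
```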
# References
Laver, Michael, Kenneth Benoit, and John Garry. 2003. "[Extracting Policy Positions from Political Texts Using Words as Data](http://kenbenoit.net/pdfs/WORDSCORESAPSR.pdf)." _American Political Science Review_ 97(2): 311--331.

Nenadic, Oleg and Michael Greenacre. 2007. "[Correspondence analysis in R, with two- and three-dimensional graphics: The ca package](http://www.jstatsoft.org/v20/i03/)." _Journal of Statistical Software_ 20(3): 1--13.

Slapin, Jonathan and Sven-Oliver Proksch. 2008. "[A Scaling Model for Estimating Time-Series Party Positions from Texts](https://doi.org/10.1111/j.1540-5907.2008.00338.x)." _American Journal of Political Science_ 52(3): 705--722.