How to work with Google n-gram data sets in R using MySQL
Google Ngram is a corpus of n-grams compiled from the data behind Google Books. Here I'm going to show how to analyze individual word counts from the Google 1-grams in R using MySQL. I've also written an R script to automatically extract and plot multiple word counts. To read more about the data sets, go to http://books.google.com/ngrams/datasets. Of course, one could just use the Google Ngram Viewer, but what's the fun in that? And it won't really give the output I'm looking for: since it's case sensitive, queries like "psychotherapy" and "Psychotherapy" will give different results. Using R, one can combine match counts regardless of case and display the results in a more intuitive way with ggplot2. If you're not interested in the technical aspects of this post, you can jump straight to the end to see examples of different applications of the n-gram database.
First you need to install and set up MySQL on your system. I'm on Mac OS and it was really straightforward to get MySQL up and running. Here's the documentation on how to do it on Mac OS.
Go to http://books.google.com/ngrams/datasets and get the data files for the Google 1-grams (files 0-9). After you've downloaded the files, unzip them.
Since I figured it would take a couple of hours to build the database, I first combined all 10 files into one using cat in Terminal:
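Something along these lines does the job (the file names are assumptions based on Google's naming scheme, so adjust them to match the files you downloaded):

```sh
# Combine the ten unzipped 1-gram files into a single CSV
cat googlebooks-eng-all-1gram-*.csv > google-1grams-all.csv
```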
Since I'm not really well versed in working with MySQL, I used a free GUI (Sequel Pro) to create the database and import the data. I set up my table like this:
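Roughly as in the following sketch; the table and column names are assumptions, but the columns mirror the fields in the raw 1-gram files (word, year, match count, page count, volume count):

```sql
-- Sketch of the table structure; names and types are assumptions
CREATE TABLE one_grams (
  n_gram       VARCHAR(255) NOT NULL,
  year         SMALLINT     NOT NULL,
  match_count  INT          NOT NULL,
  page_count   INT          NOT NULL,
  volume_count INT          NOT NULL,
  INDEX idx_ngram (n_gram)
) ENGINE = InnoDB DEFAULT CHARSET = utf8;
```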
I then imported the newly created CSV file into this structure. It took about 8 hours to build on my 2.4 GHz Core 2 Duo iMac from 2009, though I didn't time it exactly. The resulting database contained 470 million rows and landed at 24 GB with InnoDB indexing.
I'm using the RMySQL package to get data from MySQL into R. I wrote a function that accepts search terms and fetches the matching rows from my Google 1-gram database; a sketch of it follows further down, after the query-building helper. I've masked my user name and password, so you have to change user = "*", password = "*" to your own credentials.
MySQL is well optimized to handle OR statements, and it's a lot faster to send all terms in the same query than to send a new query for each term. Consequently, I needed a function that writes out the MySQL query by combining the different search terms, like this:
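A sketch of such a helper, using the table and column names assumed above:

```r
# Build one SELECT statement that matches all search terms with OR
# (table and column names are assumptions)
CreateQuery <- function(terms) {
  conditions <- paste0("n_gram = '", terms, "'", collapse = " OR ")
  paste0("SELECT n_gram, year, match_count FROM one_grams WHERE ", conditions)
}

CreateQuery(c("psychotherapy", "Psychotherapy"))
# "SELECT n_gram, year, match_count FROM one_grams
#  WHERE n_gram = 'psychotherapy' OR n_gram = 'Psychotherapy'"
```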
I then put this into the function that connects to MySQL. Using system.time() I clocked the run time at about 15 minutes, roughly independent of how many search terms I used. I'd say that's pretty decent considering it's ~470 million rows of data hosted on an external FireWire 800 drive.
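The connecting function might look something like this; the function name, database name and connection details are placeholders:

```r
library(RMySQL)

# Fetch raw match counts for a set of search terms
FetchNgrams <- function(terms) {
  con <- dbConnect(MySQL(), user = "*", password = "*",
                   dbname = "google_ngrams", host = "localhost")
  on.exit(dbDisconnect(con))
  dbGetQuery(con, CreateQuery(terms))
}

system.time(raw_df <- FetchNgrams(c("psychotherapy", "Psychotherapy")))
```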
The MySQL query returns a data frame with "n_gram", "year" and "match_count". However, since the raw data is case sensitive, there are a lot of duplicates that differ only in lower- and uppercase lettering. I therefore wrote a function to combine all 1-grams regardless of letter casing. Google's n-gram data is not perfect either, so queries sometimes fetch OCR errors, and I had to add some code to get rid of those erroneous words; otherwise tolower() would throw an error and stop the script.
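A minimal sketch of that clean-up step (the original function isn't reproduced here, so the name and the exact filtering rule are assumptions):

```r
# Combine match counts across lower-/uppercase variants of the same word
CombineCases <- function(df) {
  # Drop n-grams containing anything but plain letters, apostrophes or hyphens;
  # these are typically OCR errors and can make tolower() fail
  df <- df[grepl("^[A-Za-z'-]+$", df$n_gram), ]
  df$n_gram <- tolower(df$n_gram)
  # Sum the match counts per word and year, regardless of the original casing
  aggregate(match_count ~ n_gram + year, data = df, FUN = sum)
}
```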
To create the final data frame I run CreateDf() for each query term and combine the results into one data frame with ldply(). Lastly, I import data containing the total counts for each year, which lets me calculate relative values for each n-gram.
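Put together, that step looks roughly like this (the search terms and the totals file name are assumptions):

```r
library(plyr)

# One CreateDf() call per term, stacked into a single data frame with ldply()
terms <- c("psychotherapy", "psychoanalysis")
ngrams <- ldply(terms, CreateDf)

# Total 1-gram counts per year, used to turn match counts into relative frequencies
totals <- read.csv("google-1gram-total-counts.csv")  # assumed columns: year, total_count
ngrams <- merge(ngrams, totals, by = "year")
ngrams$relative <- ngrams$match_count / ngrams$total_count
```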
This is what the raw data look like.
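A minimal ggplot2 call along these lines draws it (the aesthetic choices are assumptions):

```r
library(ggplot2)

# Relative frequency of each term over time, one coloured line per 1-gram
ggplot(ngrams, aes(x = year, y = relative, colour = n_gram)) +
  geom_line()
```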
Here I added a smoothing function and ran some more queries.
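Assuming geom_smooth() as the smoother (a guess, since any smoothing function could be swapped in), the plot can be built like this:

```r
# Raw lines toned down, with a smoothed curve layered on top; saved for reuse below
p <- ggplot(ngrams, aes(x = year, y = relative, colour = n_gram)) +
  geom_line(alpha = 0.3) +
  geom_smooth(se = FALSE)
p
```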
Well, yes, this is exactly like using the Google Ngram Viewer, except with sexier graphics. However, you can do much more with this data than with the Ngram Viewer. One could, for instance, aggregate it with another data set. For example, I could combine "socialism" and "capitalism" with data about which US political party was in power at the time. And if you have more computing power than I do, you could work with the 2- to 5-grams and generate much cooler data.
I actually had to write a function to get direct.label() to display the annotations after the smoothed curve instead of after the line of the raw data.
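As a starting point, a plain directlabels call on the saved plot looks like this (the "last.points" positioning method is an assumption, and this does not include that workaround for labelling the smoothed layer):

```r
library(directlabels)

# Label each curve directly at its end point instead of using a legend
direct.label(p, "last.points")
```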
I added the party data using an example from the book ggplot2: Elegant Graphics for Data Analysis, which I simply added to the plot syntax I had already saved.
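In that spirit, here is a sketch using the presidential data set that ships with ggplot2 (its coverage and party details differ from whatever party data was actually used):

```r
# Convert the term boundaries from dates to years to match the n-gram x-axis
parties <- transform(presidential,
                     start_year = as.numeric(format(start, "%Y")),
                     end_year   = as.numeric(format(end, "%Y")))

# Shade the background by the party in power, on top of the saved plot p
p + geom_rect(data = parties,
              aes(xmin = start_year, xmax = end_year, fill = party),
              ymin = -Inf, ymax = Inf, alpha = 0.2, inherit.aes = FALSE)
```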
Published April 12, 2012 (View on GitHub)