How to work with Google n-gram data sets in R using MySQL


Introduction

Google Ngram is a corpus of n-grams compiled from the Google Books data. Here I'm going to show how to analyze individual word counts from the Google 1-grams in R using MySQL. I've also written an R script to automatically extract and plot multiple word counts. To read more about the datasets, go to http://books.google.com/ngrams/datasets. Of course, one could just use the Google Ngram Viewer, but what's the fun in that? And it won't really give the output I'm looking for. Since the data are case sensitive, queries like "psychotherapy" and "Psychotherapy" give different results. Using R, one can combine match counts regardless of case and display the results in a more intuitive way using ggplot2. If you're not interested in the technical aspects of this post, you can jump straight to the end to view examples of different applications of the n-gram database.

Setting up MySQL

Get MySQL

First you need to install and set up MySQL on your system. I'm on Mac OS, and getting MySQL up and running was really straightforward; here's the documentation on how to do it.

Download the raw data

Go to http://books.google.com/ngrams/datasets and get the data files for the Google 1-grams (files 0-9). After you've downloaded the files, unzip them.

Import Google 1-gram into a MySQL database

Since I figured it would take a couple of hours to build the database, I first combined all 10 files into one CSV file using cat in Terminal:
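Something like this, assuming the unzipped files follow the 2009 naming pattern (adjust the filenames to whatever your download actually contains):

```bash
# Concatenate the ten 1-gram files into a single file
cat googlebooks-eng-all-1gram-20090715-*.csv > google_1grams_all.csv
```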

Since I'm not really well versed in MySQL, I used a free GUI (Sequel Pro) to create and import the data. I set up my database like this:
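In rough outline, the table could look like the sketch below. The table name is my own placeholder; the columns follow the raw files' layout of n-gram, year, match count, page count and volume count, and the raw files are tab-separated despite the .csv extension:

```sql
-- Placeholder schema for the combined 1-gram file (InnoDB, indexed on n_gram)
CREATE TABLE google_1grams (
  n_gram       VARCHAR(255),
  year         SMALLINT,
  match_count  INT,
  page_count   INT,
  volume_count INT,
  INDEX (n_gram)
) ENGINE = InnoDB;

-- If you prefer the MySQL prompt over a GUI, a bulk load could look like this
LOAD DATA LOCAL INFILE '/path/to/google_1grams_all.csv'
INTO TABLE google_1grams
FIELDS TERMINATED BY '\t';
```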

Then I imported the newly created CSV file into this structure. It took about 8 hours to build on my 2.4 GHz Core 2 Duo iMac from 2009, though I didn't time it exactly. The resulting database contained 470 million rows and landed at 24 GB using InnoDB.

Querying MySQL from R

I'm using the RMySQL package to get data from MySQL into R. I wrote a function that accepts search terms and fetches the matching results from my Google 1-gram database. I've masked my user name and password, so you'll have to change `user = "*", password = "*"` to your own credentials.
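A minimal sketch of such a function, assuming the placeholder table and column names from above (the WHERE clause is passed in as a string and is built in the next step):

```r
library(RMySQL)

# Connect to the local database and fetch all rows matching a WHERE clause.
# User, password and database name are placeholders; substitute your own.
GetNgrams <- function(where_clause) {
  con <- dbConnect(MySQL(), user = "*", password = "*",
                   dbname = "google_ngrams", host = "localhost")
  on.exit(dbDisconnect(con))
  sql <- paste0("SELECT n_gram, year, match_count FROM google_1grams WHERE ",
                where_clause)
  dbGetQuery(con, sql)
}
```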

Optimization

MySQL is well optimized to handle OR statements, and it's a lot faster to send all terms in a single query than to send a new query for each term. Consequently, I needed a function that writes out the MySQL query by combining the different search terms, like this:
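A sketch of such a builder, matching terms exactly against the n_gram column:

```r
# Build "n_gram = 'term1' OR n_gram = 'term2' OR ..." from a character vector
BuildQuery <- function(terms) {
  paste0("n_gram = '", terms, "'", collapse = " OR ")
}

BuildQuery(c("women", "Women"))
#> [1] "n_gram = 'women' OR n_gram = 'Women'"
```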

I then put this into the function that connects to MySQL. Using system.time(), I clocked the run time at about 15 minutes, roughly independent of how many search terms I used. I'd say that's pretty decent considering it's ~470 million rows of data hosted on an external FireWire 800 drive.
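Put together, a timed run might look like this (using the placeholder functions sketched above; actual run time depends on your hardware and indexing):

```r
system.time(
  raw_df <- GetNgrams(BuildQuery(c("women", "Women", "men", "Men")))
)
```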

Cleaning up the data

The MySQL query creates a data frame with "n_gram", "year" and "match_count". However, since the raw data are case sensitive, there are a lot of duplicates that differ only in their lower- and uppercase configurations. I therefore created a function to combine all 1-grams regardless of letter casing. Google's n-gram database is not perfect either, so sometimes a query fetches OCR errors. I had to add some code to get rid of those erroneous words; otherwise tolower() would throw an error and the script would stop.
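A minimal sketch of such a cleaning step, assuming the column names above (the letters-only filter for OCR artifacts is my own simplification):

```r
# Merge counts across casings and drop OCR artifacts.
CombineCases <- function(df) {
  # Keep only n-grams made of plain letters; malformed byte sequences
  # from OCR errors can otherwise make tolower() fail.
  df <- df[grepl("^[A-Za-z]+$", df$n_gram), ]
  df$n_gram <- tolower(df$n_gram)
  # Sum match counts across casings for each n-gram and year
  aggregate(match_count ~ n_gram + year, data = df, FUN = sum)
}
```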

Creating the final data frame

To create the final data frame, I run CreateDf() for each query term and combine the results into one data frame with ldply(). Lastly, I import data containing the total counts for each year, which lets me calculate relative frequencies for each n-gram.
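In outline, the assembly could look like the sketch below; the CreateDf() body is my own guess composed from the pieces above, and the total-counts filename and column names are placeholders:

```r
library(plyr)

# Per-term pipeline (sketch): fetch rows, then merge casings. With MySQL's
# default case-insensitive collation, a single term matches all its casings.
CreateDf <- function(term) {
  CombineCases(GetNgrams(BuildQuery(term)))
}

terms <- c("women", "men")
final_df <- ldply(terms, CreateDf)

# Total match counts per year (placeholder filename and column names)
totals <- read.table("google_1gram_total_counts.txt", header = TRUE)

final_df <- merge(final_df, totals, by = "year")
final_df$relative <- final_df$match_count / final_df$total_count
```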

Results

Without smoothing

This is what the raw data look like.
Ggplot2 of "women" and "men" from google 1-gram data sets. By Kristoffer Magnusson

With smoothing

Here I added a smoothing function and ran some more queries.

Women vs Men

Google N-gram "Women" & "Men" with a smoother. By Kristoffer Magnusson

Sweden vs Norway vs Denmark vs Finland

Google N-gram of "Sweden", "Norway", "Denmark" and "Finland". By Kristoffer Magnusson

Psychodynamic vs Psychoanalysis vs Psychoanalytic vs Psychotherapy

Google N-gram of "Psychodynamic", "Psychoanalysis", "Psychoanalytic" and "Psychotherapy". By Kristoffer Magnusson

Jesus vs Christ

Ggplot2 of "Jesus", "Christ" from google 1-gram data sets with a smoother. By Kristoffer Magnusson

Gay vs Lesbian vs Homosexual vs Heterosexual

Ggplot2 of "Gay", "Lesbian","Homosexual" and "Heterosexual" from google 1-gram data sets with a smoother. By Kristoffer Magnusson

Socialism vs Capitalism

Ggplot2 of "Socialism" and "Capitalism" from google 1-gram data sets with a smoother. By Kristoffer Magnusson

But isn’t this exactly like using Google Ngram Viewer except a lot sexier?

Well, yes, this is exactly like using Google Ngram Viewer, except with sexier graphics. However, you can do much more with this data than the Ngram Viewer allows. One could, for instance, aggregate the data with another dataset. For example, I could combine "socialism" and "capitalism" with data about which US political party was in power at the time. And if you have more computing power than I do, you could work with the 2- to 9-grams and generate much cooler data.

Ggplot2 of "Socialism" and "Capitalism" from google 1-gram data sets aggregated with data about US political parties. By Kristoffer Magnusson

The ggplot2 R code used here

Smoothed plots

I actually had to write a function to get direct.label() (from the directlabels package) to place the annotations at the end of the smoothed curve instead of at the end of the raw-data line.
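A minimal sketch of the smoothed plot, assuming the final data frame from above (columns year, n_gram and relative); "last.points" is one simple way of placing labels after the end of each curve:

```r
library(ggplot2)
library(directlabels)

PLOT <- ggplot(final_df, aes(year, relative, colour = n_gram)) +
  stat_smooth(se = FALSE, method = "loess", span = 0.2) +
  labs(x = "Year", y = "Relative frequency")

# Label each curve after its last point
direct.label(PLOT, method = "last.points")
```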

Aggregated plot

I added the party data using an example from the book ggplot2: Elegant Graphics for Data Analysis, which I simply added to the previous plot already saved in PLOT.
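Following that example, the addition could look roughly like this; 'parties' is an assumed data frame with the start year, end year and name of each governing party:

```r
# Shade the background by governing party; 'parties' is assumed to have
# columns start, end and party.
PLOT +
  geom_rect(aes(xmin = start, xmax = end, fill = party),
            ymin = -Inf, ymax = Inf, alpha = 0.2,
            data = parties, inherit.aes = FALSE)
```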


Written by Kristoffer Magnusson, a researcher in clinical psychology. You should follow him on Twitter and come hang out on the open science Discord Git Gud Science.



Published April 12, 2012

