How to download complete XML records from PubMed and extract data


Introduction

My first PubMed script (An R Script to Automatically download PubMed Citation Counts By Year of Publication) extracted yearly counts for any number of search strings using PubMed’s E-utilities. Specifically, it uses the esearch function, which reports the number of hits for your search and/or the articles’ PMIDs. This method is very reliable and fast if you’re only interested in the number of hits for a given query. However, PubMed’s E-utilities offer a lot more than that, and in this article I will use some of those features to download complete article records in XML.

How it works

What’s cool about esearch is that you can tell it to save a history of the articles found by your query, and then use another function called efetch to download that history. This is done by adding &usehistory=y to your search, which will return XML like the following (in addition to some other tags):
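An esearch response with &usehistory=y contains, among other tags, something roughly like this (the values below are made up for illustration):

<Count>1488</Count>
<RetMax>20</RetMax>
<RetStart>0</RetStart>
<QueryKey>1</QueryKey>
<WebEnv>NCID_1_12345678_130.14.22.215_9001_1234567890_987654321</WebEnv>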

Once we have extracted the WebEnv string, we just tell PubMed’s efetch to send us the articles saved in WebEnv. There’s one complication, though: PubMed “only” allows us to fetch 10 000 articles in one go, so my code includes a loop that downloads the data in batches and pastes the pieces together into valid XML. The cutting and pasting is done with gsub, since the unparsed XML data is just a long string. It’s not the most beautiful solution, but it seems to work.
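A minimal sketch of that batching idea, assuming the RCurl package and illustrative function and argument names (the maintained code linked at the end of the post differs in the details):

library(RCurl)

fetch_pubmed_batches <- function(web_env, query_key, total, batch_size = 10000) {
  base <- "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
  out <- ""
  for (retstart in seq(0, total - 1, by = batch_size)) {
    url <- paste0(base, "?db=pubmed&retmode=xml",
                  "&WebEnv=", web_env, "&query_key=", query_key,
                  "&retstart=", retstart, "&retmax=", batch_size)
    batch <- getURL(url)
    if (retstart == 0) {
      # first batch: keep everything up to the closing </PubmedArticleSet> tag
      out <- gsub("</PubmedArticleSet>.*$", "", batch)
    } else {
      # later batches: strip the XML declaration, DOCTYPE and opening tag,
      # plus the closing tag, so the article records can be concatenated
      body <- gsub("^.*<PubmedArticleSet>", "", batch)
      body <- gsub("</PubmedArticleSet>.*$", "", body)
      out <- paste0(out, body)
    }
  }
  # close the root element again so the result parses as one valid document
  paste0(out, "</PubmedArticleSet>")
}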

Now that all the XML data is saved in one object, we just need to parse it and extract whatever PubMed field(s) we’re interested in. I’ve included a function that will parse the XML and extract journal counts, although you could use the same method to extract any field.
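As an illustration of the parsing step (not the exact extractJournal() from the post), journal counts could be pulled out along these lines with the XML and plyr packages:

library(XML)
library(plyr)

extract_journal_counts <- function(xml_string) {
  doc <- xmlTreeParse(xml_string, useInternalNodes = TRUE)
  # each article's journal abbreviation sits in MedlineCitation/Article/Journal/ISOAbbreviation
  journal <- xpathSApply(doc, "//MedlineCitation/Article/Journal/ISOAbbreviation", xmlValue)
  # count() from plyr returns a data frame with one row per journal and its frequency
  count(data.frame(journal = journal), "journal")
}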

One example run: Top 20 CBT journals in 2010, 2011 and all time

Figure: Top 20 Cognitive Behavior Therapy journals by PubMed citation count (2010, 2011 and all time).

These two graphs were created using the following three queries (notice that I use single quotes inside the queries). The script does not download different queries automatically for you, so I ran my three searches individually. The R code for searchPubmed() and extractJournal() is at the end of this article.
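The calls looked something like this (the search term below is only a placeholder; the point is the single quotes inside the double-quoted R string):

query <- "'cognitive behavior therapy' AND 2010[DP]"
pub.efetch <- searchPubmed(query)
cbt_2010 <- extractJournal()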

Reshaping the data and creating the plots

I needed to reshape my data a bit and combine it into one object before I used ggplot2 to make the graphs. I did it like this:
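Roughly, the idea was to keep the 20 most frequent journals from each search and stack the three results in long format; a sketch with illustrative object names:

top20 <- function(x, year) {
  x <- x[order(-x$freq), ][1:20, ]  # keep the 20 most frequent journals
  x$year <- year
  x
}

cbt_data <- rbind(top20(cbt_2010, "2010"),
                  top20(cbt_2011, "2011"),
                  top20(cbt_all, "All years"))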

ggplot2 code

Now that I have all my top 20 data in one object in long format, the ggplot2 code is pretty simple.
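A minimal sketch of that kind of plot (a flipped, faceted bar chart; the aesthetics and theme of the published graphs differ):

library(ggplot2)

ggplot(cbt_data, aes(x = reorder(journal, freq), y = freq)) +
  geom_bar(stat = "identity") +
  coord_flip() +
  facet_wrap(~ year, scales = "free") +
  labs(x = NULL, y = "Number of articles")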

Reliability of the method

To check the reliability of my method, I compared the number of extracted journals to the total number of hits. These are the numbers:

2010: 1487 / 1488 = 0.999328
2011: 1488 / 1493 = 0.996651
All years: 14345 / 14354 = 0.999373
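In code, each check is just the number of extracted journal records divided by the hit count reported by esearch; with the illustrative names used above, the 2010 check would be something like:

sum(cbt_2010$freq) / 1488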

Since the error is so low, I didn’t bother to check why some journals were left out. My guess is that they were missing from the original data as well.

The R code for searchPubmed() and extractJournal()

Update 2013-02-23: The script broke when the date in the DOCTYPE declaration was changed from 2012 to 2013. I've updated the code, and it should be working now.

Update 2013-08-17: Moved the script to GitHub and fixed the broken batch procedure. It should be more stable now.

Click here to get the source code
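For orientation, here is a hedged sketch of what a searchPubmed()-style helper can look like; the maintained version lives in the linked repository and its interface differs:

library(RCurl)
library(XML)

search_pubmed_sketch <- function(query) {
  url <- paste0("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
                "?db=pubmed&usehistory=y&term=",
                URLencode(query, reserved = TRUE))
  doc <- xmlTreeParse(getURL(url), useInternalNodes = TRUE)
  # return the hit count plus the history keys needed by efetch
  list(count     = as.numeric(xpathSApply(doc, "//Count", xmlValue)[1]),
       query_key = xpathSApply(doc, "//QueryKey", xmlValue),
       web_env   = xpathSApply(doc, "//WebEnv", xmlValue))
}

The count, query_key and web_env values can then be passed to an efetch loop such as the batching sketch earlier in the post.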


Written by Kristoffer Magnusson, a researcher in clinical psychology. You should follow him on Twitter and come hang out on the open science discord Git Gud Science.



Published April 27, 2012 (View on GitHub)


Archived Comments (19)

michel_moser 2016-10-22

Hi,
Great functions and code.
I am trying to run them but get some parsing errors saying the XML file which gets downloaded is malformed (or at least that's how I interpret the error messages).
Could anyone help me with this?

code:

pubmed_count("Petunia AND genome")
pubmed_get("Petunia AND genome", file = "petunia")

pubmed_timeline("pubmed_petunia", regex = "plos")

error:

error parsing attribute name
attributes construct error
Couldn't find end of Start Tag U line 144
error parsing attribute name
attributes construct error
Couldn't find end of Start Tag U line 144
error parsing attribute name
attributes construct error
Couldn't find end of Start Tag U line 1085
error parsing attribute name
attributes construct error
Couldn't find end of Start Tag U line 2008
error parsing attribute name
attributes construct error
..... (skipped some lines)
Error: 1: error parsing attribute name
2: attributes construct error
3: Couldn't find end of Start Tag U line 144
4: error parsing attribute name
5: attributes construct error
6: Couldn't find end of Start Tag U line 144
7: error parsing attribute name
8: attributes construct error
9: Couldn't find end of Start Tag U line 1085
10: error parsing attribute name
11: attributes construct error
12: Couldn't find end of Start Tag U line 2008
13: error parsing attribute name
14: attributes construct error
15: Couldn't find end of Start Tag U line 2008
16: error parsing attribute name
17: attributes construct error
18: Couldn't find end of Start Tag U line 6721
19: error parsing attribute name
20: attributes construct error
21: Couldn't find end of Start Tag U line 6722
22: error parsing attribute name
23: attributes construct error
24: Couldn't find end of Start Tag U line 6722
25: error parsing attribute name
26: attributes construct error
27: Couldn't find end of Start Tag U line 17716
28:
Called from: (function (msg, ...)
{
if (length(grep("\\\n$", msg)) == 0)
paste(msg, "\n", sep = "")
if (immediate)
cat(msg)
if (length(msg) == 0) {
e = simpleError(paste(1:length(messages), messages, sep = ": ",
collapse = ""))
class(e) = c(class, class(e))
stop(e)
}
messages <<- c(messages, msg)
})(character(0))
Browse[1]> Q

Thank you very much!
michel

John Nicholas 2014-02-05

Hi,
Is it possible to do a search on a subject, and then parse the papers by author? I would like to create a rank order of authors by the number of papers they published. In other words, who are the most prominent authors on a particular subject?

Tobi 2013-12-10

For some reason the XML this generates doesn't export well. The code I'm using is below. Please explain what I'm doing wrong. Nothing I'm doing to pub.efetch will turn it into an actual data frame. I think that's the root of the problem. Thanks!

2012=ldply(pub.efetch, data.frame)
write.xml(2012, file="Mazumdar2012.xml")

Fr. 2013-07-20

Hm, code got garbled. Here it is.

Fr. 2013-07-20

I am afraid the code still breaks at extractJournal(). This is what I get:

> # Get data for 2011
> query <- "..."
> pub.efetch <- searchPubmed(query)
> cbt_2011 <- extractJournal()
> traceback()
5: stop(e)
4: (function (msg, ...)
{
if (length(grep("\\\n$", msg)) == 0)
paste(msg, "\n", sep = "")
if (immediate)
cat(msg)
if (length(msg) == 0) {
e = simpleError(paste(1:length(messages), messages, sep = ": ",
collapse = ""))
class(e) = c(class, class(e))
stop(e)
}
messages <<- c(messages, msg)
})(character(0))
3: .Call("RS_XML_ParseTree", as.character(file), handlers, as.logical(ignoreBlanks),
as.logical(replaceEntities), as.logical(asText), as.logical(trim),
as.logical(validate), as.logical(getDTD), as.logical(isURL),
as.logical(addAttributeNamespaces), as.logical(useInternalNodes),
as.logical(isHTML), as.logical(isSchema), as.logical(fullNamespaceInfo),
as.character(encoding), as.logical(useDotNames), xinclude,
error, addFinalizer, as.integer(options), PACKAGE = "XML")
2: xmlTreeParse(pub.efetch, useInternalNodes = TRUE) at pubmed.R#64
1: extractJournal()

Kristoffer Magnusson 2013-08-17

Thanks for posting the error, it's been fixed now. Sorry for the delay.

Fr. 2013-08-17

Wonderful, will try later today, thanks!

Fr. 2013-08-19

Works perfectly.

Fr. 2014-06-14

There was an issue with the value of RetMax in your code (it should be fixed instead of incremented). I fixed it and remixed the rest of the code to get a bunch of easy-to-use functions: see the pull request on GitHub. Hope you like it!

Kristoffer Magnusson 2014-06-16

Only had time to take a quick glance at your remix, but it looks great. Will look at it more closely as soon as possible and merge it :) Thanks!

Fr. 2014-06-17

Take your time, no rush at all. I've tried to lay some foundations for what could be a package organised around some core functions (such as pubmed_get to batch download records and pubmed_net to build coauthorship networks). This piece of software could be useful to have in R, so if you're interested, we can work on that. Let me know at the time of your choosing :)

Julien 2013-02-21

Hi,

Thank you for sharing those functions.

However

xml.data <- xmlTreeParse(pub.efetch, useInternalNodes = TRUE)

gives me this error:

XML declaration allowed only at the start of the document
StartTag: invalid element name

Do you know how to fix it?

regards,

Julien

Kristoffer Magnusson 2013-08-17

It should be working now.

Rad 2012-12-22

Hi Kristoffer,

I found a bug in your script (it may be a misconfiguration on my side, but I have all the needed libraries installed and I use the latest R version).

Here is the bug: after using cbt_2011 <- extractJournal() I get this error message

Error: 1: XML declaration allowed only at the start of the document
2: StartTag: invalid element name

Any idea ?

Thanks for your script and for sharing it, very helpful

Rad

Sugam Khetrapal 2015-11-20

Hi Rad,

Were you able to resolve this issue?

We are getting the same error message and do not know how to proceed.

Thanks,
Sugam

Scott Chamberlain 2012-05-28

Where does the function 'count' come from? Which package is it in? (see line 70 of your last script block)

Thanks, Scott Chamberlain

Kristoffer Magnusson 2012-05-28

Hi Scott,

'count' is from the 'plyr'-package.

Matthias 2012-04-30

Hey Kristoffer,

thank you for this inspiring code!
My R does not know the function count in line 69, but

journal <- data.frame(freq=sort(table(journal)))
journal$x <- factor(rownames(journal))

will do as a replacement.

Regards, Matthias

Kristoffer Magnusson 2012-04-30

Matthias,

I'd missed this line: require("plyr"). Now 'count' should be working. Thanks for commenting!