Month: April 2016

Fetching US census data with R: a short tutorial

In this short, self-contained post, I show how to download and tidy a cool data set of US census data. The post is meant to be the first of a series showcasing data scraping, wrangling, visualization and modelling.

library(acs)
library(magrittr)
library(dplyr)
library(stringr)

Step 1: Get your free API key

Here’s the link
Install your API key:

api.key.install('your-api-key-here')

Step 2: Select a survey and a table to explore.

There is an excellent variety of data you can play with!
Here’s the link

Step 3: Fetch the data.

In the example below, I’ve chosen to explore the table number B15001 in the 2010-2014 American Community Survey 5-Year Estimates, which contains data related to educational level, by age and gender.

First you need to call the geo.make function. The call below simply means “give me all states and counties”:

county_geo <- geo.make(state = '*', county = '*')

Then we actually get the data. It’s important to set the col.names parameter to “pretty” to get understandable variable names.

education_acs <- acs.fetch(geography = county_geo, table.number = 'B15001', 
          col.names = 'pretty', endyear = 2014, span = 5)

What you have at this point is an acs object.
Two slots are of particular interest to us. First, the @geography slot, which contains the state/county names in the $NAME attribute.
Let’s go ahead and extract those right away:

state_county <- education_acs@geography$NAME %>% str_split(",")

county <- state_county %>%
            sapply(`[[`, 1) %>%
            str_replace(' County', '')

state <- state_county %>%
            sapply(`[[`, 2)
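Small gotcha: splitting on the comma leaves a leading space in front of each state name, so you may want to trim it. Here is a toy illustration with two hard-coded names (str_trim comes from stringr, loaded above):

```r
library(stringr)

# toy version of the @geography$NAME values
name <- c('Autauga County, Alabama', 'Baldwin County, Alabama')

state <- sapply(str_split(name, ','), `[[`, 2)
state            # " Alabama" " Alabama"  -- note the leading space
str_trim(state)  # "Alabama" "Alabama"
```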

The second slot of interest is @estimate, which contains the actual census values.
It is actually a matrix:

str(education_acs@estimate)
##  num [1:3220, 1:83] 40922 148019 21257 17750 43768 ...
##  - attr(*, 'dimnames')=List of 2
##   ..$ : chr [1:3220] 'Autauga County, Alabama' 'Baldwin County, Alabama' 'Barbour County, Alabama' 'Bibb County, Alabama' ...
##   ..$ : chr [1:83] 'SEX BY AGE BY EDUCATIONAL ATTAINMENT FOR THE POPULATION 18 YEARS AND OVER: Total:' 'SEX BY AGE BY EDUCATIONAL ATTAINMENT FOR THE POPULATION 18 YEARS AND OVER: Male:' 'SEX BY AGE BY EDUCATIONAL ATTAINMENT FOR THE POPULATION 18 YEARS AND OVER: Male: 18 to 24 years:' 'SEX BY AGE BY EDUCATIONAL ATTAINMENT FOR THE POPULATION 18 YEARS AND OVER: Male: 18 to 24 years: Less than 9th grade' ...

Step 4: Tidy your data

As you can see, there is a separate column for every level of drilldown.
So we have a little bit of work to do in order to get a tidy dataset. Come on then, let’s get to it!

In the end, what we really want is a dataframe in long format, with the state and county variables, then one variable for the education level, one for the age group, one for the gender, and finally the census value.

Because there are so many columns, and also because in R we trust,
there is no way in hell we are going to do this manually.

You can check the code below if you’re curious; suffice it to say that the expand.grid base function was super useful!

df_education <- NULL

education <- c('Total', '< 9th grade', '< 12th grade', 'High school', 'College', 'Associate', 'Bachelor', 'Graduate')
age <- c('18-24', '25-34', '35-44', '45-64', '65+')
sex <- c('Male', 'Female')

columns <- c(3:42, 44:83)
df_exp <- expand.grid(education = education, age = age, sex = sex)

for (i in seq_along(columns)) {
  df_education <- rbind(
    df_education,
    data.frame(
      county    = county,
      state     = state,
      sex       = df_exp$sex[i],
      age       = df_exp$age[i],
      education = df_exp$education[i],
      value     = education_acs@estimate[, columns[i]]
    )
  )
}

I had to include the ‘Total’ level of education in the loop because of the way the columns of the matrix are enumerated, but I don’t actually want to keep it:

df_education <- df_education %>% filter(education != 'Total')

And guess what my friends… we now have a freakin’ cool dataset with 5 variables, 1 value and 225,400 observations:

head(df_education)
##    county    state  sex   age   education value
## 1 Autauga  Alabama Male 18-24 < 9th grade    36
## 2 Baldwin  Alabama Male 18-24 < 9th grade   172
## 3 Barbour  Alabama Male 18-24 < 9th grade   123
## 4    Bibb  Alabama Male 18-24 < 9th grade    68
## 5  Blount  Alabama Male 18-24 < 9th grade    39
## 6 Bullock  Alabama Male 18-24 < 9th grade     0

That’s it for today! We’re gonna keep it short & sweet. Our data is ready; we can take a break and come back later to play with it.

In the meantime, if you want to get a head start, or have any suggestions, please feel free to leave a comment.

Next time, we will do some visualizations using this dataset, and maybe some web scraping to merge interesting information onto it.

Come back soon!

The full, reproducible and ready-to-spin code can be found here.
Do remember to install your API key first!

USA – Ethnicity & Income

Following up on my previous post, and inspired by datausa.io, it hit me that I had a pretty cool dataset I could explore, thanks to the choroplethr package.

The dataset I’m referring to comes from the US 2013 Census data. Let’s take a look:

require(choroplethr)
require(dplyr)
require(magrittr)
require(ggplot2)
api.key.install('your-api-key-here')
demographics=get_county_demographics()
head(demographics)
##   region total_population percent_white percent_black percent_asian
## 1   1001            54907            76            18             1
## 2   1003           187114            83             9             1
## 3   1005            27321            46            46             0
## 4   1007            22754            75            22             0
## 5   1009            57623            88             1             0
## 6   1011            10746            22            71             0
##   percent_hispanic per_capita_income median_rent median_age
## 1                2             24571         668       37.5
## 2                4             26766         693       41.5
## 3                5             16829         382       38.3
## 4                2             17427         351       39.4
## 5                8             20730         403       39.6
## 6                6             18628         276       39.6

One thing in particular I was curious about: how can I show, on a map, how much a particular group of the population (e.g. the black population) is hit by poverty?

We do have a percentage of black population as well as a per-capita income for each county. Let us merge this data onto the regions data from the choroplethr package:

county_data  <- county.regions %>% left_join(demographics)
## Joining by: 'region'
head(county_data)
##   region county.fips.character county.name state.name state.fips.character
## 1   1001                 01001     autauga    alabama                   01
## 2   1003                 01003     baldwin    alabama                   01
## 3   1005                 01005     barbour    alabama                   01
## 4   1007                 01007        bibb    alabama                   01
## 5   1009                 01009      blount    alabama                   01
## 6   1011                 01011     bullock    alabama                   01
##   state.abb total_population percent_white percent_black percent_asian
## 1        AL            54907            76            18             1
## 2        AL           187114            83             9             1
## 3        AL            27321            46            46             0
## 4        AL            22754            75            22             0
## 5        AL            57623            88             1             0
## 6        AL            10746            22            71             0
##   percent_hispanic per_capita_income median_rent median_age
## 1                2             24571         668       37.5
## 2                4             26766         693       41.5
## 3                5             16829         382       38.3
## 4                2             17427         351       39.4
## 5                8             20730         403       39.6
## 6                6             18628         276       39.6

The idea will be to rely on the relatively unknown yet uber-cool harmonic mean. This mean has the property of being high only when both components (in my case I will only have two) are high.

Here’s the formula for the harmonic mean of two numbers:

harmonic_mean = function(x, y){
  (2*x*y)/(x+y)
}
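To see this property in action, here’s a quick toy comparison with the arithmetic mean: as soon as one component is low, the harmonic mean drops too.

```r
harmonic_mean <- function(x, y){
  (2*x*y)/(x+y)
}

harmonic_mean(0.9, 0.9)  # 0.9  -- both components high: the mean stays high
harmonic_mean(0.9, 0.1)  # 0.18 -- one low component drags it way down
mean(c(0.9, 0.1))        # 0.5  -- the arithmetic mean hides the low component
```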

So I will be using the harmonic mean of the percentage of black population and the per-capita income. But to make those measures play nice together, let us first transform them into ranks and map those ranks to the [0,1] interval:

county_data$percent_black_ranked <- rank(county_data$percent_black, ties.method='first')
county_data$percent_black_ranked <- county_data$percent_black_ranked / max(county_data$percent_black_ranked)

county_data$per_capita_income_ranked  <- rank(county_data$per_capita_income, ties.method='first')
county_data$per_capita_income_ranked <- 1 - county_data$per_capita_income_ranked / max(county_data$per_capita_income_ranked)

Notice that I took the complement of per-capita income rank to show poverty.

We are now ready to create our map:

county_data$value  <- harmonic_mean(county_data$percent_black_ranked, county_data$per_capita_income_ranked)

county_choropleth(county_data) + scale_fill_brewer(palette=3) + ggtitle('Map of (black % of population & poverty) index')

[Figure: map of the (black % of population & poverty) index]

The dark areas are where there is both a relatively high percentage of black population and a low per-capita income.

The same exercise with the hispanic population yields:

[Figure: map of the (Hispanic % of population & poverty) index]

Out of curiosity, let us look at white % of population and wealth:

county_data$percent_white_ranked <- rank(county_data$percent_white, ties.method='first')
county_data$percent_white_ranked <- county_data$percent_white_ranked / max(county_data$percent_white_ranked)

county_data$per_capita_income_ranked  <- rank(county_data$per_capita_income, ties.method='first')
county_data$per_capita_income_ranked <- county_data$per_capita_income_ranked / max(county_data$per_capita_income_ranked)
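The snippet above only computes the ranks; to actually draw this map, the same two steps as in the black-population case apply (a sketch, reusing harmonic_mean and county_choropleth from earlier, so it needs the county_data built above):

```r
county_data$value <- harmonic_mean(county_data$percent_white_ranked,
                                   county_data$per_capita_income_ranked)

county_choropleth(county_data) + scale_fill_brewer(palette = 3) +
  ggtitle('Map of (white % of population & wealth) index')
```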

[Figure: map of the (white % of population & wealth) index]

US 2016 elections

This post is my contribution to the challenge set up by Ari Lamstein. I wish I had more time to spend on it, but it did at least give me the opportunity to learn to use dplyr, reshape2, ggplot, Shiny, and the choroplethr package. I’m also making heavy use of rvest, which I have explored already in some of my previous posts.

This post is divided into three sections. First, using ggplot, we chart a time view of the race for both parties. Then we use the choroplethr package to try to display the performance of the candidates in relation to a demographic variable. Finally, a simple Shiny app gives us another way to visualize and slice through the data.

Excited? Let’s get started!

Section 1: Time view of the race

I’ve decided not to explicitly show the code in this post, but if you are interested, you can find the complete code for this section here. Should you have any questions or comments, don’t hesitate to leave them below.

My data is scraped from: http://www.politico.com/2016-election/results/map/president. At the time of writing, the website contained election results up to April 9th.

So, here’s how the race is shaping up so far for the republican party:

[Figure: Republican delegate race over time]

The solid grey horizontal line is the finish line: it’s the number of pledged delegates a candidate must have in order to be declared the winner (in theory, that is, assuming the pledged delegates don’t change their minds!). The dashed grey line represents 50% of the running total number of delegates: it’s a way to quickly visualize how well the leading candidate (in this case Trump) is doing compared to where he needs to be come the end of the race.

An important observation here is that the red line is below the dashed grey line, which means that even though he is way ahead of his main rival, Trump still needs to pick up the pace if he wants a guaranteed win. And it is no secret that if he doesn’t get the guaranteed win, he is toast. Which is why you’ll often hear Trump say that Kasich should quit, because he actually needs those votes.

Compare this to the picture in the democratic party:

[Figure: Democratic delegate race over time]

Here you can see that not only does Clinton have a comfortable lead over her opponent, she is also well poised to have the majority by the end of the race, if the trend continues.

Section 2: Map view

We now turn our attention to the county-by-county election results. In this section we will focus on the two leading candidates of each party: Clinton and Sanders for the Democrats, Trump and Cruz for the Republicans. We will be counting votes this time (instead of delegate counts).

For this purpose, I have normalized the vote counts as if only the two leading candidates were competing (so that their combined vote percentages sum to 1). This does introduce some bias in the results, but it is an easy and reasonable way to deal with the fact that the number of candidates changed over time as candidates dropped out of the race.
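As a rough sketch of what that normalization looks like (the data frame, column names and vote counts below are made up for illustration, not the exact ones from my dataset):

```r
library(dplyr)

# toy results for one county
votes <- data.frame(
  county    = c('somewhere', 'somewhere', 'somewhere'),
  candidate = c('H. Clinton', 'B. Sanders', 'M. O\'Malley'),
  votes     = c(600, 300, 100)
)

votes_2way <- votes %>%
  filter(candidate %in% c('H. Clinton', 'B. Sanders')) %>%
  group_by(county) %>%
  mutate(share = votes / sum(votes)) %>%  # shares of the two-way race
  ungroup()

# Clinton: 600/900, Sanders: 300/900 -- the two shares now sum to 1
```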

I tried scraping the same website as in Section 1 (you can click on the Detailed Results link to get the county results)… unsuccessfully, because the page content is dynamically generated via JavaScript.
So I resorted to:

  1. manually scrolling down the page to force-generate the HTML
  2. saving the HTML code (thanks to the Google inspector) to my hard drive
  3. scraping the code locally with rvest
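In code, step 3 looks roughly like this (a sketch: the ‘states’ folder is the one described below, and the table selector is an assumption that depends on how the pages were saved):

```r
library(rvest)

# each saved file corresponds to one state
files <- list.files('states', pattern = '\\.html$', full.names = TRUE)

results <- lapply(files, function(f) {
  page <- read_html(f)
  # the right selector depends on the structure of the saved pages
  html_table(html_nodes(page, 'table'), fill = TRUE)
})
```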

I’ve copied all the files (each file corresponds to a state) to Google Drive. Please note (if you want to run the code) that the files must be in a “states” subfolder of your working directory.
Also note that I haven’t used the results for Kansas and Minnesota, because they are by district instead of by county, and I couldn’t find a decent mapping between the two (if you do, please let me know!).
After that, I used the choroplethr package to download US census data (5 years, as of 2013). It is ridiculously easy. If you’re interested in knowing more, Ari has written a tutorial just for that.

A few mutate, filter, dcast and melt calls later, I had my desired result: the number of votes by state, county and candidate, merged with the US census demographics data. If you are interested in playing with this dataset, I am also making it available here.

I was particularly interested in the relationship between Trump’s popularity and the proportion of the Hispanic population in a given county.
In the map below, I have used hue to represent Trump’s popularity: counties where he was more popular (his vote percentage is higher than his country-wide median) are in green, and counties where he is less popular are in purple.
The same idea was applied to the Hispanic population percentage, using saturation this time: a saturated color for counties with a percentage of Hispanics in the top 50%, and a pale color for the others.
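Concretely, the color coding comes down to two median cuts, something like this (a sketch with made-up numbers; the variable names are assumptions):

```r
library(dplyr)

# toy county-level data
map_data <- data.frame(
  trump_share      = c(0.6, 0.3, 0.7, 0.2),
  percent_hispanic = c(40, 5, 10, 55)
)

map_data <- map_data %>%
  mutate(
    # hue: is Trump doing better here than in his median county?
    hue        = ifelse(trump_share > median(trump_share), 'green', 'purple'),
    # saturation: is the county in the top 50% for percent Hispanic?
    saturation = ifelse(percent_hispanic > median(percent_hispanic),
                        'saturated', 'pale')
  )
```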
And here is the result:

[Figure: Trump popularity vs. Hispanic population percentage, by county]

Based on this, would you say that Trump is unpopular among Hispanics? I guess it depends! Looking for example at Florida and Texas, which both have most counties in the top 50% when it comes to percent Hispanic, one could say that the Hispanics of Florida love Trump, while those of Texas, not so much… could it have something to do with the infamous Mexican wall…

Another relationship I was curious about is that between Sanders’s popularity and the per capita income in a county:

[Figure: Sanders popularity vs. per-capita income, by county]

Here the relationship is pretty obvious! A lot of saturated green and a lot of pale purple, as well as a clear north/south division. Sanders is popular in the North and North-West, and in top-50% counties with regard to per-capita income.

If you’re interested, you can easily map other relationships. Get the code here.
The function call is as easy as:

plot_map("B. Sanders", "per_capita_income")

Section 3: Shiny

In this section, I wanted to be able to slice through the data to get an overall feel of how well a candidate is doing with respect to a particular demographic. Whereas in the preceding section we could quickly see the difference between states, here the focus is more on the demographic variable.

You can find the app here, and the code here.

For example, let us take a second look at the relationship between Trump’s popularity and the proportion of the Hispanic population:

[Figure: Trump’s vote share vs. Hispanic population percentage]

What about Sanders and per capita income:

[Figure: Sanders’s vote share vs. per capita income]

And, just for fun, Ted Cruz with median age:

[Figure: Cruz’s vote share vs. median age]

Finally, let us look at all candidates, with the percentage of black population:

[Figure: all candidates vs. percentage of black population]

Whoa! The difference between Clinton and Sanders there is astonishing!

That’s it for today! I hope you enjoyed this post, and that you will have fun playing with the Shiny app.