Simple Math Riddle

Here’s a deceptively simple riddle for you:

Using the numbers 1, 3, 4, 6 exactly once each, and any combination of +, -, *, /, plus parenthesis if you need to, try to arrive at the number: 24.

Give it a try. If you find the solution, I applaud you! (I couldn’t.)

If you don’t (or if you just like to read recursive algorithms), you can find a solution in R here, where I use the superb data.tree package to do a brute-force random search.
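If you’d rather tinker before peeking, here is a minimal base-R sketch of the same brute-force idea (the linked post uses data.tree; this simpler version just shuffles numbers, draws random operators and tries a few parenthesizations until something evaluates to 24):

```r
# Brute-force random search for the 24 riddle, in base R.
set.seed(42)
ops <- c("+", "-", "*", "/")
repeat {
  n <- sample(c(1, 3, 4, 6))                 # a random permutation of the numbers
  o <- sample(ops, 3, replace = TRUE)        # three random operators
  forms <- c(                                # three possible parenthesizations
    sprintf("((%s %s %s) %s %s) %s %s", n[1], o[1], n[2], o[2], n[3], o[3], n[4]),
    sprintf("(%s %s %s) %s (%s %s %s)", n[1], o[1], n[2], o[2], n[3], o[3], n[4]),
    sprintf("%s %s (%s %s (%s %s %s))", n[1], o[1], n[2], o[2], n[3], o[3], n[4])
  )
  expr <- sample(forms, 1)
  val <- eval(parse(text = expr))
  if (isTRUE(all.equal(val, 24))) break
}
expr  # an expression using 1, 3, 4, 6 that evaluates to 24
```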

Montreal: breaking & entering

I finally signed the lease for my future apartment. Now it’s time to see whether my new neighborhood is safe or not!


Check out the code to create the map above here.

The 4 red neighborhoods are:

  • 23: Hochelaga-Maisonneuve
  • 26: Côte-des-Neiges
  • 38: Le Plateau-Mont-Royal partie sud
  • 44: Rosemont et La Petite-Patrie

Oh and in case you’re wondering…my neighborhood is “Very low risk” 🙂

TSX Stock Scraper

Whenever I get the unfortunate idea to invest in the stock market, I start by playing with the Google Finance stock screener.

This time, instead of burning my money, I thought my time would be better spent building a stock screener of my own.

The step-by-step code is here:

The result is a data frame that looks like this:


For an example of visualization:


In the above plot, I’ve removed outliers and displayed some stock symbols in the north-west corner, i.e. stocks with low Price/Earnings ratio and high BookValue/Price ratio.

U.S. Trade in Goods


After hearing Donald Trump repeat ad nauseam that “We [the U.S.] are losing in trade to China, losing to Mexico…”, I decided to check for myself!



  • U.S. exports to China have been steadily increasing for the last 15 years, but the imports from China have sky-rocketed during the same period.
  • Trade balance with Mexico has actually been pretty stable for the last 10 years.
  • Among the top-8 commercial partners, Canada is the 1st importer of U.S. goods. Mexico is second, China is third.
  • Exchanges with Canada have dropped sharply in 2015.





Fetching US census data with R: a short tutorial

In this short, self-contained post, I show how to download and tidy a cool data set of US census data. The post is meant to be the first of a series showcasing data scraping, wrangling, visualization and modelling.


Step 1: Get your free API key

Here’s the link
Install your API key:
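With the acs package (the same package that provides the geo.make and acs.fetch functions used below), installing the key looks like this; the key string is of course a placeholder for your own:

```r
# install.packages("acs")   # if not already installed
library(acs)

# Register your Census API key with the acs package (stored for future sessions).
api.key.install(key = "YOUR_API_KEY")   # placeholder: paste your own key here
```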


Step 2: Select a survey and a table to explore.

There is an excellent variety of data you can play with!
Here’s the link

Step 3: Fetch the data.

In the example below, I’ve chosen to explore the table number B15001 in the 2010-2014 American Community Survey 5-Year Estimates, which contains data related to educational level, by age and gender.

First you need to call the geo.make function. The call below simply means “give me all states and counties”

county_geo <- geo.make(state = '*', county = '*')

Then we actually get the data. It’s important to set the col.names parameter to “pretty” to get understandable variable names.

education_acs <- acs.fetch(geography = county_geo, table.number = 'B15001', 
          col.names = 'pretty', endyear = 2014, span = 5)

What you have at this point is an acs object.
Two slots are of particular interest to us. First, the @geography slot which contains the state/county name in the $NAME attribute.
Let’s go ahead and extract those right away:

state_county <- education_acs@geography$NAME %>% str_split(",")

county <- state_county  %>%
            sapply(`[[`, 1) %>%
            str_replace(' County', '')

state <-  state_county %>%
            sapply(`[[`, 2)

And the @estimate slot which contains the actual census values.
The @estimate slot is actually a matrix:

##  num [1:3220, 1:83] 40922 148019 21257 17750 43768 ...
##  - attr(*, 'dimnames')=List of 2
##   ..$ : chr [1:3220] 'Autauga County, Alabama' 'Baldwin County, Alabama' 'Barbour County, Alabama' 'Bibb County, Alabama' ...

Step 4: Tidy your data

As you can see, there is a separate column for every level of drilldown.
So we have a little bit of work to do in order to get a tidy dataset. Come on then, let’s get to it!

In the end, what we really want, is a dataframe in long format, with the state and county variables, then one variable for the education level, one for the age group, one for the gender, and finally the census value.

Because there are so many columns, and also because in R we trust,
there is no way in hell we are going to do this manually.

You can check the code below if you’re curious; suffice it to say that the expand.grid base function was super useful!

df_education = NULL

education <- c('Total', '< 9th grade', '< 12th grade', 'High school', 'College', 'Associate', 'Bachelor', 'Graduate')
age <- c('18-24', '25-34', '35-44', '45-64', '65+')
sex = c('Male', 'Female')

columns = c(3:42, 44:83)
df_exp <- expand.grid(education=education, age=age, sex=sex)

for(i in 1:length(columns)){
  df_education <- rbind(df_education,
                        data.frame(county = county,
                                   state = state,
                                   sex = (df_exp$sex)[i],
                                   age = (df_exp$age)[i],
                                   education = (df_exp$education)[i],
                                   value = education_acs@estimate[, columns[i]]))
}

I had to include the ‘Total’ level of education in the loop because of the way the columns of the matrix are enumerated, but I don’t actually want to keep it:

df_education <- df_education %>% filter(education != 'Total')

And guess what my friends…we now have a freakin cool dataset with 5 variables, 1 value and 225400 observations:

##    county    state  sex   age   education value
## 1 Autauga  Alabama Male 18-24 < 9th grade    36
## 2 Baldwin  Alabama Male 18-24 < 9th grade   172
## 3 Barbour  Alabama Male 18-24 < 9th grade   123
## 4    Bibb  Alabama Male 18-24 < 9th grade    68
## 5  Blount  Alabama Male 18-24 < 9th grade    39
## 6 Bullock  Alabama Male 18-24 < 9th grade     0

That’s it for today! We’re gonna keep it short & sweet. Our data is ready, we can take a break and come back later to play with it.

In the meantime, if you want to get a headstart, or have any suggestions, please feel free to leave a comment.

Next time, we will be doing some visualizations using this dataset, and maybe we’ll do some web scraping and merge interesting information onto it.

Come back soon !

The full, reproducible and ready-to-spin code can be found here.
Do remember to install your API key first!

USA – Ethnicity & Income

Following up on my previous post, it hit me that I had a pretty cool dataset I could explore, thanks to the choroplethr package.

The dataset I’m referring to comes from the US 2013 Census data. Let’s take a look:

##   region total_population percent_white percent_black percent_asian
## 1   1001            54907            76            18             1
## 2   1003           187114            83             9             1
## 3   1005            27321            46            46             0
## 4   1007            22754            75            22             0
## 5   1009            57623            88             1             0
## 6   1011            10746            22            71             0
##   percent_hispanic per_capita_income median_rent median_age
## 1                2             24571         668       37.5
## 2                4             26766         693       41.5
## 3                5             16829         382       38.3
## 4                2             17427         351       39.4
## 5                8             20730         403       39.6
## 6                6             18628         276       39.6

One thing in particular I was curious about was this: how can I show on a map how much a particular group of the population (e.g. the black population) is hit by poverty?

We do have a percentage of black population as well as a per-capita income for each county. Let us merge this data onto the regions data from the choroplethr package:

county_data  <- county.regions %>% left_join(demographics)
## Joining by: 'region'
##   region county.fips.character county.name state.name state.fips.character
## 1   1001                 01001     autauga    alabama                   01
## 2   1003                 01003     baldwin    alabama                   01
## 3   1005                 01005     barbour    alabama                   01
## 4   1007                 01007        bibb    alabama                   01
## 5   1009                 01009      blount    alabama                   01
## 6   1011                 01011     bullock    alabama                   01
##   state.abb total_population percent_white percent_black percent_asian
## 1        AL            54907            76            18             1
## 2        AL           187114            83             9             1
## 3        AL            27321            46            46             0
## 4        AL            22754            75            22             0
## 5        AL            57623            88             1             0
## 6        AL            10746            22            71             0
##   percent_hispanic per_capita_income median_rent median_age
## 1                2             24571         668       37.5
## 2                4             26766         693       41.5
## 3                5             16829         382       38.3
## 4                2             17427         351       39.4
## 5                8             20730         403       39.6
## 6                6             18628         276       39.6

The idea will be to rely on the relatively unknown yet uber-cool harmonic mean. This mean has the property that it is high only when both components (in my case I will only have two) are high.

Here’s the formula for the harmonic mean of two numbers:

harmonic_mean = function(x, y){
  2 * x * y / (x + y)
}

So I will be using the harmonic mean of the percentage of black population and the per-capita income. But to make those measures play nice together, let us first transform them into ranks and map each rank to the [0,1] interval:

county_data$percent_black_ranked <- rank(county_data$percent_black, ties.method='first')
county_data$percent_black_ranked <- county_data$percent_black_ranked / max(county_data$percent_black_ranked)

county_data$per_capita_income_ranked  <- rank(county_data$per_capita_income, ties.method='first')
county_data$per_capita_income_ranked <- 1 - county_data$per_capita_income_ranked / max(county_data$per_capita_income_ranked)

Notice that I took the complement of per-capita income rank to show poverty.

We are now ready to create our map:

county_data$value  <- harmonic_mean(county_data$percent_black_ranked, county_data$per_capita_income_ranked)

county_choropleth(county_data) + scale_fill_brewer(palette=3) + ggtitle('Map of (black % of population & poverty) index')


The dark areas are where there is both a relatively high percentage of black population and a low per-capita income.

The same exercise with the hispanic population yields:


Out of curiosity, let us look at white % of population and wealth:

county_data$percent_white_ranked <- rank(county_data$percent_white, ties.method='first')
county_data$percent_white_ranked <- county_data$percent_white_ranked / max(county_data$percent_white_ranked)

county_data$per_capita_income_ranked  <- rank(county_data$per_capita_income, ties.method='first')
county_data$per_capita_income_ranked <- county_data$per_capita_income_ranked / max(county_data$per_capita_income_ranked)
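The snippet above stops at the ranking step; presumably the map is then produced exactly as before. This continuation is my own sketch, reusing the harmonic_mean helper and the county_choropleth call from earlier:

```r
# Combine the two ranked measures and map them, as in the previous examples.
county_data$value <- harmonic_mean(county_data$percent_white_ranked,
                                   county_data$per_capita_income_ranked)

county_choropleth(county_data) + scale_fill_brewer(palette = 3) +
  ggtitle('Map of (white % of population & wealth) index')
```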


US 2016 Elections

This post is my contribution to the challenge set up by Ari Lamstein. I wish I had more time to spend on it, but it did at least give me the opportunity to learn to use dplyr, reshape2, ggplot, Shiny, and the choroplethr package. I’m also making heavy use of rvest, which I have explored already in some of my previous posts.

This post is divided into three sections. First, using ggplot, we chart a time view of the race, for both parties. Then we use the choroplethr package to try and display the performance of the candidates in relation to a demographic variable. Finally, a simple Shiny app provides us another way to visualize and slice through the data.

Excited? Let’s get started!

Section 1: Time view of the race

I’ve decided not to explicitly show the code in this post, but if you are interested, you can find the complete code for this section here. Should you have any questions or comments, don’t hesitate to leave them below.

My data is scraped from this website. At the time of writing of this post, it contained election results up to April 9th.

So, here’s how the race is shaping up so far for the Republican party:


The grey solid horizontal line is the finish line: it’s the number of pledged delegates a candidate must have in order to be declared the winner (in theory, that is, assuming the pledged delegates don’t change their minds!). The dashed grey line represents 50% of the running total number of delegates: it’s a way to quickly visualize how well the leading candidate (in this case Trump) is doing compared to where he needs to be come the end of the race.

An important observation here is that the red line is below the dashed grey line, which means that even though he is way ahead of his main rival, Trump still needs to pick up the pace if he wants a guaranteed win. And it is a secret to no one that if he doesn’t get the guaranteed win, he is toast. Which is why you’ll often hear Trump say that Kasich should quit, because he actually needs those votes.

Compare this to the picture in the Democratic party:


Here you can see that not only does Clinton have a comfortable lead over her opponent, she is also well poised to have the majority by the end of the race, if the trend continues.

Section 2: Map view

We now turn our attention to the county-by-county election results. In this section we will focus on the two leading candidates of each party: Clinton and Sanders for the Democrats, Trump and Cruz for the Republicans. We will be counting the votes this time (instead of the delegate counts).

For this purpose, I have normalized the vote counts to be as if only the two leading candidates were competing (so that their combined vote percentages would sum to 1). This does introduce some bias in the results, but it is an easy and reasonable way to deal with the fact that the number of candidates changed through time due to some candidates dropping from the race.
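The renormalization described above can be sketched with dplyr; the toy `results` frame below is my own stand-in for the real scraped data (state, county, candidate, votes):

```r
library(dplyr)

# Hypothetical toy data standing in for the scraped county results.
results <- data.frame(
  state     = "XX", county = c("A", "A", "A"),
  candidate = c("D. Trump", "T. Cruz", "J. Kasich"),
  votes     = c(500, 300, 200)
)

# Keep the two leading candidates and renormalize their shares to sum to 1
# within each county.
two_way <- results %>%
  filter(candidate %in% c("D. Trump", "T. Cruz")) %>%
  group_by(state, county) %>%
  mutate(vote_pct = votes / sum(votes)) %>%
  ungroup()
```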

I tried scraping the same website as in section 1 (you can click on the Detailed Results link to get the county results)…unsuccessfully, because the page content is dynamically generated via JavaScript.
So I resorted to:

  1. manually scrolling down the page to force-generate the html
  2. saving the html code (thanks to google inspector) to my hard drive
  3. scraping the code locally with rvest

I’ve copied all the files (each file corresponds to a state) to Google Drive. Please note (if you want to run the code) that the files must be in a “states” subfolder of your working directory.
Also note that I haven’t used the results of Kansas and Minnesota…because they are by district instead of by county, and I couldn’t find a decent mapping between the two (if you do, please let me know!).
After that, I used the choroplethr package to download US census data (5 years, as of 2013). It is ridiculously easy. If you’re interested in knowing more, Ari has written a tutorial just for that.

A few mutate, filter, dcast and melt calls later, I had my desired result: the number of votes by state, county and candidate, merged with the US census demographics data. If you are interested in playing with this dataset, I am also making it available here.

I was particularly interested in the relationship between Trump’s popularity and the proportion of the Hispanic population in a given county.
In the map below, I have used hue to represent Trump’s popularity: counties where he was more popular (his vote percentage is higher than his country-wide median) are in green, counties where he is less popular are in purple.
The same idea was applied to the Hispanic population percentage, using saturation this time: a saturated color for counties with a percentage of Hispanics in the top 50%, and a pale color for the others.
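The two-way (hue × saturation) encoding above can be sketched as follows; the columns trump_pct and hispanic_pct are my own toy stand-ins for the real merged vote/census data:

```r
library(dplyr)

# Hypothetical toy frame standing in for the merged vote/census data.
map_df <- data.frame(trump_pct    = c(0.60, 0.40, 0.70, 0.30),
                     hispanic_pct = c(0.30, 0.05, 0.02, 0.40))

# Hue encodes Trump's popularity vs his median; saturation encodes whether
# the county is in the top 50% for percent Hispanic.
map_df <- map_df %>%
  mutate(hue   = ifelse(trump_pct    > median(trump_pct),    "green",     "purple"),
         sat   = ifelse(hispanic_pct > median(hispanic_pct), "saturated", "pale"),
         value = interaction(hue, sat))
```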
And here is the result:


Based on this, would you say that Trump is unpopular among Hispanics? I guess it depends! Looking for example at Florida and Texas, which both have most counties in the top 50% when it comes to percent Hispanic, one could say that the Hispanics of Florida love Trump, while those of Texas, not so much…could it have something to do with the infamous Mexican wall?

Another relationship I was curious about is that between Sanders’s popularity and the per capita income in a county:


Here the relationship is pretty obvious! A lot of saturated green, and a lot of pale purple, as well as a clear north/south division. Sanders is popular in the North and North-West, and in top-50% counties with regards to per-capita income.

If you’re interested, you can easily map other relationships. Get the code here.
The function call is as easy as:

plot_map("B. Sanders", "per_capita_income")

Section 3: Shiny

In this section, I wanted to be able to slice through the data to get an overall feel for how well a candidate is doing with regard to a particular demographic variable. Whereas in the preceding section we could quickly see the differences between states, here the focus is more on the demographic variable itself.

You can find the app here, and the code here.

For example, let us take a second look at the relationship between Trump’s popularity and the proportion of Hispanic population:


What about Sanders and per capita income:


And, just for fun, Ted Cruz with median age:


Finally, let us look at all candidates, with the percentage of black population:


Whoa! The difference between Clinton and Sanders there is astonishing!

That’s it for today! I hope you enjoyed this post, and that you will have fun playing with the Shiny app.

Loss ratio by variable

In my line of work (property & casualty insurance), one ratio we are constantly monitoring is the loss ratio, which is defined as the ratio of the loss amounts paid to the insured to the premium earned by the insurance company.

Ignoring non-claim-specific expenses (e.g. general expenses, marketing, etc.) and investment returns, a loss ratio of 100% means that the average premium we charge the insured covers exactly the average loss amount, i.e. we break even (the premium corresponding to a loss ratio of 100% is called the pure premium). Note that by definition, everything else being equal, a lower loss ratio translates into higher profits for the insurance company.

Sometimes, we want to look at this loss ratio conditionally on the values of a specific variable. That is, for a particular segmenting variable (age, gender, territory, etc.), we are interested in knowing the loss ratio for each category in this variable, along with the earned premium as a percentage of the total earned premium.

For example, such a report for the Age variable could look like this:

age_group = c("A. [16 - 25]", "B. [25 - 39]", "C. [40 - 64]", "D. [65+ ]   ")
weights = c(0.25, 0.1, 0.35, 0.3)
loss.ratios = c(0.6, 0.9, 0.55, 0.4)

df = data.frame(age_group=age_group, weight=weights, loss.ratio=loss.ratios)
##      age_group weight loss.ratio
## 1 A. [16 - 25]   0.25       0.60
## 2 B. [25 - 39]   0.10       0.90
## 3 C. [40 - 64]   0.35       0.55
## 4 D. [65+ ]      0.30       0.40

The global loss ratio for this portfolio is:

sum(df$weight * df$loss.ratio)
## [1] 0.5525

Here’s the question I’ve been asking myself lately: if I could select only one category of this variable (i.e. one age group) to try and take business measures to improve its profitability, which one should it be? In other words, which of these categories has the biggest negative impact on our overall loss ratio?

One possible answer would be to find the category which, if we improved its loss ratio by x%, would improve the global loss ratio the most. But if x is fixed, then this approach simply selects the category with the biggest weight…

A better solution would be to consider, for each age group, what the loss ratio of the portfolio would be if that age group were removed from consideration. For example, to calculate the impact of age group “A. [16 – 25]”, one can calculate the overall loss ratio of the portfolio consisting of age groups B. to D., and subtract that value from our original (entire portfolio, including group A.) loss ratio.

impacts = function(weights, loss.ratios){
  lr.global = sum(weights * loss.ratios)

  v = numeric()

  for(i in 1:length(weights)){
    w.without = weights[-i]/sum(weights[-i])
    lrs.without = loss.ratios[-i]
    lr = sum(w.without * lrs.without)
    v = c(v, lr.global - lr)
  }

  paste0(round(v*100, 1), "%")
}

df$lr.impact = impacts(weights, loss.ratios)
##      age_group weight loss.ratio lr.impact
## 1 A. [16 - 25]   0.25       0.60      1.6%
## 2 B. [25 - 39]   0.10       0.90      3.9%
## 3 C. [40 - 64]   0.35       0.55     -0.1%
## 4 D. [65+ ]      0.30       0.40     -6.5%

What this tells us is that the age group “B. [25 – 39]” has the biggest upward impact on our overall loss ratio: if we didn’t insure this group (or equivalently, if that group’s loss ratio was equal to the loss ratio of the rest of the portfolio), our loss ratio would be 3.9 points lower.
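As a quick sanity check on that 3.9-point figure for group B, here is the computation in isolation, using the same weights and loss ratios as above:

```r
weights     <- c(0.25, 0.10, 0.35, 0.30)
loss.ratios <- c(0.60, 0.90, 0.55, 0.40)

lr.all <- sum(weights * loss.ratios)       # global loss ratio: 0.5525
w.noB  <- weights[-2] / sum(weights[-2])   # reweight groups A, C, D to sum to 1
lr.noB <- sum(w.noB * loss.ratios[-2])     # loss ratio without group B

round((lr.all - lr.noB) * 100, 1)          # impact of group B, in points: 3.9
```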