A post about data denormalization and uncovering dirty data.

That's Not Normal.

Let's chat denormalization! Here's a list of the tables in my database. I've collected stats from the 2013 TdF for the Yellow (GC), Green (Points), and Polka Dot (Mountains) jersey competitions – the dependent variables. The other tables have stats on rider performance in specific areas throughout the season, which I'm hoping will give some indication of TdF performance.

Totally tabular!

Now that I've got a big ole database full of stats, it's time to denormalize the data for analysis. You can find the SQL script I used here. At the time of this writing, the script was recently updated to add 0s for null points – thanks to this StackOverflow article for help on how to provide default values for nulls.

I chose to use left outer joins from the tdf_gc table based on rider name. This means I get a table with a row for every rider from the tdf_gc table and columns from the other stats tables I join. If a rider doesn't have stats in a given table (e.g., I wouldn't expect sprinter Mark Cavendish to have an entry in the individual_mountains table), then a null is placed in that column.
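In sketch form, the join looks something like this. The tdf_gc and individual_mountains tables are named above; the individual_sprints table and the rider/points column names are my illustrative assumptions, not necessarily what the actual script uses:

    import psycopg2

    # Illustrative connection -- adjust dbname/credentials to your setup.
    conn = psycopg2.connect(dbname="tdf")
    cur = conn.cursor()

    # Left outer joins from tdf_gc keep a row for every GC rider;
    # COALESCE swaps the nulls from non-matching tables for 0s.
    cur.execute("""
        SELECT g.rider,
               g.rank                AS tdf_gc_rank,
               COALESCE(m.points, 0) AS mountain_points,
               COALESCE(s.points, 0) AS sprint_points
        FROM tdf_gc g
        LEFT OUTER JOIN individual_mountains m ON m.rider = g.rider
        LEFT OUTER JOIN individual_sprints   s ON s.rider = g.rider;
    """)

    for row in cur.fetchall():
        print(row)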

Kind of an eye chart, but if you click on that image you'll notice that Joaquim Rodriguez is missing quite a lot of information. It doesn't make sense that someone ranked so high in the TdF GC doesn't have an entry in the season GC or PCS Rank.

That's Not Normal.

Poking around, I discovered the culprit!


RODRíGUEZ, RODRÍGUEZ! Or, an international case study

Just use UPPER() or LOWER()! Not so fast, podnah – that “Í” is a character especial. Fortunately, I'm using Postgres 9.3, so I can specify a locale on a per-query basis.

Using “collate” to specify locale
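In sketch form, the per-query collation looks something like this. The "es_ES" collation name is an assumption on my part – check your server's pg_collation catalog for what's actually installed:

    import psycopg2

    conn = psycopg2.connect(dbname="tdf")  # illustrative connection
    cur = conn.cursor()

    # lower() case-maps according to the collation of its argument,
    # so forcing a Spanish locale folds that pesky "Í" down to "í".
    cur.execute("""
        SELECT rider
        FROM tdf_gc
        WHERE lower(rider COLLATE "es_ES") = lower(%s COLLATE "es_ES");
    """, ("RODRíGUEZ Joaquim",))
    print(cur.fetchall())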

That works for this case, but what about the rest of the riders from other countries? Apart from inspecting each rider manually, I'm presently unaware of another method for doing case conversion using a dynamic locale. Perhaps I would have benefited from screening the data for such characters especial before inserting them into my database.
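One locale-independent alternative for that kind of screening (a sketch of what I could have done, not what my script does) is Unicode normalization in Python: decompose accented characters, strip the combining marks, and casefold before comparing or inserting:

    import unicodedata

    def fold_name(name):
        """Accent-strip and casefold so 'RODRíGUEZ' matches 'RODRÍGUEZ'."""
        decomposed = unicodedata.normalize("NFKD", name)  # í -> i + combining acute
        stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
        return stripped.casefold()

    assert fold_name("RODRíGUEZ Joaquim") == fold_name("RODRÍGUEZ Joaquim")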

Lance Armstrong

I notice that another rider, Daniel Garcia Navarro, is also missing data similar to Rodriguez's. Wondering if the same internationalization issue is to blame, I check it out.

Le Sigh.

An internationalization issue indeed! Some of my data includes both of Navarro's “last names” while other data does not. Sadly, there is no built-in SQL feature for screening out irregularities in Latin last-name conventions vs. middle names vs. multiple first names.
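For what it's worth, one crude heuristic (my own sketch, not a standard fix) is to compare names as token sets and call it a match when one set contains the other – with an obvious false-positive risk for relatives and common surnames:

    def name_tokens(name):
        return set(name.casefold().split())

    def probably_same_rider(a, b):
        """Match 'NAVARRO Daniel' against 'GARCIA NAVARRO Daniel', etc."""
        ta, tb = name_tokens(a), name_tokens(b)
        return ta <= tb or tb <= ta

    assert probably_same_rider("NAVARRO Daniel", "GARCIA NAVARRO Daniel")
    assert not probably_same_rider("NAVARRO Daniel", "RODRIGUEZ Joaquim")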

Remember in my last post when I said I probably have more data cleaning to do?

sad Lance

I've heard it said that data science is primarily about getting the data clean, so I'll step away from the mic to work some more on the data set. I'm disappointed that I won't have a model to help me pick my Fantasy Tour de France team, but at least I have potato salad and apple pie to console me today. And unicorns, fireworks, etc.

Happy Independence Day US!

Happy Birfday #merica!


Lions and Cyclists and Bears

Back from a hiatus (vacation), I wanted to post a quick update on my project.

Rosters for Le Tour have been announced and I have been hard at work collecting datas. I updated my GitHub with a scraping script for team roster data from ProCyclingStats. I finally have all the stats of interest in my database from 2013 and 2014.

If you want to play along, you can create your own Fantasy Tour de France team. Besides trying to predict the winner, I may use the model to help pick my team, assuming it's ready before registration closes.

Now that all the team and individual data is populated the next step is denormalizing for analysis. Stay tuned!

Le Tour: King of the Mountain

So now that I have a mountain of data, what shall I do with it?

KOM of Data

Recall that I have data sets for individual riders and teams, and that each individual data set contains the top 100 riders in that category. Riders who are good at mountains are typically not good at sprints, so I have different riders in each data set. How do I create a list of riders to track for the TdF?

Historically, the winner of the TdF has been ranked highly in the General Classification (GC), so that would be a good place to start. As I mentioned, the supporting actors (domestiques in cycling language) play a key part as well. Teams announce their Tour de France rosters shortly before the event (as of this writing, teams have yet to announce their final rosters, though we know Sir Bradley Wiggins will most likely not be participating this year, much to his chagrin), so we don't know yet who will be riding.

Pulling in the top riders (by individual GC) for the top-ranked teams (by team PCS ranking) should give a reasonable approximation for an initial data set, and when team rosters are announced I can cull from the herd those who won't be making the cut.
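As a sketch, culling that initial list from the scraped files might look like this – the file names and column headers are stand-ins for whatever your .tsv data actually uses:

    import csv

    # Teams in the top 10 of the team PCS ranking.
    with open("team_pcs_ranking.tsv") as f:
        top_teams = {row["team"] for row in csv.DictReader(f, delimiter="\t")
                     if int(row["rank"]) <= 10}

    # Individual GC riders who ride for one of those teams.
    with open("individual_gc.tsv") as f:
        seed_riders = [row["rider"] for row in csv.DictReader(f, delimiter="\t")
                       if row["team"] in top_teams]

    print(seed_riders)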

Maybe next year Wiggo

I imported the .tsv files created by the scraping scripts into a PostgreSQL database. I was curious about the varchar vs text tradeoffs and found this article useful. I decided to leave the data normalized as it came to me from PCS, so each metric has its own table.
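A minimal sketch of that import for one metric – the table layout and file name are my assumptions, not the actual schema:

    import psycopg2

    conn = psycopg2.connect(dbname="tdf")  # illustrative connection
    cur = conn.cursor()

    # One table per metric, left normalized as PCS serves it;
    # text columns, per the varchar-vs-text article above.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS individual_gc (
            rank   integer,
            rider  text,
            team   text,
            points integer
        );
    """)

    # COPY the scraped .tsv straight in (assumes no header row).
    with open("individual_gc.tsv") as f:
        cur.copy_from(f, "individual_gc", sep="\t")
    conn.commit()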

An important note – presently I'm looking at data from this year. If I'm going to build a predictive model, I will need a training data set where I know the outcome for the dependent variable – the winner. Fortunately, I don't have to fabricate one; PCS has the metrics from last year, and we already know who won Le Tour. It would be preferable to have several years of examples, or to create a few more training examples based on the data we have, but for now I'm just planning on using the data from 2013. Since the database will have information spanning multiple years, I added a “year” field to all the metrics.
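Adding that field after the fact is a quick loop over the metric tables – the table names and the hard-coded season here are placeholders:

    import psycopg2

    conn = psycopg2.connect(dbname="tdf")  # illustrative connection
    cur = conn.cursor()

    # Tag existing rows with the season they were scraped from.
    for table in ("individual_gc", "individual_mountains", "team_pcs_ranking"):
        cur.execute("ALTER TABLE {} ADD COLUMN year integer;".format(table))
        cur.execute("UPDATE {} SET year = %s;".format(table), (2013,))
    conn.commit()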

What metric keeps track of contaminated meat consumption?

Adventures in Data Science: Le Tour

Le Tour de France. A 3-week torture fest featuring svelte men in spandex rolling along the French countryside. Armed with (French) pastries, I enjoy tuning in ridiculously early in the morning to watch this soap opera on wheels.

Recently I stumbled across Pro Cycling Stats. It seemed to be a perfect intersection of my interests in cycling and data science, so I hatched a little project to see if I could predict the winner of Le Tour.

Ridiculous? Of course – this is Le Tour, after all.

podium girls, yellow jersey
tour devil

Day 1. Data Gathering

Winning Grand Tours requires a great team, a strong GC candidate, and a lot of unquantifiable luck (e.g., not crashing into a Labrador, not being run off the road into a barbed-wire fence). What data, if any, would help in predicting the next TdF winner?

Pro Cycling Stats keeps track of a ton of information – General Classification (GC), special points (sprints, mountains, prologues), Tours in various parts of the world, Spring Classics performance, etc. In an effort to keep this project manageable I limited myself to about 10 individual and team stats.

ProCyclingStats GC stats

Definitely going to include GC stats…

Using Beautiful Soup, I was able to scrape the stats of interest from the Pro Cycling Stats webpage. I created two generic Python scripts – one for scraping individual data, another for scraping team data. The scripts take a URL argument, so I was able to write a shell script that runs them over each stats page of interest. I chose this approach so I could easily add new stats pages to the analysis.
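A stripped-down sketch of that pattern is below. I'm assuming the stats sit in the page's first HTML table, which may not match the real PCS markup – the actual scripts are in the repo linked at the end:

    import sys
    import requests
    from bs4 import BeautifulSoup

    # Usage: python scrape.py <stats-page-url> > stats.tsv
    url = sys.argv[1]
    soup = BeautifulSoup(requests.get(url).text, "html.parser")

    # Grab every row of the first table, skipping the header row,
    # and emit tab-separated values.
    for row in soup.find("table").find_all("tr")[1:]:
        cells = [td.get_text(strip=True) for td in row.find_all("td")]
        print("\t".join(cells))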

As I was looking through the data, I noticed that some stats used “.” to separate thousands and others used “.” to indicate decimals. Interesting. (I sketch a cleanup for that below.) As you probably guessed, besides formatting differences, the scales of information are different too. Team Distance is in tens of thousands of miles, whereas a metric called “Most Efficient” was measured as “Ranking of fraction of points scored on maximum points possible.” What is the maximum number of points possible? Oh good, an Explanation link!

no info on most efficient

An excellent explanation.

It would appear that I have a bona fide Real World™ data set on my hands!
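On the “.” ambiguity above: since each stat sticks to one convention, the simplest cleanup is to map each column to the right parser. A sketch, with made-up column names:

    # "28.405" means 28405 for thousands-separated stats but a
    # decimal for others, so each column gets its own parser.
    def parse_thousands(raw):
        return int(raw.replace(".", ""))

    def parse_decimal(raw):
        return float(raw)

    # Illustrative column -> parser mapping.
    PARSERS = {
        "team_distance": parse_thousands,   # "28.405" miles -> 28405
        "most_efficient": parse_decimal,    # "0.372" stays a fraction
    }

    assert parse_thousands("28.405") == 28405
    assert parse_decimal("0.372") == 0.372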

I was delighted to have these scripts up and running within an hour, with no prior experience using Python. I wasn't mired down trying to find the right HTTP API for my target framework just to connect to the damn page. Compared to getting something up and running in .NET, this was a breeze.

The code for the scraping scripts and the shell script is on my GitHub: CyclingStats.