gilgamath


Lego Pricing.

Mon 06 March 2017 by Steven E. Pav

It is time to get kiddo a new Lego set, as he's been on a bender this week, building everything he can get his hands on. I wanted to optimize play time per dollar spent, so I set out to look for Lego pricing data.

Not surprisingly, there are a number of good sources for this data. The best I found was at brickset. Sign up for an account, then go to their query builder. I built a query requesting all sets from 2011 onwards, selected the CSV option, copied the data to my clipboard, and dumped it via xclip -o > brickset_db.csv. The brickset data is updated over time, so there's no reason to prefer my file to one you download yourself.
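
If you would rather stay in R than shell out to xclip, a roughly equivalent step might look like the following sketch, assuming the clipr package is installed and the CSV is sitting on your clipboard:

library(clipr)
# Roughly equivalent to `xclip -o > brickset_db.csv`: write whatever is on
# the clipboard to a local file (assumes the clipr package).
writeLines(read_clip(), 'brickset_db.csv')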

First I load the data in R, filter on availability of piece and price data, and remove certain themes (Books, Duplo, and so on). I then subselect themes that show a wide range of both prices and piece counts:

library(readr)
library(dplyr)
indat <- readr::read_csv('../data/brickset_db.csv') %>%
    select(Number,Theme,Subtheme,Year,Name,Pieces,USPrice) 
## Rows: 4843 Columns: 18
## -- Column specification ----------------------------------------------------------------------------------------------------------------------------------
## Delimiter: ","
## chr  (7): Number, Theme, Subtheme, Name, ImageURL, Owned, Wanted
## dbl (10): SetID, Variant, Year, Minifigs, Pieces, UKPrice, USPrice, CAPrice,...
## lgl  (1): Rating
## 
## i Use `spec()` to retrieve the full column specification for this data.
## i Specify the column types or set `show_col_types = FALSE` to quiet this message.
subdat <- indat %>%
    filter(!is.na(Pieces),Pieces >= 10,
                 !is.na(USPrice),USPrice > 1,
                 !grepl('^(Books|Mindstorms|Duplo|.+Minifigures|Power Func|Games|Education|Serious)',Theme)) 

subok <- subdat %>%
    group_by(Theme) %>%
        summarize(many_sets=(sum(!is.nan(USPrice)) >= 10),
                     piece_spread=((max(Pieces) / min(Pieces)) >= 5),
                     price_spread=((max(USPrice) / min(USPrice)) >= 4)) %>%
    ungroup() %>%
    filter(many_sets & piece_spread & price_spread) %>% 
    select(-many_sets,-piece_spread,-price_spread)

subdat <- subdat %>%
    inner_join(subok,by='Theme …
read more

Odds of Winning Your Oscar Pool.

Mon 30 January 2017 by Steven E. Pav

In a previous blog post, I used a Bradley-Terry model to analyze Oscar Best Picture winners, using the best picture dataset. In that post I presented the results of likelihood tests that showed 'significant' relationships between winning the Best Picture category and co-nomination for other awards, MovieLens ratings, and (spuriously) the number of IMDb votes. It can be hard to interpret the effect sizes and \(t\) statistics from a Bradley-Terry model. So here I will try to estimate the probability of correctly guessing the Best Picture winner using this model.

There is no apparent direct translation from the coefficients of the model fit to the probability of correctly forecasting a winner, nor can you get there by transforming the maximized likelihood or an R-squared. Moreover, that probability will depend on the number of nominees (traditionally there were only 5 Best Picture nominations; these days it's upwards of 9), and on how they differ in the independent variables. Here I will keep it simple and use cross validation.
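
As a rough illustration of the walk-forward scheme described below, here is a minimal sketch. It assumes oslm's formula interface and a predict method that returns per-film win probabilities; the data frame and predictor names (dat, conom, first_y) are hypothetical placeholders, not the actual ones used in the post.

# Hypothetical sketch of walk-forward validation: fit on years before y,
# forecast year y, and check whether the top-probability film actually won.
# Assumes predict() on an oslm fit returns per-film win probabilities;
# `dat` has columns winner, year, and a co-nomination predictor `conom`.
walk_forward_hits <- function(dat, first_y) {
    years <- sort(unique(dat$year))
    years <- years[years >= first_y]
    hits <- sapply(years, function(y) {
        train <- dat[dat$year < y, ]
        test  <- dat[dat$year == y, ]
        modl  <- oslm(winner:year ~ conom, data = train)
        phat  <- predict(modl, newdata = test)   # assumed: per-film win probabilities
        test$winner[which.max(phat)]             # TRUE if we picked the true winner
    })
    mean(hits)                                   # fraction of years guessed correctly
}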

I modified the oslm code to include a predict method. Here I load the data and the code, remove duplicates, and restrict the data to the period after 1945. I construct the model formula based on co-nomination, then test it in three ways:

  • A purely 'in sample' validation where all the data are used to build the model, which is then tested on that same data. (The film with the highest forecast probability of winning is chosen as the predicted winner, of course.) This should give the most optimistic view of performance, even though the likelihood maximization problem does not directly select for this metric.
  • A walk-forward cross validation where the data up through year \(y-1\) are used to build the model, then it is used to forecast the winners in year \(y\). This is perhaps the most honest kind of cross validation for time …
read more

Predicting Best Picture Winners.

Thu 26 January 2017 by Steven E. Pav

In a previous blog post, I described some data I had put together for predicting winners in the Best Picture category of the Oscars. Here I will use a Bradley-Terry model to describe this dataset.

To test these models, I wrote an R function called oslm. I have posted the code. This code allows one to model the likelihood of winning an award as a function of some independent variables on each film, taking into account that one and only one film wins in a given year. The code supports computation of the likelihood function (and gradient and Hessian), and lets the maxLik package do the heavy lifting. Suppose one has a data frame with a boolean column winner to denote winners, a column year to hold the award year, and some independent variables, say x1, x2, and so on. Then one can invoke this code as

modl <- oslm(winner:year ~ x1 + x2,data=my_dataframe)

This is a bit heterodox, using the colon on the left-hand side. However, I wasn't sure where else to put it, and the code was not too vile to write. Since I did not know the name of this model, I did not know which existing packages supported this kind of analysis, so I wrote my own silly function.
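
The 'one and only one winner per year' structure amounts to a conditional logit: a linear score for each film, with a softmax taken within each award year. Here is a minimal sketch of the log-likelihood such a model maximizes, written so that maxLik could optimize it; it is an illustration under those assumptions, not the actual oslm internals.

library(maxLik)

# Illustrative log-likelihood for a 'one winner per year' model: a linear
# score per film, softmax within each award year. Not the actual oslm code.
one_winner_loglik <- function(beta, X, winner, year) {
    eta <- as.vector(X %*% beta)                        # score for each film
    sum(sapply(split(seq_along(eta), year), function(idx) {
        eta[idx][winner[idx]] - log(sum(exp(eta[idx]))) # log P(winner beats its cohort)
    }))
}

# e.g., with a design matrix X built from x1 and x2:
# fit <- maxLik(function(b) one_winner_loglik(b, X, winner, year),
#               start = rep(0, ncol(X)))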

Who's a winner?

Let's use this data and code instead of just talking about it. First, I load the data, rid it of duplicates (sorry about those), and convert some Boolean independent variables to numeric. I source the oslm code and then try a very simple model: looking at films from 1950 onward, can I predict the probability of winning Best Picture in terms of the (log of the) number of votes a film receives on IMDb, stored in the votes variable:

library(readr)
library(dplyr …
read more

Best Picture?

Sun 22 January 2017 by Steven E. Pav

For a brief time I found myself working in the field of film analytics. One of our mad scientist type projects at the time was trying to predict which films might win an award. As a training exercise, we decided to analyze the Oscars.

With such a great beginning, you might be surprised to find the story does not end well. Collecting the data for such an analysis was a minor endeavor. At the time we had scraped and cobbled together a number of different databases about films, but connecting them to each other was a huge frustration. Around the time we would have been predicting the Oscars, the floor fell out from under our funding, and we were unemployed three weeks after the 2015 Oscar winners were announced.

Our loss is your gain, as I am now releasing the first cut of the data frame I was using. The data are available in a CSV file here. The columns are as follows:

  • year is the year of the Oscars.
  • category should always be Best Picture here.
  • film is the title.
  • etc is extra information to identify the film.
  • winner is a Boolean for whether the film won in that category.
  • id and movie_id are internal IDs, and are of no use to you.
  • ttid is the best guess for the IMDb 'tt ID'.
  • title and production_year are from the IMDb data.
  • votes is the total number of votes the film has received on IMDb. (This is an old cut of the data.)
  • vote_mean, vote_sd are the mean and standard deviation of user votes for the film in IMDb.
  • vote1 and vote10 are the proportion of 1- and 10-star votes for the film in IMDb.
  • I do not remember what series is.
  • total_gross is one estimate of gross receipts, and bom is …
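
To make the layout concrete, here is a minimal sketch of pulling the file into R and glancing at the fields described above; the local filename bestpicture.csv is a hypothetical placeholder for wherever you save the CSV.

library(readr)
library(dplyr)

# Minimal sketch: read the CSV (hypothetical local filename) and peek at
# the columns described above.
oscars <- readr::read_csv('bestpicture.csv')
oscars %>%
    filter(category == 'Best Picture') %>%
    select(year, film, winner, votes, vote_mean, vote_sd, vote1, vote10) %>%
    arrange(desc(year)) %>%
    glimpse()
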
read more