A window function is a variation on an aggregation function. Where an aggregation function, like mean(), takes n inputs and returns a single value, a window function returns n values. The output of a window function depends on all its input values, so window functions don’t include functions that work element-wise, like round(). Window functions include variations on aggregate functions, like cummean(), functions for ranking and ordering, like rank(), and functions for taking offsets, like lead() and lag().
In this vignette, we’ll use a small sample of the Lahman batting dataset, including the players that have won an award.
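The setup code didn’t survive extraction; the following is a minimal sketch of how such a sample might be built, assuming the Lahman package (its Batting and AwardsPlayers tables) and the batting and players names used in the examples below:

```r
library(dplyr)
library(Lahman)

# A sketch of one plausible setup: keep a few batting columns and
# restrict to players that appear in the awards table.
batting <- Lahman::Batting %>%
  as_tibble() %>%
  select(playerID, yearID, teamID, G, AB:H) %>%
  arrange(playerID, yearID, teamID) %>%
  semi_join(Lahman::AwardsPlayers, by = "playerID")

# The grouped version used by the per-player examples.
players <- batting %>% group_by(playerID)
```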
Window functions are used in conjunction with mutate() and filter() to solve a wide range of problems. Here’s a selection:

```r
# For each player, find the two years with most hits
filter(players, min_rank(desc(H)) <= 2 & H > 0)

# Within each player, rank each year by the number of games played
mutate(players, G_rank = min_rank(G))

# For each player, find every year that was better than the previous year
filter(players, G > lag(G))

# For each player, compute avg change in games played per year
mutate(players, G_change = (G - lag(G)) / (yearID - lag(yearID)))

# For each player, find all years where they played more games than average
filter(players, G > mean(G))

# For each player, compute a z score based on number of games played
mutate(players, G_z = (G - mean(G)) / sd(G))
```
There are five main families of window functions. Two families are unrelated to aggregation functions:

Ranking and ordering functions: row_number(), min_rank(), dense_rank(), cume_dist(), percent_rank(), and ntile(). These functions all take a vector to order by, and return various types of ranks.

Offsets: lead() and lag() let you access the previous and next values in a vector, making it easy to compute differences and trends.
The other three families are variations on familiar aggregate functions:
Cumulative aggregates: cumsum(), cummin(), and cummax() from base R, plus cumall(), cumany(), and cummean() from dplyr.

Rolling aggregates operate in a fixed width window. You won’t find them in base R or in dplyr, but there are many implementations in other packages, such as RcppRoll.
Recycled aggregates, where an aggregate is repeated to match the length of the input. These are not needed in R because vector recycling automatically recycles aggregates where needed. They are important in SQL, because the presence of an aggregation function usually tells the database to return only one row per group.
Each family is described in more detail below, focussing on the general goals and how to use them with dplyr. For more details, refer to the individual function documentation.
The ranking functions are variations on a theme, differing in how they handle ties:
```r
x <- c(1, 1, 2, 2, 2)

row_number(x)
#> [1] 1 2 3 4 5
min_rank(x)
#> [1] 1 1 3 3 3
dense_rank(x)
#> [1] 1 1 2 2 2
```
If you’re familiar with R, you may recognise that these ranking functions can be computed with the base rank() function and various values of its ties.method argument. The dplyr functions are provided to save a little typing, and to make it easier to convert between R and SQL.
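As a quick check (not from the original text), the first two functions above line up with rank()’s "first" and "min" tie methods:

```r
x <- c(1, 1, 2, 2, 2)

# ties.method = "first" matches row_number()
rank(x, ties.method = "first")
#> [1] 1 2 3 4 5

# ties.method = "min" matches min_rank()
rank(x, ties.method = "min")
#> [1] 1 1 3 3 3
```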
Two other ranking functions return numbers between 0 and 1: percent_rank() gives the percentage of the rank, and cume_dist() gives the proportion of values less than or equal to the current value. These are useful if you want to select (say) the top 10% of records within each group. For example:
```r
filter(players, cume_dist(desc(G)) < 0.1)
#> Source: local data frame [995 x 7]
#> Groups: playerID
#>
#>    playerID yearID teamID     G    AB     R     H
#>       <chr>  <int> <fctr> <int> <int> <int> <int>
#> 1  bondto01   1880    BSN    76   282    27    62
#> 2 hinespa01   1887    WS8   123   478    83   147
#> 3 hinespa01   1888    IN3   133   513    84   144
#> 4 radboch01   1883    PRO    89   381    59   108
#> # ... with 991 more rows
```
ntile() divides the data up into n evenly sized buckets. It’s a coarse ranking, and it can be used with mutate() to divide the data into buckets for further summary. For example, we could use ntile() to divide the players within a team into four ranked groups, and calculate the average number of games within each group.
```r
by_team_player <- group_by(batting, teamID, playerID)
by_team <- summarise(by_team_player, G = sum(G))
by_team_quartile <- group_by(by_team, quartile = ntile(G, 4))
summarise(by_team_quartile, mean(G))
#> # A tibble: 4 × 2
#>   quartile `mean(G)`
#>      <int>     <dbl>
#> 1        1  27.16460
#> 2        2  97.61757
#> 3        3 271.80831
#> 4        4 976.00873
```
All ranking functions rank from lowest to highest so that small input values get small ranks. Use desc() to rank from highest to lowest.
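As a quick illustration (not from the original text):

```r
x <- c(1, 1, 2, 2, 2)

# Wrapping the input in desc() reverses the direction,
# so the largest values get rank 1.
min_rank(desc(x))
#> [1] 4 4 1 1 1
```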
lead() and lag() produce offset versions of an input vector that is either ahead of or behind the original vector. You can use them to:
Compute differences or percent changes. Using lag() is more convenient than diff() because for n inputs lag() returns n outputs, whereas diff() returns only n - 1.
Find out when a value changes (both uses are sketched below).
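A minimal illustration of both uses, with made-up values:

```r
x <- c(10, 12, 12, 15)

# Difference from the previous element (NA for the first)
x - lag(x)
#> [1] NA  2  0  3

# TRUE wherever the value changed from the previous element
x != lag(x)
#> [1]    NA  TRUE FALSE  TRUE
```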
lead() and lag() have an optional argument order_by. If set, instead of using the row order to determine which value comes before another, they will use another variable. This is important if you have not already sorted the data, or you want to sort one way and lag another.
Here’s a simple example of what happens if you don’t specify order_by when you need it:
```r
df <- data.frame(year = 2000:2005, value = (0:5) ^ 2)
scrambled <- df[sample(nrow(df)), ]

wrong <- mutate(scrambled, running = cumsum(value))
arrange(wrong, year)
#>   year value running
#> 1 2000     0       0
#> 2 2001     1      55
#> 3 2002     4      20
#> 4 2003     9      54
#> 5 2004    16      16
#> 6 2005    25      45

right <- mutate(scrambled, running = order_by(year, cumsum(value)))
arrange(right, year)
#>   year value running
#> 1 2000     0       0
#> 2 2001     1       1
#> 3 2002     4       5
#> 4 2003     9      14
#> 5 2004    16      30
#> 6 2005    25      55
```
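Because lead() and lag() accept order_by directly (unlike cumsum() above), the same idea needs no helper there; a quick sketch reusing the scrambled data:

```r
# lag() takes order_by itself, so no wrapper is required
mutate(scrambled, prev = lag(value, order_by = year))
```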
Base R provides cumulative sum (cumsum()), cumulative min (cummin()), and cumulative max (cummax()). (It also provides cumprod(), but that is rarely useful.) Other common accumulating functions are cumall() and cumany(), cumulative versions of && and ||, and cummean(), a cumulative mean. These are not included in base R, but efficient versions are provided by dplyr.

cumall() and cumany() are useful for selecting all rows up to, or all rows after, a condition is true for the first (or last) time. For example, we can use cumany() to find all records for a player after they played a year with 150 games:
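The example code here didn’t survive extraction; with the grouped players data from the setup above, it plausibly looked something like:

```r
# Keep every season from a player's first 150-game year onwards
filter(players, cumany(G > 150))
```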
As with lead() and lag(), you may want to control the order in which the accumulation occurs. None of the built-in functions have an order_by argument, so dplyr provides a helper: order_by(). You give it the variable you want to order by, and then the call to the window function:
```r
x <- 1:10
y <- 10:1

order_by(y, cumsum(x))
#>  [1] 55 54 52 49 45 40 34 27 19 10
```
This function uses a bit of non-standard evaluation, so I wouldn’t recommend using it inside another function; use the simpler but less concise with_order() instead.
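A sketch of the with_order() form, assuming its (order_by, fun, x) argument order and reusing x and y from above:

```r
# Equivalent to order_by(y, cumsum(x)), but without non-standard
# evaluation: ordering variable, then the function, then its input.
with_order(y, cumsum, x)
```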
R’s vector recycling makes it easy to select values that are higher or lower than a summary. I call this a recycled aggregate because the value of the aggregate is recycled to be the same length as the original vector. Recycled aggregates are useful if you want to find all records greater than the mean or less than the median:
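The example code here was lost in extraction; it presumably resembled:

```r
filter(players, G > mean(G))
filter(players, G < median(G))
```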
While most SQL databases don’t have an equivalent of median() or quantile(), when filtering you can achieve the same effect with ntile(). For example, x > median(x) is equivalent to ntile(x, 2) == 2, and x > quantile(x, 0.75) is equivalent to ntile(x, 100) > 75 or ntile(x, 4) > 3.
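A small check of the first equivalence, with values invented for illustration:

```r
x <- 1:10

x > median(x)
#>  [1] FALSE FALSE FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE

# The same rows, using only a ranking function
ntile(x, 2) == 2
#>  [1] FALSE FALSE FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE
```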
You can also use this idea to select the records with the highest (x == max(x)) or lowest value (x == min(x)) for a field, but the ranking functions give you more control over ties, and allow you to select any number of records.
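For example (a sketch, not from the original text), with the grouped players data:

```r
# Keeps every season tied for a player's maximum games
filter(players, G == max(G))

# Keeps exactly one season per player, even in the presence of ties
filter(players, row_number(desc(G)) == 1)
```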
Recycled aggregates are also useful in conjunction with mutate(). For example, with the batting data, we could compute the “career year”, the number of years a player has played since they entered the league:
```r
mutate(players, career_year = yearID - min(yearID) + 1)
#> Source: local data frame [19,113 x 8]
#> Groups: playerID [1,322]
#>
#>   playerID yearID teamID     G    AB     R     H career_year
#>      <chr>  <int> <fctr> <int> <int> <int> <int>       <dbl>
#> 1 bondto01   1874    BR2    55   245    25    54           1
#> 2 bondto01   1875    HR1    72   289    32    77           2
#> 3 bondto01   1876    HAR    45   182    18    50           3
#> 4 bondto01   1877    BSN    61   259    32    59           4
#> # ... with 19,109 more rows
```
Or, as in the introductory example, we could compute a z-score:
```r
mutate(players, G_z = (G - mean(G)) / sd(G))
#> Source: local data frame [19,113 x 8]
#> Groups: playerID [1,322]
#>
#>   playerID yearID teamID     G    AB     R     H        G_z
#>      <chr>  <int> <fctr> <int> <int> <int> <int>      <dbl>
#> 1 bondto01   1874    BR2    55   245    25    54 0.39424925
#> 2 bondto01   1875    HR1    72   289    32    77 1.02437412
#> 3 bondto01   1876    HAR    45   182    18    50 0.02358756
#> 4 bondto01   1877    BSN    61   259    32    59 0.61664626
#> # ... with 19,109 more rows
```