Edit (2025-01-30): The text and benchmarks about Rfast are no longer correct. I had to rerun the post for various reasons and the code didn't work anymore. After fixing it, Rfast was running slower than back in January 2024. What do we learn from this? Use renv….
One of my New Year's resolutions is to blog a bit more about the random shenanigans I do with R. This is one of those.1
The task: write a Java program for retrieving temperature measurement values from a text file and calculating the min, mean, and max temperature per weather station. There's just one caveat: the file has 1,000,000,000 rows!
It didn't take long, also thanks to Hacker News, for the challenge to spread to other programming languages. The original repository contains a show & tell where results in other languages can be discussed.
Obviously it also spread to R, and there is a GitHub repository from Alejandro Hagan dedicated to the challenge. There were some critical discussions about the seemingly bad performance of data.table, but that issue thread also evolved into a discussion of other solutions.
The obvious candidates for fast solutions in R are dplyr, data.table, collapse, and polars. Of those, it appears that polars might solve the task the fastest.
I was curious how far one can get with base R.
Creating the data
The R repository contains a script to generate benchmark data. For the purpose of this post, I created files with 1e6 and 1e8 rows. Unfortunately, my personal laptop cannot handle 1 billion rows without dying.
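The generator script in the repository does the actual work; as a rough, hypothetical stand-in for readers who want to follow along, something like the following produces a file with the same shape (a numeric measurement plus a US state abbreviation per row), though not the exact distribution of the official data:

# Hypothetical stand-in for the repository's generator script:
# same shape (measurement + state), not the same distribution.
n <- 1e6
d <- data.frame(
  measurement = rnorm(n),
  state = sample(state.abb, n, replace = TRUE)
)
data.table::fwrite(d, "measurements1e6.csv")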
Reading the data
All base R functions will benefit from reading the state column as a factor instead of a plain string.
D <- data.table::fread("measurements1e6.csv", stringsAsFactors = TRUE)
D
measurement state
<num> <fctr>
1: 0.9819694 NC
2: 0.4687150 MA
3: -0.1079713 TX
4: -0.2128782 VT
5: 1.1580985 OR
---
999996: 0.7432489 FL
999997: -1.6855612 KS
999998: -0.1184549 TX
999999: 1.2774375 MS
1000000: -0.2800853 MD
Who would have thought that stringsAsFactors = TRUE can be useful.
The obvious: aggregate and split/lapply
The most obvious choice for me was to use aggregate().
sum_stats_vec <- function(x) c(min = min(x), max = max(x), mean = mean(x))
aggregate(measurement ~ state, data = D, FUN = sum_stats_vec) |> head()
state measurement.min measurement.max measurement.mean
1 AK -4.1044940643 4.2710094897 0.0030819995
2 AL -3.6350249689 4.5386719185 -0.0078110501
3 AR -3.8138004849 4.1011149408 0.0084538758
4 AZ -4.4150509298 3.9651248287 0.0002287343
5 CA -4.1597256267 4.1024463673 0.0136493032
6 CO -3.8604891180 4.2314151167 -0.0013340964
I was pretty sure that this might be the best solution.
The other obvious solution is to split the data frame by state and then lapply the summary statistics calculation on each list element.
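The split_lapply() helper shows up again in the benchmark below; here is a minimal sketch of how it can look (my reconstruction, the assembly of the result data frame is an assumption), which yields the output shown next:

# Sketch of split_lapply(); the data.frame assembly is my reconstruction
split_lapply <- function(D) {
  state_list <- split(D$measurement, D$state)
  res <- lapply(state_list, sum_stats_vec)
  data.frame(state = names(res), do.call("rbind", res))
}
split_lapply(D) |> head()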
state min max mean
AK AK -4.104494 4.271009 0.0030819995
AL AL -3.635025 4.538672 -0.0078110501
AR AR -3.813800 4.101115 0.0084538758
AZ AZ -4.415051 3.965125 0.0002287343
CA CA -4.159726 4.102446 0.0136493032
CO CO -3.860489 4.231415 -0.0013340964
The elegant: by
I stumbled upon by when searching for alternatives. I think it is a quite elegant way of solving a group/summarize task with base R. Unfortunately it returns a list and not a data frame or matrix (I made that an implicit requirement).
In the help for by I stumbled upon a function I wasn’t aware of yet: array2DF!
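Combining the two gives exactly the array2DF_by entry from the benchmark below:

array2DF(by(D$measurement, D$state, sum_stats_vec)) |> head()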
D$state min max mean
1 AK -4.104494 4.271009 0.0030819995
2 AL -3.635025 4.538672 -0.0078110501
3 AR -3.813800 4.101115 0.0084538758
4 AZ -4.415051 3.965125 0.0002287343
5 CA -4.159726 4.102446 0.0136493032
6 CO -3.860489 4.231415 -0.0013340964
Does exactly what is needed here. For the benchmarks, I will also include a version without the array2DF call, to check its overhead.
Another apply: tapply
In the help for by, I also stumbled upon this sentence:
Function by is an object-oriented wrapper for tapply applied to data frames.
So maybe we can construct a solution that uses tapply, but without any inbuilt overhead in by.
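This is the docall_tapply entry in the benchmark below:

do.call("rbind", tapply(D$measurement, D$state, sum_stats_vec)) |> head()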
min max mean
AK -4.104494 4.271009 0.0030819995
AL -3.635025 4.538672 -0.0078110501
AR -3.813800 4.101115 0.0084538758
AZ -4.415051 3.965125 0.0002287343
CA -4.159726 4.102446 0.0136493032
CO -3.860489 4.231415 -0.0013340964
At this point, I was also curious whether the do.call("rbind", list) step can be sped up, so I constructed a second tapply solution.
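One such variant, which produced the output shown here, uses array2DF() (the benchmark below also tries sapply(..., rbind)):

array2DF(tapply(D$measurement, D$state, sum_stats_vec)) |> head()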
Var1 min max mean
1 AK -4.104494 4.271009 0.0030819995
2 AL -3.635025 4.538672 -0.0078110501
3 AR -3.813800 4.101115 0.0084538758
4 AZ -4.415051 3.965125 0.0002287343
5 CA -4.159726 4.102446 0.0136493032
6 CO -3.860489 4.231415 -0.0013340964
The obscure: reduce
I thought that this should be it, but then I remembered that Reduce exists. The solution is somewhat similar to split/lapply.
reduce <- function(D) {
  state_list <- split(D$measurement, D$state)
  Reduce(function(x, y) {
    res <- sum_stats_vec(state_list[[y]])
    rbind(x, data.frame(state = y, min = res["min"], max = res["max"], mean = res["mean"]))
  }, names(state_list), init = NULL)
}
reduce(D) |> head()
state min max mean
min AK -4.104494 4.271009 0.0030819995
min1 AL -3.635025 4.538672 -0.0078110501
min2 AR -3.813800 4.101115 0.0084538758
min3 AZ -4.415051 3.965125 0.0002287343
min4 CA -4.159726 4.102446 0.0136493032
min5 CO -3.860489 4.231415 -0.0013340964
The unfair contender: Rfast
Pondering how these functions could be sped up in general, I remembered the package Rfast and managed to construct a solution using it.
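The Rfast-based helper itself is not shown here. As a rough, hypothetical reconstruction, it might look something like the following, assuming an Rfast::group(x, ina, method = ...) interface; this is an assumption on my part, so check the current Rfast documentation before relying on it:

# Hypothetical sketch, not the original code: assumes Rfast::group(x, ina, method = ...)
Rfast <- function(D) {
  ina <- as.integer(D$state)  # factor levels as integer group indices
  data.frame(
    state = levels(D$state),
    mean  = Rfast::group(D$measurement, ina, method = "sum") / tabulate(ina),
    min   = Rfast::group(D$measurement, ina, method = "min"),
    max   = Rfast::group(D$measurement, ina, method = "max")
  )
}
Rfast(D) |> head()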
state mean min max
1 AK 0.0030819995 -4.104494 4.271009
2 AL -0.0078110501 -3.635025 4.538672
3 AR 0.0084538758 -3.813800 4.101115
4 AZ 0.0002287343 -4.415051 3.965125
5 CA 0.0136493032 -4.159726 4.102446
6 CO -0.0013340964 -3.860489 4.231415
Pretty sure that this will be the fastest, maybe even competitive with the other big packages!
Benchmark
For better readability I reorder the benchmark results from microbenchmark according to median runtime, with a function provided by Dirk Eddelbuettel.
reorderMicrobenchmarkResults <- function(res, order = "median") {
  stopifnot("Argument 'res' must be a 'microbenchmark' result" = inherits(res, "microbenchmark"))
  smry <- summary(res)
  res$expr <- factor(res$expr,
    levels = levels(res$expr)[order(smry[["median"]])],
    ordered = TRUE
  )
  res
}
First up is the “small” dataset with 1e6 rows. I added the dplyr and data.table results as references.
sum_stats_list <- function(x) list(min = min(x), max = max(x), mean = mean(x))
sum_stats_tibble <- function(x) tibble::tibble(min = min(x), max = max(x), mean = mean(x))

bench1e6 <- microbenchmark::microbenchmark(
  aggregate = aggregate(measurement ~ state, data = D, FUN = sum_stats_vec),
  split_lapply = split_lapply(D),
  array2DF_by = array2DF(by(D$measurement, D$state, sum_stats_vec)),
  raw_by = by(D$measurement, D$state, sum_stats_vec),
  docall_tapply = do.call("rbind", tapply(D$measurement, D$state, sum_stats_vec)),
  sapply_tapply = sapply(tapply(D$measurement, D$state, sum_stats_vec), rbind),
  array2DF_tapply = array2DF(tapply(D$measurement, D$state, sum_stats_vec)),
  reduce = reduce(D),
  Rfast = Rfast(D),
  dplyr = D |> dplyr::group_by(state) |> dplyr::summarise(sum_stats_tibble(measurement)) |> dplyr::ungroup(),
  datatable = D[, .(sum_stats_list(measurement)), by = state],
  times = 25
)
Warning in microbenchmark::microbenchmark(aggregate = aggregate(measurement ~ :
less accurate nanosecond times to avoid potential integer overflows
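The benchmark object is then passed through the reordering helper from above before displaying it, for example:

# reorder by median runtime before displaying (helper defined earlier)
reorderMicrobenchmarkResults(bench1e6)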
First off, I was very surprised by the bad performance of aggregate. I looked at the source code and it appears to be a fancier lapply/split type of function with a lot of if/else branches and for loops, which slow it down heavily. For the benchmark with the bigger dataset, I actually discarded the function because it was way too slow.
Apart from that, there are three groups. Rfast and data.table are the fastest. The second group consists of the tapply versions. I am quite pleased that the data frame building via do.call, sapply, and array2DF is very much comparable, because I really like my array2DF discovery. The remaining solutions are pretty much on par with each other. I am surprised, though, that dplyr falls behind many of the base solutions.2
Moving on to the file with 100 million rows to see if size makes a difference.
Again we see three groups, but this time with clearer cut-offs. Rfast and data.table dominate, and Rfast actually has a slight edge! The second group consists of tapply, reduce, and dplyr. Surprisingly, by falls behind here, together with split/lapply.
Update (2024-01-09)
I managed to run some of the functions on a file with 1e9 rows.
The previously fastest base solutions fall off a little bit but are, in my opinion, still very good and still comparable with dplyr! Also, I learned that one can reorder microbenchmark results with the print command!
Summary
This was a fun little exercise, and I think I learned a lot of new things about base R, especially the existence of array2DF!
What was surprising is how competitive base R actually is with the “big guns”. I was expecting a much bigger margin between data.table and the base solutions, but that was not the case.
Footnotes
Also inspired by a post by Danielle Navarro about the cultural loss of today's serious blogging business.↩︎