In this post we compare the speed of two ways of summarising a table in R: the tidyverse's dplyr summarise function, and data.table's list syntax.

1) Install the additional packages to your computer (you normally just have to do this once).

2) Now load up the packages you will need for this session using the library or require function.

Then generate a random dataset that we will use to summarise:

# generating random dataset
# in this script we generate a random dataset of pipes and bursts that we can use to test scripts

3) Test the speed of dplyr's summarise function.

# testing dplyr summarising function
tic("dplyr summarise function") # starting the timer
dplyr_summary_table <- summarise(example_dataframe,
  bursts_kkm = 1000 * (sum(bursts) / sum(length)))
toc()
# dplyr summarise function: 1.594 sec elapsed

4) Test the speed of data.table's list function.

# testing data.table summarising function
example_data.table <- setDT(example_dataframe) # turn the data frame into a data.table, which is a special format of table
tic("data.table list function") # starting the timer
data.table_summary_table <- example_data.table[, list(
  bursts_kkm = 1000 * (sum(bursts) / sum(length)))]
toc()
# data.table list function: 0.723 sec elapsed

So, on my relatively fast laptop, it seems that data.table's list approach is just over twice as fast as the tidyverse approach. With these small datasets, in practice there is only a second or so difference.
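The steps above can be pulled together into one self-contained, runnable sketch. This is a minimal version under stated assumptions: the column names (length, bursts), the units (pipe lengths in km) and the random-data parameters are illustrative stand-ins, not the post's actual dataset, and it assumes the dplyr, data.table and tictoc packages are installed.

```r
# Minimal benchmark sketch: dplyr summarise vs data.table list syntax.
# Assumes dplyr, data.table and tictoc are installed.
library(dplyr)
library(data.table)
library(tictoc)

set.seed(42) # make the random dataset reproducible

# Hypothetical random dataset of pipes: lengths in km (assumed units)
# and a Poisson-distributed count of bursts per pipe.
n <- 1e5
example_dataframe <- data.frame(
  length = runif(n, min = 0.01, max = 1), # pipe lengths in km
  bursts = rpois(n, lambda = 0.1)         # bursts recorded per pipe
)

# Time the tidyverse approach.
tic("dplyr summarise function")
dplyr_summary <- example_dataframe %>%
  summarise(bursts_kkm = 1000 * (sum(bursts) / sum(length)))
toc()

# Time the data.table approach: setDT converts in place,
# then the summary is computed in the j argument as a list.
tic("data.table list function")
example_dt <- setDT(example_dataframe)
dt_summary <- example_dt[, list(bursts_kkm = 1000 * (sum(bursts) / sum(length)))]
toc()

# Both approaches should produce the same burst rate per 1000 km.
print(dplyr_summary$bursts_kkm)
print(dt_summary$bursts_kkm)
```

On a dataset this small both calls finish quickly; the gap the post describes becomes more visible as the number of rows (and groups, if you add a grouping column) grows.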