**Question**

I'm trying to use data.table to speed up processing of a large data.frame (300k x 60) made of several smaller merged data.frames. I'm new to data.table. My code so far is as follows:

```r
library(data.table)
a = data.table(index = 1:5,  a = rnorm(5, 10), b = rnorm(5, 10), z = rnorm(5, 10))
b = data.table(index = 6:10, a = rnorm(5, 10), b = rnorm(5, 10),
               c = rnorm(5, 10), d = rnorm(5, 10))
dt = merge(a, b, by = intersect(names(a), names(b)), all = TRUE)
dt$category = sample(letters[1:3], 10, replace = TRUE)
```

I wondered if there is a more efficient way than the following to summarize the data:

```r
summ = dt[i = TRUE,
          j = list(a = sum(a, na.rm = TRUE), b = sum(b, na.rm = TRUE),
                   c = sum(c, na.rm = TRUE), d = sum(d, na.rm = TRUE),
                   z = sum(z, na.rm = TRUE)),
          by = category]
```

I don't want to type 50 column calculations by hand, and `eval(paste(...))` seems clunky somehow. I had a look at the example below, but it seems a bit more complicated than I need. Thanks.

**Answer: How to summarize a data.table across multiple columns**

You can use a simple `lapply` statement with `.SD`:

```r
dt[, lapply(.SD, sum, na.rm = TRUE), by = category]
```

    category index b ...
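As a self-contained sketch of the answer's `.SD` approach: the example below seeds the RNG for reproducibility and adds `.SDcols` to restrict the summed columns (excluding the `index` id column). `.SDcols` is standard data.table, but using it here is my addition, not part of the original answer.

```r
library(data.table)

set.seed(1)  # reproducible example data
a <- data.table(index = 1:5,  a = rnorm(5, 10), b = rnorm(5, 10), z = rnorm(5, 10))
b <- data.table(index = 6:10, a = rnorm(5, 10), b = rnorm(5, 10),
                c = rnorm(5, 10), d = rnorm(5, 10))
dt <- merge(a, b, by = intersect(names(a), names(b)), all = TRUE)
dt$category <- sample(letters[1:3], 10, replace = TRUE)

# .SD is the Subset of Data for each group; lapply applies sum() to each
# of its columns. .SDcols limits .SD to the columns we want to aggregate,
# so the id column is not summed along with the data columns.
cols <- setdiff(names(dt), c("index", "category"))
summ <- dt[, lapply(.SD, sum, na.rm = TRUE), by = category, .SDcols = cols]
print(summ)
```

This scales to any number of columns without typing each calculation, which is exactly the concern raised in the question.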