hadoop - Manually splitting and compressing input for Amazon EMR


Instead of using hadoop-lzo to index the LZO input file, I decided to split the input into chunks and compress each one with LZO so that the compressed size is close to 128 MB (the default block size on the Amazon distribution [1]).

Is there anything wrong (from a cluster performance perspective) with providing input that is pre-split and compressed to a size close to the default HDFS block size?

