hadoop - Manually splitting and compressing input for Amazon EMR


Instead of using hadoop-lzo to index the LZO input file, I decided to split the input into chunks and compress each one with LZO to a size close to 128 MB (since that is the default block size on Amazon's distribution[1]).

Is there anything wrong, from a cluster performance perspective, with providing input that is pre-split and compressed to a size close to the default HDFS block size?
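For reference, a minimal sketch of that splitting step, assuming line-oriented input and that the lzop command-line tool is installed; the input path, the 120 MB target, and the part-naming scheme are illustrative assumptions, not anything the question specifies:

```python
import subprocess

# Hypothetical input path and raw chunk target -- assumptions for this
# sketch, not values prescribed by EMR. Raw chunks are kept a bit under
# 128 MB so each compressed .lzo file fits within a single HDFS block.
INPUT_PATH = "input.log"
TARGET_BYTES = 120 * 1024 * 1024


def split_and_compress(path=INPUT_PATH, target=TARGET_BYTES):
    """Split a line-oriented file into ~target-byte chunks and LZO-compress each."""
    part = 0
    written = 0
    dst = None
    with open(path, "rb") as src:
        for line in src:  # split on newlines so no record straddles two files
            if dst is None or written >= target:
                if dst is not None:
                    dst.close()
                    # lzop writes <chunk>.lzo; -U deletes the raw chunk afterwards
                    subprocess.run(
                        ["lzop", "-U", f"{path}.part{part:05d}"], check=True
                    )
                    part += 1
                dst = open(f"{path}.part{part:05d}", "wb")
                written = 0
            dst.write(line)
            written += len(line)
    if dst is not None:
        dst.close()
        subprocess.run(["lzop", "-U", f"{path}.part{part:05d}"], check=True)


if __name__ == "__main__":
    split_and_compress()
```

Note that this caps the *raw* chunk size, so the compressed files come out well under the block size; to land the compressed files themselves near 128 MB, as the question describes, the raw target would need to be scaled up by the expected LZO compression ratio for the data.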

