hadoop - Manually splitting and compressing input for Amazon EMR


Instead of using hadoop-lzo to index the LZO input file, I decided to split the input into chunks and compress each with LZO so that the compressed size comes close to 128 MB (the default block size on Amazon's distribution [1]).

Is there anything wrong, from a cluster performance perspective, with providing input that is pre-split and compressed to a size close to the default HDFS block size?
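
For reference, below is a minimal sketch of the pre-splitting approach described above. The file name, the compression ratio, and the use of the `lzop` command-line tool are all assumptions for illustration; in practice you would measure the ratio on a sample of your data so the compressed chunks land near 128 MB.

    #!/usr/bin/env python3
    """Sketch: split a newline-delimited file into chunks whose
    *compressed* size lands near the 128 MB HDFS block size, then
    compress each chunk with lzop. Assumes lzop is on PATH and a
    rough LZO compression ratio estimated from a data sample --
    both are assumptions, not part of the original question."""
    import subprocess

    INPUT = "input.log"              # hypothetical input file
    TARGET = 128 * 1024 * 1024       # default HDFS block size on EMR
    RATIO = 0.4                      # assumed LZO ratio; measure yours
    RAW_LIMIT = int(TARGET / RATIO)  # raw bytes per chunk -> ~128 MB compressed

    def flush(lines, part):
        """Write one chunk and compress it in place with lzop."""
        name = f"part-{part:05d}"
        with open(name, "wb") as out:
            out.writelines(lines)
        # -U deletes the uncompressed chunk after producing name.lzo
        subprocess.run(["lzop", "-U", name], check=True)

    chunk, size, part = [], 0, 0
    with open(INPUT, "rb") as src:
        for line in src:
            chunk.append(line)
            size += len(line)
            if size >= RAW_LIMIT:    # cut chunks on line boundaries only
                flush(chunk, part)
                chunk, size, part = [], 0, part + 1
    if chunk:                        # flush the final partial chunk
        flush(chunk, part)

Each resulting `part-NNNNN.lzo` file is then a single input split: plain LZO files are not splittable without an index, so Hadoop assigns one mapper per file, which is exactly the behavior this pre-splitting relies on.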

