hadoop - Manually splitting and compressing input for Amazon EMR


Instead of using hadoop-lzo to index an LZO input file, I decided to split the input into chunks and compress each one with LZO so that the compressed size comes out close to 128 MB (since that is the default block size on Amazon's distribution [1]).
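For reference, here is a minimal sketch of the kind of pre-splitting I have in mind, in Python. The 128 MB block size matches the default mentioned above, but the assumed LZO compression ratio and the file names are placeholders for illustration; it shells out to the stock lzop CLI and extends each chunk to the next newline so records are not cut mid-line.

```python
import subprocess

BLOCK_SIZE = 128 * 1024 * 1024   # target HDFS block size (128 MB)
ASSUMED_LZO_RATIO = 2.0          # raw bytes per compressed byte -- a guess,
                                 # measure this on a sample of your data

def split_and_compress(input_path, out_prefix="part"):
    """Split input_path into raw chunks sized so that, after LZO
    compression, each part lands near one HDFS block, then compress
    each chunk with lzop (producing part-00000.lzo, part-00001.lzo, ...)."""
    raw_chunk_size = int(BLOCK_SIZE * ASSUMED_LZO_RATIO)
    part = 0
    with open(input_path, "rb") as src:
        while True:
            chunk = src.read(raw_chunk_size)
            if not chunk:
                break
            # Extend to the next newline so no record is split mid-line.
            if chunk[-1:] != b"\n":
                chunk += src.readline()
            chunk_path = f"{out_prefix}-{part:05d}"
            with open(chunk_path, "wb") as dst:
                dst.write(chunk)
            # lzop writes chunk_path + ".lzo"; -U deletes the raw chunk
            # after successful compression.
            subprocess.run(["lzop", "-U", chunk_path], check=True)
            part += 1

if __name__ == "__main__":
    split_and_compress("input.log")
```

The idea behind sizing each compressed file just under one block is that plain (unindexed) LZO is not splittable, so each .lzo file becomes a single input split handled by one mapper, with no need for an index.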

Is there anything wrong, from a cluster performance perspective, with providing input that is already split into compressed files whose size is close to the default HDFS block size?


