hdfs - Will the replicas in Hadoop occupy the NameNode's memory?
We know that each file in HDFS occupies 300 bytes of memory in the NameNode. Since each file has 2 other replicas, does 1 file in total occupy 900 bytes of memory in the NameNode, or do replicas not occupy memory in the NameNode?
Looking at the optimisation of NameNode memory usage and performance done in HADOOP-1687, you can see that the memory usage of blocks is multiplied by the replication factor. However, the memory usage of files and directories does not have an increased cost based on replication.
The number of bytes used by a block prior to that change (i.e. in Hadoop 0.13) was 152 + 72 * replication, giving a figure of 368 bytes per block at the default replication setting of 3. Files typically used 250 bytes, and directories 290 bytes, both regardless of the replication setting.
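As a back-of-envelope sketch, the pre-0.15 figures above can be turned into a simple estimator. The constants are the ones quoted in this answer (and so apply to Hadoop 0.13-era internals, not necessarily current releases):

```python
# Rough NameNode memory estimate using the Hadoop 0.13-era figures quoted
# above: a block costs 152 + 72 * replication bytes, while files (~250 bytes)
# and directories (~290 bytes) cost the same regardless of replication.

BLOCK_BASE = 152        # fixed per-block cost (bytes)
BLOCK_PER_REPLICA = 72  # additional bytes per replica of a block
FILE_BYTES = 250        # per-file cost, independent of replication
DIR_BYTES = 290         # per-directory cost, independent of replication

def block_bytes(replication):
    """Bytes of NameNode memory for one block at a given replication factor."""
    return BLOCK_BASE + BLOCK_PER_REPLICA * replication

def namenode_bytes(num_files, num_blocks, num_dirs, replication=3):
    """Total estimated NameNode memory for a namespace of this shape."""
    return (num_files * FILE_BYTES
            + num_blocks * block_bytes(replication)
            + num_dirs * DIR_BYTES)

print(block_bytes(3))           # 368 bytes per block at the default replication of 3
print(namenode_bytes(1, 1, 0))  # one single-block file: 250 + 368 = 618 bytes
```

This makes the answer to the question concrete: tripling the replication factor does not triple the per-file cost, only the per-replica portion of the per-block cost.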
The improvements were included in 0.15 (which did deliver a per-replication saving, but there is still a per-replication cost).
I haven't seen any other references indicating that the per-replication memory usage has been removed.