hdfs - Hadoop HA NameNode goes down with the error: flush failed for required journal (JournalAndStream(mgr=QJM to [<ip>:8485, <ip>:8485, <ip>:8485]))


The Hadoop NameNode goes down once every day.

    FATAL namenode.FSEditLog (JournalSet.java:mapJournalsAndReportErrors(398)) - **Error: flush failed for required journal** (JournalAndStream(mgr=QJM to [<ip>:8485, <ip>:8485, <ip>:8485], stream=QuorumOutputStream starting at txid <>))
    java.io.IOException: Timed out waiting 20000ms for a quorum of nodes to respond.
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:137)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.flushAndSync(QuorumOutputStream.java:107)
        at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:113)
        at ...

Can anyone suggest what needs to be done to resolve this issue?

I am using VMs for the journal nodes and the master nodes. Could that be causing the issue?

From the error you pasted, it appears the journal nodes did not talk to the NN in a timely manner. What was going on at the time of the event?
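One quick check worth doing: a long JVM or host pause on the NameNode (e.g. garbage collection, or the whole VM being stalled by the hypervisor) is a common reason the NN fails to flush to the journals within the timeout, and Hadoop's JvmPauseMonitor logs such pauses. A hedged sketch; the log path is an assumption and will vary by distribution:

    # Look for long JVM/host pauses around the time of the crash
    # (log path is an example; adjust for your install)
    grep "Detected pause in JVM or host machine" /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log

If you see pauses of several seconds clustered around the failure time, the problem is on the NN host (or its VM), not on the JournalNodes.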

Since you mention the nodes are VMs, my guess is that the hypervisor was overloaded, or the NN had trouble talking to the JN and ZK quorum.
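If the stalls are transient (e.g. hypervisor contention at roughly the same time each day), one common mitigation is to raise the QJM write timeout on both NameNodes; the 20000ms in your stack trace is the default value of dfs.qjournal.write-txns.timeout.ms. A minimal hdfs-site.xml sketch; note this only buys headroom and does not fix the underlying I/O or scheduling stall:

    <!-- hdfs-site.xml on both NameNodes; a NameNode restart is required -->
    <property>
      <name>dfs.qjournal.write-txns.timeout.ms</name>
      <!-- default is 20000 (20s); 60000 gives slow JNs/VMs more headroom -->
      <value>60000</value>
    </property>

The longer-term fix is to give the JournalNodes and the NN dedicated, uncontended resources (or move them off oversubscribed hypervisors), since a required-journal flush failure will always bring the active NN down by design.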

