hdfs - Hadoop HA NameNode goes down with the error: flush failed for required journal (JournalAndStream(mgr=QJM to [<ip>:8485, <ip>:8485, <ip>:8485])) -


The Hadoop NameNode goes down once every day with the following error:

    FATAL namenode.FSEditLog (JournalSet.java:mapJournalsAndReportErrors(398)) -
    Error: flush failed for required journal (JournalAndStream(mgr=QJM to
    [<ip>:8485, <ip>:8485, <ip>:8485], stream=QuorumOutputStream starting at txid <>))
    java.io.IOException: Timed out waiting 20000ms for a quorum of nodes to respond.
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:137)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.flushAndSync(QuorumOutputStream.java:107)
        at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:113)
        at ...

Can anyone suggest what needs to be done to resolve this issue?

I am using VMs for the journal nodes and the master nodes. Could that be causing the issue?

From the error you pasted, it appears the journal nodes could not talk to the NN in a timely manner. What was going on in the cluster at the time of the event?

Since you mention the nodes are VMs, my guess would be an overloaded hypervisor, or that the VMs had trouble talking to the NN, JN, and ZK quorum.
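
If the JournalNodes are only transiently slow (for example, due to hypervisor contention), one common stopgap, though not a fix for the underlying I/O or network problem, is to give the quorum more time to acknowledge edit-log writes. The 20000ms in your stack trace is the default value of dfs.qjournal.write-txns.timeout.ms. A minimal sketch of raising it in hdfs-site.xml follows; the 60000 value is an arbitrary illustration, so tune it to your environment and restart the NameNodes afterwards:

    <!-- hdfs-site.xml on both HA NameNodes: allow the JournalNode quorum
         more time to acknowledge edit-log writes before the NameNode
         treats the flush as failed and aborts. Default is 20000 ms;
         60000 here is purely illustrative. -->
    <property>
      <name>dfs.qjournal.write-txns.timeout.ms</name>
      <value>60000</value>
    </property>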

