I'm coming to you today with a question about the high availability of HDFS using ZooKeeper. I'm aware that there have already been plenty of topics on this subject, and I have read a lot of them. I've now spent 15 days browsing the forums without finding what I'm looking for (maybe I'm not looking in the right places either :-) ). I have followed the procedure three times here.

I launched my journalnodes, I started my DFSZKFailoverController, I formatted my first namenode, I copied the configuration of my first namenode to the two others with -bootstrapStandby, and I started my cluster. I may have done everything right, but when I kill one of my namenodes, none of the others takes over. Despite all this, and no obvious problems in the ZKFC or namenode logs, I can't get a namenode to take over from a dying namenode. Does anyone have any idea how to help me?

zoo.cfg:

# The number of milliseconds of each tick
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# the port at which the clients will connect
# the maximum number of client connections.
# increase this if you need to handle more clients
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
# The number of snapshots to retain in dataDir
# Set to "0" to disable auto purge feature

ConfiguredFailoverProxyProvider

systemd unit:

Description=Hadoop DFS namenode and datanode
After=syslog.target network.target remote-fs.target nss-lookup.target network-online.target

hadoop-hdfsuser-zkfc-node15-hdfs-spark-master.log (before I crash a namenode):

13:32:22,216 INFO .tools.DFSZKFailoverController: STARTUP_MSG:
STARTUP_MSG: Starting DFSZKFailoverController
STARTUP_MSG: build = -r b3cbbb467e22ea829b3808f4b7b01d07e0bf3842 compiled by 'rohithsharmaks' on T15:56Z
STARTUP_MSG: classpath = /apps/hadoop/etc/hadoop:/apps/hadoop/share/hadoop/common/lib/kerby-util-1.0.1.jar:/apps/hadoop/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/apps/hadoop/share/hado$
13:32:22,229 INFO .tools.DFSZKFailoverController: registered UNIX signal handlers for [TERM, HUP, INT]
13:32:22,628 INFO .tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at hdfs-0/10.10.10.15:9000
13:32:22,752 INFO : Client environment:java.vendor=Oracle Corporation
13:32:22,752 INFO : Client environment:=/apps/hadoop/etc/hadoop:/apps/hadoop/share/hadoop/common/lib/kerby-util-1.0.1.jar:/apps/hadoo$
13:32:22,753 INFO : Client environment:=/apps/hadoop/lib/native
13:32:22,753 INFO : Client environment:java.io.tmpdir=/tmp
13:32:22,753 INFO : Client environment:os.name=Linux
13:32:22,757 INFO : Client environment:user.name=hdfsuser
13:32:22,757 INFO : Client environment:=/home/hdfsuser
13:32:22,757 INFO : Client environment:user.dir=/home/hdfsuser
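To make the bring-up sequence I followed concrete, here is a sketch of the standard HA procedure using the Hadoop 3.x daemon syntax (the exact hosts each command runs on depend on the cluster layout; this is the generic sequence, not a transcript from my machines):

```shell
# On each journalnode host
hdfs --daemon start journalnode

# On the first namenode: format it and start it
hdfs namenode -format
hdfs --daemon start namenode

# On each of the other namenodes: copy the first namenode's metadata, then start
hdfs namenode -bootstrapStandby
hdfs --daemon start namenode

# Initialize the HA state znode in ZooKeeper (run once), then start a ZKFC
# alongside each namenode
hdfs zkfc -formatZK
hdfs --daemon start zkfc
```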
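Since ConfiguredFailoverProxyProvider comes up above: automatic failover depends on settings along these lines in hdfs-site.xml and core-site.xml. The nameservice name `mycluster` and the ZooKeeper hostnames below are placeholders, not my actual values:

```xml
<!-- hdfs-site.xml -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <!-- a failover will not complete unless some fencing method succeeds;
       shell(/bin/true) is only a stand-in to illustrate the key -->
  <name>dfs.ha.fencing.methods</name>
  <value>shell(/bin/true)</value>
</property>

<!-- core-site.xml -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
```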
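The zoo.cfg comments quoted above are from the stock sample configuration. With the values filled in and a three-node quorum added, it looks roughly like this (the dataDir path and server hostnames are illustrative):

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
maxClientCnxns=60
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
```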