amazon web services - ZooKeeper installation on multiple AWS EC2 instances


I am new to ZooKeeper and AWS EC2. I am trying to install ZooKeeper on 3 EC2 instances.

As per the ZooKeeper documentation, I have installed ZooKeeper on all 3 instances and created zoo.cfg with the below configuration:

    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/opt/zookeeper/data
    clientPort=2181
    server.1=localhost:2888:3888
    server.2=<public ip of ec2 instance 2>:2889:3889
    server.3=<public ip of ec2 instance 3>:2890:3890

I have also created the myid file on all 3 instances at /opt/zookeeper/data/myid per the guidelines.

I have a few queries below:

  1. Whenever I start the ZooKeeper server on each instance, it starts in standalone mode (as per the logs).

  2. Will the above configuration let the instances connect to each other? What are the ports 2889:3889 and 2890:3890 about? Do I need to configure something for them on the EC2 machines, or should I use other ports instead?

  3. Do I need to create a security group to open these connections? I am not sure how to do that for an EC2 instance.

  4. How do I confirm that all 3 ZooKeeper servers have started and can communicate with each other?

The ZooKeeper configuration is designed so that you can install the exact same configuration file on all servers in the cluster without modification, which makes operations a bit simpler. The component that specifies the configuration for the local node is the myid file.

The configuration you've defined is not one that can be shared across all servers. Each of the servers in the server list should bind to a private IP address that is accessible to the other nodes in the network. You are seeing your server start in standalone mode because you are binding to localhost, so the problem is that the other servers in the cluster can't reach you at localhost.

Your configuration should look more like this:

    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/opt/zookeeper/data
    clientPort=2181
    server.1=<private ip of ec2 instance 1>:2888:3888
    server.2=<private ip of ec2 instance 2>:2888:3888
    server.3=<private ip of ec2 instance 3>:2888:3888
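With that in place, restart ZooKeeper on each instance. As a minimal sketch, assuming a standard tarball install under /opt/zookeeper:

    # Run on each of the 3 instances after updating zoo.cfg and myid.
    /opt/zookeeper/bin/zkServer.sh restart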

The two ports listed in each server definition are, respectively, the quorum port and the election port, used by the ZooKeeper nodes to communicate with one another internally. There is no need to modify these ports, and you should try to keep them the same across servers for consistency.

Additionally, as I said, you should be able to share the exact same configuration file across instances. The only thing that should have to change is the myid file, as sketched below.
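A minimal sketch of setting it up, assuming the dataDir above (each ID must match the server.N entry for that instance):

    # On instance 1 (must match server.1 in zoo.cfg):
    echo 1 > /opt/zookeeper/data/myid
    # On instance 2:
    echo 2 > /opt/zookeeper/data/myid
    # On instance 3:
    echo 3 > /opt/zookeeper/data/myid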

You will need to create a security group and open the client port so it is available for clients, and the quorum/election ports so they are accessible by the other ZooKeeper servers; see the sketch below.
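A rough sketch using the AWS CLI (the security group ID is a hypothetical placeholder; a self-referencing rule is one common way to allow traffic between instances in the same group):

    # Hypothetical security group ID; replace with your own.
    SG=sg-0123456789abcdef0

    # Client port (2181), reachable by clients in the same group.
    aws ec2 authorize-security-group-ingress --group-id "$SG" \
        --protocol tcp --port 2181 --source-group "$SG"

    # Quorum (2888) and election (3888) ports, open between the servers.
    aws ec2 authorize-security-group-ingress --group-id "$SG" \
        --protocol tcp --port 2888 --source-group "$SG"
    aws ec2 authorize-security-group-ingress --group-id "$SG" \
        --protocol tcp --port 3888 --source-group "$SG"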

Finally, you might want a UI to manage the cluster. Netflix makes a decent UI (Exhibitor) that will give you a view of the cluster and help with cleaning old logs and storing snapshots to S3 (ZooKeeper takes snapshots but does not delete old transaction logs, so your disk will eventually fill up if they are not removed). Once everything is configured correctly, you should be able to see the ZooKeeper servers connecting to each other in the logs as well.
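As for confirming that all 3 servers are up and talking to each other (question 4), one simple check, assuming the standard scripts and that the four-letter-word commands are enabled, is to ask each server for its mode; in a healthy ensemble one server reports leader and the others follower:

    # Run on each instance. "Mode: leader" or "Mode: follower" means
    # the ensemble is up; "Mode: standalone" means it is not.
    /opt/zookeeper/bin/zkServer.sh status

    # Alternatively, query the stat four-letter-word command directly:
    echo stat | nc localhost 2181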

Edit

@czerasz notes that starting from version 3.4.0 you can use the autopurge.snapRetainCount and autopurge.purgeInterval directives to keep snapshots clean.
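For example, a minimal zoo.cfg addition (these retention values are illustrative, not from the original answer):

    # Keep the 3 most recent snapshots and purge the rest every hour.
    autopurge.snapRetainCount=3
    autopurge.purgeInterval=1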

@chomp notes that some users have had to use 0.0.0.0 as the local server IP for the ZooKeeper configuration to work on EC2. In other words, replace <private ip of ec2 instance 1> with 0.0.0.0 in the configuration file on instance 1. This runs counter to the way ZooKeeper configuration files are designed, but it may be necessary on EC2.
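So the server list in zoo.cfg on instance 1 would look like this (the other instances would presumably substitute 0.0.0.0 for their own entry in the same way):

    # zoo.cfg on instance 1: bind the local entry to all interfaces.
    server.1=0.0.0.0:2888:3888
    server.2=<private ip of ec2 instance 2>:2888:3888
    server.3=<private ip of ec2 instance 3>:2888:3888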

