This article shows how to install three ZooKeeper instances on a single machine and combine them into a ZooKeeper cluster. Note that only the Linux setup is covered here; the Windows procedure is similar, and readers can try it on their own.
Make three copies of the apache-zookeeper-3.9.1-bin package downloaded in the previous section and rename them zookeeper1, zookeeper2, and zookeeper3:
# Copy the package
hxstrive@localhost:~$ cp -r apache-zookeeper-3.9.1-bin zookeeper1
hxstrive@localhost:~$ cp -r apache-zookeeper-3.9.1-bin zookeeper2
hxstrive@localhost:~$ cp -r apache-zookeeper-3.9.1-bin zookeeper3
# Check the result
hxstrive@localhost:~$ ls -ls | grep zookeeper
4 drwxrwxr-x 8 hxstrive hxstrive     4096 10月 27 13:31 apache-zookeeper-3.9.1-bin
19848 -rwxrw-rw- 1 hxstrive hxstrive 20323219 10月 27 12:59 apache-zookeeper-3.9.1-bin.tar.gz
4 drwxrwxr-x 8 hxstrive hxstrive     4096 10月 27 13:44 zookeeper1
4 drwxrwxr-x 8 hxstrive hxstrive     4096 10月 27 13:44 zookeeper2
4 drwxrwxr-x 8 hxstrive hxstrive     4096 10月 27 13:44 zookeeper3
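If you prefer, the copy step can also be done with a small loop. This is only a sketch, assuming the extracted package sits directly under the user's home directory:

for i in 1 2 3; do
  cp -r ~/apache-zookeeper-3.9.1-bin ~/zookeeper$i   # creates zookeeper1, zookeeper2, zookeeper3
done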
Go into each ZooKeeper installation directory and create a data subdirectory:
hxstrive@localhost:~$ mkdir ~/zookeeper1/data
hxstrive@localhost:~$ mkdir ~/zookeeper2/data
hxstrive@localhost:~$ mkdir ~/zookeeper3/data
In each of the data directories just created, create a myid file whose content is the unique ID of that ZooKeeper instance:
hxstrive@localhost:~$ echo 1 > ~/zookeeper1/data/myid
hxstrive@localhost:~$ echo 2 > ~/zookeeper2/data/myid
hxstrive@localhost:~$ echo 3 > ~/zookeeper3/data/myid
hxstrive@localhost:~$ cat ~/zookeeper1/data/myid
1
hxstrive@localhost:~$ cat ~/zookeeper2/data/myid
2
hxstrive@localhost:~$ cat ~/zookeeper3/data/myid
3
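The data directories and myid files for all three instances can also be created in one loop. A minimal sketch, assuming the zookeeper1/2/3 directories from the previous step:

for i in 1 2 3; do
  mkdir -p ~/zookeeper$i/data          # data directory for instance $i
  echo $i > ~/zookeeper$i/data/myid    # myid must match the server.$i entry in zoo.cfg
done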
As with the standalone deployment, go into the conf directory of each ZooKeeper installation, copy the zoo_sample.cfg configuration file to zoo.cfg, and then adjust the client port and the cluster member list. Each server.N entry has the form server.N=host:port1:port2, where port1 is used for follower-to-leader communication and port2 for leader election; because all three instances share one machine, each instance needs its own clientPort and its own pair of server ports.
zookeeper1/zoo.cfg:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/hxstrive/zookeeper1/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpHost=0.0.0.0
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
zookeeper2/zoo.cfg:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/hxstrive/zookeeper2/data
# the port at which the clients will connect
clientPort=2182
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpHost=0.0.0.0
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
zookeeper3/zoo.cfg:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/hxstrive/zookeeper3/data
# the port at which the clients will connect
clientPort=2183
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpHost=0.0.0.0
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
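The three files differ only in dataDir and clientPort (the server.N list is identical), so they can also be generated from zoo_sample.cfg with a short script. This is only a sketch, assuming the directory layout used above:

for i in 1 2 3; do
  cfg=~/zookeeper$i/conf/zoo.cfg
  cp ~/zookeeper$i/conf/zoo_sample.cfg "$cfg"
  # point each instance at its own data directory and client port
  sed -i "s|^dataDir=.*|dataDir=/home/hxstrive/zookeeper$i/data|" "$cfg"
  sed -i "s|^clientPort=.*|clientPort=$((2180 + i))|" "$cfg"
  # append the cluster member list (the same in all three files)
  cat >> "$cfg" <<'EOF'
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
EOF
done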
Start each ZooKeeper instance with the ./zkServer.sh start command:
zookeeper1:
hxstrive@localhost:~/zookeeper1/bin$ ./zkServer.sh start
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /home/hxstrive/zookeeper1/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
zookeeper2:
hxstrive@localhost:~/zookeeper2/bin$ ./zkServer.sh start
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /home/hxstrive/zookeeper2/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
zookeeper3:
hxstrive@localhost:~/zookeeper3/bin$ ./zkServer.sh start
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /home/hxstrive/zookeeper3/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
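The three start commands can also be run in one loop; a sketch assuming the same layout:

for i in 1 2 3; do
  ~/zookeeper$i/bin/zkServer.sh start   # each instance reads its own conf/zoo.cfg
done

Note that the cluster only serves requests once a quorum (at least two of the three servers) is running.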
Check the status of each instance with the ./zkServer.sh status command:
zookeeper1:
hxstrive@localhost:~/zookeeper1/bin$ ./zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /home/hxstrive/zookeeper1/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
zookeeper2:
hxstrive@localhost:~/zookeeper2/bin$ ./zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /home/hxstrive/zookeeper2/bin/../conf/zoo.cfg
Client port found: 2182. Client address: localhost. Client SSL: false.
Mode: leader
zookeeper3:
hxstrive@localhost:~/zookeeper3/bin$ ./zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /home/hxstrive/zookeeper3/bin/../conf/zoo.cfg
Client port found: 2183. Client address: localhost. Client SSL: false.
Mode: follower
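One instance reports leader and the other two report follower, which shows the election succeeded; which instance becomes leader can differ from run to run. The status check can also be looped, as a quick sketch:

for i in 1 2 3; do
  ~/zookeeper$i/bin/zkServer.sh status   # reads clientPort from each instance's own zoo.cfg
done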
How do we verify the cluster? We open three zkCli connections, one per instance: create a node at path /hxstrive through the zookeeper1 connection, then check that the newly created node is visible from zookeeper2 and zookeeper3:
# Create the node
# Note: zkCli connects to the local 2181 port by default
hxstrive@localhost:~/zookeeper1/bin$ ./zkCli.sh -server 127.0.0.1:2181
[zk: localhost:2181(CONNECTED) 2] create /hxstrive www.hxstrive.com
Created /hxstrive
[zk: localhost:2181(CONNECTED) 3] ls /
[hxstrive, zookeeper]

# Connect to zookeeper2
hxstrive@localhost:~/zookeeper1/bin$ ./zkCli.sh -server 127.0.0.1:2182
...
[zk: 127.0.0.1:2182(CONNECTED) 0] ls /
[hxstrive, zookeeper]
[zk: 127.0.0.1:2182(CONNECTED) 1]

# Connect to zookeeper3
hxstrive@localhost:~/zookeeper1/bin$ ./zkCli.sh -server 127.0.0.1:2183
...
WatchedEvent state:SyncConnected type:None path:null zxid: -1
[zk: 127.0.0.1:2183(CONNECTED) 0] ls /
[hxstrive, zookeeper]
[zk: 127.0.0.1:2183(CONNECTED) 1]
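Alternatively, the whole cluster can be checked from a single session by passing zkCli a connection string that lists all three servers; the client picks one of them, and the data should be the same whichever server it lands on. A sketch:

hxstrive@localhost:~/zookeeper1/bin$ ./zkCli.sh -server 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183

Running ls / in that session should again list [hxstrive, zookeeper].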