This document is published with MrDoc.
Supervisor-managed Kafka/ZooKeeper cluster deployment
# kafka zookeeper cluster deployment

## Basic information

- OS: Ubuntu 20.04
- Kafka version: 2.5.1
- ZooKeeper version: 3.5.8
- OpenJDK version: 1.8.0_292
- Servers:

| No. | IP            | broker_id | Port |
| --- | ------------- | --------- | ---- |
| 1   | 192.168.1.100 | 0         | 9092 |
| 2   | 192.168.1.101 | 1         | 9092 |
| 3   | 192.168.1.102 | 2         | 9092 |

## Deployment

### Install supervisord

```
apt-get install -y supervisor
systemctl enable supervisor
systemctl start supervisor
```

### zookeeper

- ZooKeeper configuration

```shell
# After unpacking to the install directory /opt/zookeeper, set the environment variables
cat >> /etc/profile <<\EOF
export ZOOKEEPER_HOME=/opt/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
EOF
source /etc/profile
```

```shell
# Configure the zk ensemble: write one server.$id line per node; the matching
# id must also be written to each node's myid file below.
# server.$A=$B:$C:$D
# A = server id, B = server IP, C = peer data-exchange port, D = leader election port
cat > $ZOOKEEPER_HOME/conf/zoo.cfg <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$ZOOKEEPER_HOME/data
dataLogDir=$ZOOKEEPER_HOME/logs
clientPort=2181
server.1=192.168.1.100:2888:3888
server.2=192.168.1.101:2888:3888
server.3=192.168.1.102:2888:3888
EOF

mkdir -p $ZOOKEEPER_HOME/{data,logs}

# on server.1
echo 1 > $ZOOKEEPER_HOME/data/myid
# on server.2
echo 2 > $ZOOKEEPER_HOME/data/myid
# on server.3
echo 3 > $ZOOKEEPER_HOME/data/myid
```

```shell
# JVM settings (optional)
cat > $ZOOKEEPER_HOME/conf/java.env <<\EOF
export JVMFLAGS="-Xms4096m -Xmx4096m $JVMFLAGS"
EOF
```

- Configure supervisor

Use supervisor to manage the startup of ZooKeeper and Kafka; ZooKeeper's priority must be lower than Kafka's so that ZooKeeper starts first.

```shell
cat > /etc/supervisor/conf.d/zookeeper.conf <<EOF
[program:zookeeper]
; set environment according to the actual environment variables
directory=$ZOOKEEPER_HOME/bin
command=$ZOOKEEPER_HOME/bin/zkServer.sh start-foreground
autostart=true
startsecs=30
autorestart=unexpected
startretries=3
user=root
priority=90
redirect_stderr=true
stdout_logfile_maxbytes=20MB
stdout_logfile_backups=20
; stdout log file; note that the program cannot start if the log directory
; does not exist, so create the directory manually (supervisord creates the
; log file itself)
stdout_logfile=$ZOOKEEPER_HOME/logs/zookeeperd.log
stopasgroup=false
killasgroup=false
EOF

# Restart supervisor
supervisorctl reload
# Check zookeeper status
supervisorctl status zookeeper
```

- Verification

```shell
# Connect to the local zk server with the client
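# (added pre-check, assuming the default 4lw whitelist, which permits "srvr"
#  in ZooKeeper 3.5.x) verify the server is answering before using the client:
#   echo srvr | nc localhost 2181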
$ZOOKEEPER_HOME/bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
# Connect to the remote zk servers
[zk: localhost:2181(CONNECTED) 1] connect 192.168.1.100:2181
[zk: localhost:2181(CONNECTED) 2] connect 192.168.1.102:2181
[zk: localhost:2181(CONNECTED) 3] connect 192.168.1.101:2181
```

```
# Check the zk server state (follower or leader)
$ZOOKEEPER_HOME/bin/zkServer.sh status
```

### kafka

- Configure Kafka

```shell
# After unpacking to the install directory /opt/kafka, set the environment variables
cat >> /etc/profile <<\EOF
export KAFKA_HOME=/opt/kafka
export PATH=$PATH:$KAFKA_HOME/bin
EOF
source /etc/profile
```

```shell
# Set the id variable assigned to this node (see the table above)
export KAFKA_ID=0

# Write the configuration
cat > $KAFKA_HOME/config/server.properties <<EOF
broker.id=$KAFKA_ID
num.network.threads=12
num.io.threads=16
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
queued.max.requests=32
fetch.purgatory.purge.interval.requests=200
producer.purgatory.purge.interval.requests=200
log.dirs=$KAFKA_HOME/kafka-logs
num.partitions=3
num.recovery.threads.per.data.dir=1
auto.create.topics.enable=false
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.flush.interval.ms=10000
log.flush.interval.messages=20000
log.flush.scheduler.interval.ms=2000
log.roll.hours=168
zookeeper.connect=192.168.1.100:2181,192.168.1.102:2181,192.168.1.101:2181
group.initial.rebalance.delay.ms=3000
num.replica.fetchers=3
replica.fetch.max.bytes=20971520
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
replica.lag.time.max.ms=10000
controller.socket.timeout.ms=30000
controller.message.queue.size=10
default.replication.factor=2
auto.leader.rebalance.enable=true
unclean.leader.election.enable=true
delete.topic.enable=true
zookeeper.session.timeout.ms=30000
zookeeper.connection.timeout.ms=30000
zookeeper.sync.time.ms=4000
message.max.bytes=20971520
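# (added note) unclean.leader.election.enable=true above favors availability
# over durability: after a failure, an out-of-sync replica may be elected
# leader, which can lose acknowledged messages. Set it to false if that
# trade-off is unacceptable for your workload.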
offsets.retention.minutes=20160
EOF
```

```shell
# Kafka heap size - tune as needed
sed -i 's/-Xmx1G -Xms1G/-Xmx6G -Xms6G/' $KAFKA_HOME/bin/kafka-server-start.sh
```

- Configure supervisor

Use supervisor to manage the startup of ZooKeeper and Kafka; ZooKeeper's priority must be lower than Kafka's so that ZooKeeper starts first.

```shell
cat > /etc/supervisor/conf.d/kafka.conf <<EOF
[program:kafka]
; set environment according to the actual environment variables
; environment=PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/opt/zookeeper/bin:/opt/zookeeper/bin:/opt/kafka/bin"
directory=$KAFKA_HOME/bin
command=$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties
autostart=true
startsecs=30
autorestart=true
startretries=3
user=root
priority=999
redirect_stderr=true
stdout_logfile_maxbytes=20MB
stdout_logfile_backups=20
; stdout log file; note that the program cannot start if the log directory
; does not exist, so create the directory manually (supervisord creates the
; log file itself)
stdout_logfile=$KAFKA_HOME/kafka-logs/kafkad.log
stopasgroup=false
killasgroup=false
EOF

# Re-read the supervisor configuration files
supervisorctl reread
# Apply the changes and restart programs whose configuration changed
supervisorctl update
```

- Verification

```shell
# Create a topic; it is propagated to the other nodes
kafka-topics.sh --create --bootstrap-server 192.168.1.100:9092 --replication-factor 3 --partitions 1 --topic test-after-install
```

```shell
# List topics on each node
kafka-topics.sh --list --bootstrap-server 192.168.1.100:9092
test-after-install
kafka-topics.sh --list --bootstrap-server 192.168.1.102:9092
test-after-install
kafka-topics.sh --list --bootstrap-server 192.168.1.101:9092
test-after-install
```

```shell
# Produce a message
kafka-console-producer.sh --broker-list 192.168.1.101:9092 --topic test-after-install
# message content (typed at the > prompt)
>install succeed.
```

```shell
# Consume the message from each node
kafka-console-consumer.sh --bootstrap-server 192.168.1.101:9092 --topic test-after-install --from-beginning
install succeed.
Processed a total of 1 messages
kafka-console-consumer.sh --bootstrap-server 192.168.1.100:9092 --topic test-after-install --from-beginning
install succeed.
Processed a total of 1 messages
kafka-console-consumer.sh --bootstrap-server 192.168.1.102:9092 --topic test-after-install --from-beginning
install succeed.
Processed a total of 1 messages
```
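The per-node values above (the `myid` file, `broker.id`, and the `zookeeper.connect` string) are easy to get wrong when the steps are repeated by hand on three hosts. As a sketch, a small Python helper (hypothetical, not part of the original deployment; names are illustrative) can derive all of them from the server table in one place:

```python
# Derive per-node Kafka/ZooKeeper settings from the server table above.
# Hypothetical helper: SERVERS mirrors the "Basic information" table.

SERVERS = [
    # (seq, ip, broker_id) - seq doubles as the ZooKeeper myid
    (1, "192.168.1.100", 0),
    (2, "192.168.1.101", 1),
    (3, "192.168.1.102", 2),
]

def zoo_cfg_servers(servers, peer_port=2888, election_port=3888):
    """Render the server.N lines for zoo.cfg."""
    return "\n".join(
        f"server.{seq}={ip}:{peer_port}:{election_port}"
        for seq, ip, _ in servers
    )

def zookeeper_connect(servers, client_port=2181):
    """Render the zookeeper.connect value for server.properties."""
    return ",".join(f"{ip}:{client_port}" for _, ip, _ in servers)

def node_settings(servers, my_ip):
    """Return myid and broker.id for the node whose address is my_ip."""
    for seq, ip, broker_id in servers:
        if ip == my_ip:
            return {"myid": seq, "broker.id": broker_id}
    raise ValueError(f"{my_ip} is not in the server table")

if __name__ == "__main__":
    print(zoo_cfg_servers(SERVERS))
    print("zookeeper.connect=" + zookeeper_connect(SERVERS))
    print(node_settings(SERVERS, "192.168.1.101"))
```

Running `node_settings(SERVERS, ...)` on each host yields the values to write into that host's `myid` file and `server.properties`, so the table stays the single source of truth.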
zhangky
January 5, 2022, 15:37