Kafka 4.0.0 (KRaft mode)
1. Install Kafka (a JDK environment is required)
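For example, a minimal sketch assuming installation under /opt and the Scala 2.13 binary build (the download URL and paths are illustrative); Kafka 4.0 requires Java 17 or newer for the broker:

wget https://downloads.apache.org/kafka/4.0.0/kafka_2.13-4.0.0.tgz
tar -xzf kafka_2.13-4.0.0.tgz -C /opt
cd /opt/kafka_2.13-4.0.0
java -version   # verify that a JDK 17+ is on the PATH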
1.1 Edit the configuration file
# server.properties on node 1 (192.168.85.128)
node.id=1
log.dirs=/data/kafka-logs
listeners=PLAINTEXT://192.168.85.128:9092,CONTROLLER://192.168.85.128:9093
controller.listener.names=CONTROLLER
inter.broker.listener.name=PLAINTEXT
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
process.roles=broker,controller
controller.quorum.voters=1@192.168.85.128:9093,2@192.168.85.129:9093,3@192.168.85.130:9093
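The other two nodes use the same settings, changing only node.id and the listener addresses. For example, on node 2 (values taken from controller.quorum.voters above; node 3 follows the same pattern with 192.168.85.130):

node.id=2
listeners=PLAINTEXT://192.168.85.129:9092,CONTROLLER://192.168.85.129:9093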
1.2 Run the following on every node (initialize the Kafka data directory)
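For example (a sketch, assuming the standard Kafka tooling and the server.properties shown above; every node must be formatted with the same cluster ID):

# Generate one cluster ID, then format the data directory on each node
bin/kafka-storage.sh random-uuid
bin/kafka-storage.sh format -t <cluster-id> -c config/server.properties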
After formatting, the meta.properties file under log.dirs contains entries like:

node.id=1
directory.id=T-V8hYIPF4f6O3aJf0dB7A
version=1
cluster.id=cluster-id-123
1.3 Start the cluster
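Assuming the standard distribution layout, each node can be started in the background with:

bin/kafka-server-start.sh -daemon config/server.properties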
1.4 Test
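Create a topic and then list topics to verify the cluster (a sketch; the topic name test and the bootstrap address are illustrative):

bin/kafka-topics.sh --create --topic test --partitions 3 --replication-factor 3 --bootstrap-server 192.168.85.128:9092
bin/kafka-topics.sh --list --bootstrap-server 192.168.85.128:9092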
If the topic shows up (or the command returns without an error), the step succeeded.
--topic: the name of the topic to create.
--partitions: the number of partitions for the topic.
--replication-factor: the replication factor for the topic.
Start a producer
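For example, a console producer writing to the test topic created above:

bin/kafka-console-producer.sh --bootstrap-server 192.168.85.128:9092 --topic test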
Start a consumer
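And a matching console consumer reading the same topic from the beginning; messages typed into the producer should appear here:

bin/kafka-console-consumer.sh --bootstrap-server 192.168.85.128:9092 --topic test --from-beginning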
Configuration tuning
# Identity and listeners (in KRaft mode the broker identity is node.id; broker.id is the legacy ZooKeeper-mode name)
node.id=1
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka-broker-1:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT

# Storage: spread partitions across multiple data directories
log.dirs=/mnt/data/kafka-logs,/mnt/data2/kafka-logs
num.partitions=3
num.recovery.threads.per.data.dir=1

# Request handling threads and queue depth
num.network.threads=3
num.io.threads=8
queued.max.requests=500

# Log segments, retention and cleanup
log.segment.bytes=1073741824
log.roll.ms=86400000
log.cleanup.policy=delete
log.retention.bytes=10737418240
log.retention.hours=168
log.cleaner.min.cleanable.ratio=0.5

# Replication and durability
min.insync.replicas=2
replica.fetch.max.bytes=10485760
replica.fetch.wait.max.ms=500
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2

# Network sockets and connection limits
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.connection.setup.timeout.ms=10000
socket.connection.setup.timeout.max.ms=30000
connections.max.idle.ms=600000
max.connections=10000

# Message/fetch size limits and broker-side compression
message.max.bytes=1048576
fetch.max.bytes=52428800
compression.type=snappy

# Producer-side settings: configure these in the producer client, not the broker
# (see the producer sketch after this block)
# acks=all
# batch.size=16384
# max.request.size=10485760

# ZooKeeper is not used here: Kafka 4.0 in KRaft mode runs without ZooKeeper
# zookeeper.connect=192.168.85.128:2181,192.168.85.129:2181,192.168.85.130:2181
# zookeeper.connection.timeout.ms=6000

# listener.name.internals=PLAINTEXT://0.0.0.0:9092   (additional listeners are declared via listeners= and the protocol map)
# log.max.message.bytes=1000000   (the broker-level limit is message.max.bytes, set above)

# JVM options are exported in the shell environment before starting the broker, not placed in server.properties
export KAFKA_JVM_PERFORMANCE_OPTS="-XX:+UseG1GC -XX:MaxGCPauseMillis=200"
export KAFKA_HEAP_OPTS="-Xms4G -Xmx4G"
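The producer-only keys noted above belong in the client configuration rather than in server.properties. A minimal sketch of a matching producer.properties (the file name and bootstrap addresses are assumptions based on the node IPs used earlier):

# producer.properties (client side), illustrative only
bootstrap.servers=192.168.85.128:9092,192.168.85.129:9092,192.168.85.130:9092
acks=all
batch.size=16384
compression.type=snappy
max.request.size=10485760

Such a file can be passed to the console producer with --producer.config producer.properties.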