
Hands-on: Building a Kafka Cluster with Docker, Plus Monitoring and Visualization

Pull the ZooKeeper image

docker pull wurstmeister/zookeeper

Pull the Kafka image

docker pull wurstmeister/kafka

Start a ZooKeeper container from the image

docker run -d --restart=always --log-driver json-file --log-opt max-size=100m --log-opt max-file=2  --name zookeeper -p 2181:2181 -v /etc/localtime:/etc/localtime wurstmeister/zookeeper
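Before starting the brokers, it is worth confirming that ZooKeeper came up cleanly; in its startup output, look for NIOServerCnxnFactory binding to port 2181:

docker logs zookeeper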

Start the first Kafka container (broker 0)

docker run -d --restart=always --log-driver json-file --log-opt max-size=100m --log-opt max-file=2 --name kafka  -p 9092:9092 -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=192.168.31.131:2181   -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.31.131:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092   -v /etc/localtime:/etc/localtime wurstmeister/kafka

Start the second Kafka container (broker 1)

docker run -d --restart=always --log-driver json-file --log-opt max-size=100m --log-opt max-file=2 --name kafka2 -p 9093:9093 -e KAFKA_BROKER_ID=1 -e KAFKA_ZOOKEEPER_CONNECT=192.168.31.131:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.31.131:9093 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9093 -v /etc/localtime:/etc/localtime wurstmeister/kafka
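KAFKA_LISTENERS is the address the broker binds inside its container, while KAFKA_ADVERTISED_LISTENERS is the address it registers in ZooKeeper for clients to connect back to; since both brokers share the host 192.168.31.131, they must advertise different ports (9092 and 9093). To verify that both brokers registered, you can list their IDs in ZooKeeper; a sketch assuming the wurstmeister/zookeeper image's default working directory:

docker exec -it zookeeper bash
bin/zkCli.sh
ls /brokers/ids
# expect [0, 1] once both brokers are up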

Check the running containers

docker ps -a

Copy the test log file into the kafka container

docker cp /home/test/test.log kafka:/opt

Enter the kafka container; from inside it you can operate Kafka with its command-line tools

docker exec -it kafka bash
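The console scripts live in the Kafka installation directory (here /opt/kafka_2.13-2.7.0). The wurstmeister image usually auto-creates topics on first write, but if auto-creation is disabled, create the test topic explicitly first; a sketch that replicates it across both brokers:

cd /opt/kafka_2.13-2.7.0
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 2 --partitions 2 --topic test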

Inside the container, run the console producer to write the test.log data into Kafka

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < /opt/test.log
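To confirm the data landed in the topic, read it back with the console consumer:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning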

Operating Kafka from code
Producing messages

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class KafkaProducerService {
    public static Properties props = new Properties();
    public final static String topic = "test";
    static {
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.31.131:9092,192.168.31.131:9093");
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, "3");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "16384");
        props.put(ProducerConfig.LINGER_MS_CONFIG, "1");
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, "33554432");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    }
    public static Runnable runnable = () -> {
        try {
            Producer<String, String> producer = new KafkaProducer<>(props);
            for (int i = 0; i < 1000; i++) {
                ProducerRecord<String, String> record =
                        new ProducerRecord<>(topic, "key-" + i, "kafka-value-" + i);
                producer.send(record, (recordMetadata, e) -> {
                    if (e == null) {
                        System.out.println("message sent successfully");
                        System.out.println("partition : " + recordMetadata.partition() + " , offset : " + recordMetadata.offset() + " , topic : " + recordMetadata.topic());
                    } else {
                        System.out.println("message send failed");
                    }
                });
            }
            // Every producer that is opened must be closed; close() also flushes pending records.
            producer.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    };
    public static void runService() {
        int producer_num = 10;
        ExecutorService executor = Executors.newFixedThreadPool(producer_num);
        for (int i = 0; i < producer_num; i++) {
            executor.submit(runnable);
        }
        executor.shutdown(); // let the JVM exit once all producer tasks finish
    }
}
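A minimal driver for the producer service (the wrapper class name here is just for illustration):

public class ProducerMain {
    public static void main(String[] args) {
        // launches 10 producer threads, each sending 1000 records to the "test" topic
        KafkaProducerService.runService();
    }
}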

Consuming messages

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

@Slf4j
public class KafkaConsumerService {
    public static Properties props = new Properties();
    public final static String topic = "test";
    static {
        props.put("bootstrap.servers", "192.168.31.131:9092,192.168.31.131:9093");
        props.put("group.id", "test_consumer");
        props.put("enable.auto.commit", "true");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("auto.offset.reset", "latest");
        props.put("deserializer.encoding", "UTF-8");
    }

    public static Runnable runnable = () -> {
        // KafkaConsumer is not thread-safe, so each thread gets its own instance
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(topic));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(10000));
            records.partitions().forEach(topicPartition -> {
                List<ConsumerRecord<String, String>> partitionRecords = records.records(topicPartition);
                partitionRecords.forEach(record -> log.info("Kafka consumed record: {}", record.toString()));
            });
        }
    };

    public static void runService() {
        int consumer_num = 2;
        ExecutorService executor = Executors.newFixedThreadPool(consumer_num);
        for (int i = 0; i < consumer_num; i++) {
            executor.submit(runnable);
        }
    }
}
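With enable.auto.commit=true, offsets are committed in the background, so a consumer that crashes between commits may see some records again on restart. If that matters, manual commits are a small change; a sketch (not part of the original service), assuming enable.auto.commit is set to "false" in the properties:

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
    records.forEach(record -> log.info("Kafka consumed record: {}", record.toString()));
    // commit only after the whole batch has been processed
    if (!records.isEmpty()) {
        consumer.commitSync();
    }
}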

Kafka visualization tool: Offset Explorer

Download: http://www.kafkatool.com/download.html

Kafka monitoring tool: Kafka Eagle

Download: http://download.kafka-eagle.org/
Extracted path: /usr/local/kafka-eagle-web-2.0.6
Edit the configuration

vim /usr/local/kafka-eagle-web-2.0.6/conf/system-config.properties

The settings to change are cluster1.zk.list and kafka.eagle.url:

kafka.eagle.zk.cluster.alias=cluster1
cluster1.zk.list=192.168.31.131:2181
....
kafka.eagle.webui.port=8048
kafka.eagle.url=jdbc:sqlite:/usr/local/kafka-eagle-web-2.0.6/db/ke.db

Add the environment variables

vim ~/.bash_profile

export KE_HOME=/usr/local/kafka-eagle-web-2.0.6
export PATH=$KE_HOME/bin:$PATH

source ~/.bash_profile

Enter each Kafka container and modify kafka-server-start.sh

docker exec -it kafka bash
docker exec -it kafka2 bash
cd /opt/kafka_2.13-2.7.0/bin/
vim kafka-server-start.sh

In both containers, add the JMX port to the heap-options block

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    # 这里的端口不一定非要设置成9999,端口只要可用,均可。
    export JMX_PORT="9999" 
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi
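The edit only takes effect after the brokers restart; restarting the containers re-runs kafka-server-start.sh with the new settings:

docker restart kafka kafka2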

Start Kafka Eagle

chmod a+x /usr/local/kafka-eagle-web-2.0.6/bin/*
cd /usr/local/kafka-eagle-web-2.0.6/bin
./ke.sh start
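Once it starts, the dashboard should be reachable at http://192.168.31.131:8048; for Kafka Eagle 2.x the default login is typically admin / 123456.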

Author: 技术只适用于干活

Original link: https://www.jianshu.com/p/7ccf0a316676
