Detailed Steps for Installing Kafka on Linux

1. Prerequisites

  • Operating system: Linux (Ubuntu or CentOS recommended)
  • Java: JDK 8 or later
  • ZooKeeper: version 3.4 or later
  • Kafka: the latest stable release (this guide uses 2.8.0, which still requires ZooKeeper)

2. Install Java

sudo apt-get update
sudo apt-get install default-jdk

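The commands above are for Debian/Ubuntu. On CentOS you can install OpenJDK with yum instead (the exact package name, e.g. java-11-openjdk-devel, depends on the release). Either way, verify the installation afterwards:

java -version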

3. Install ZooKeeper

wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
tar -xvf zookeeper-3.4.14.tar.gz
cd zookeeper-3.4.14

The release tarball already contains the compiled ZooKeeper jar, so no build step (./configure, make) is needed.


4. Configure ZooKeeper

Create a data directory for ZooKeeper and make sure the user that will run it can write to it:

sudo mkdir -p /var/lib/zookeeper
sudo chown $USER:$USER /var/lib/zookeeper

In the zookeeper-3.4.14 directory, copy the sample configuration to conf/zoo.cfg:

cp conf/zoo_sample.cfg conf/zoo.cfg

Edit conf/zoo.cfg so that it contains the following settings:

dataDir=/var/lib/zookeeper
clientPort=2181

Start ZooKeeper:

./bin/zkServer.sh start

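To confirm that ZooKeeper is up, ask the server for its status (the exact output varies between versions); if netcat is installed, the ruok four-letter command should answer imok:

./bin/zkServer.sh status
echo ruok | nc localhost 2181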

5. Install Kafka

wget https://archive.apache.org/dist/kafka/2.8.0/kafka_2.13-2.8.0.tgz
tar -xvf kafka_2.13-2.8.0.tgz
cd kafka_2.13-2.8.0


6. Configure Kafka

Edit config/server.properties inside the Kafka directory and make sure it contains the following settings:

broker.id=0
listeners=PLAINTEXT://:9092
zookeeper.connect=localhost:2181


Start Kafka:

./bin/kafka-server-start.sh config/server.properties

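The command above runs the broker in the foreground and writes its logs to the terminal. To keep it running in the background instead, the start script also accepts a -daemon flag:

./bin/kafka-server-start.sh -daemon config/server.properties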

7. Create a Topic

./bin/kafka-topics.sh --create --topic test --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1

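To confirm that the topic exists, list the topics on the broker or describe the one you just created:

./bin/kafka-topics.sh --list --bootstrap-server localhost:9092
./bin/kafka-topics.sh --describe --topic test --bootstrap-server localhost:9092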

8. Produce Data

./bin/kafka-console-producer.sh --topic test --bootstrap-server localhost:9092

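Each line you type is sent to the topic as a separate message; press Ctrl+C to exit. If you also want every message to carry a key (the Java consumer below prints both key and value), the console producer accepts reader properties for key parsing, for example:

./bin/kafka-console-producer.sh --topic test --bootstrap-server localhost:9092 --property parse.key=true --property key.separator=:

With these options a line such as user1:hello is sent with key user1 and value hello.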

9. Consume Data

./bin/kafka-console-consumer.sh --topic test --from-beginning --bootstrap-server localhost:9092

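By default the console consumer prints only message values. To also display keys, pass the print.key formatter property:

./bin/kafka-console-consumer.sh --topic test --from-beginning --bootstrap-server localhost:9092 --property print.key=true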

Quick Start Guide

1. Create a Producer

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class SimpleProducer {

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        // Create the producer and build a record destined for the "test" topic
        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);

        ProducerRecord<String, String> record = new ProducerRecord<>("test", "Hello, Kafka!");

        producer.send(record);

        // close() flushes any buffered records before shutting the producer down
        producer.close();
    }
}

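The example depends on the kafka-clients library. One simple way to compile and run it is to put the jars shipped in the Kafka distribution's libs/ directory on the classpath (the path below assumes the kafka_2.13-2.8.0 directory from step 5 and is only illustrative; in a real project you would declare kafka-clients as a Maven or Gradle dependency):

javac -cp "kafka_2.13-2.8.0/libs/*" SimpleProducer.java
java -cp ".:kafka_2.13-2.8.0/libs/*" SimpleProducer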

2. Create a Consumer

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

public class SimpleConsumer {

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        // Read from the beginning of the topic when the group has no committed offset yet
        properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);

        try {
            consumer.subscribe(Arrays.asList("test"));

            // Poll for new records until the process is stopped
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));

                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.key() + ": " + record.value());
                }
            }
        } finally {
            // Ensures the consumer leaves the group cleanly if the loop ever exits
            consumer.close();
        }
    }
}

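Compile and run the consumer the same way (again, the classpath is only illustrative). Keep it running in one terminal and start SimpleProducer or the console producer in another; the messages should appear in the consumer's output:

javac -cp "kafka_2.13-2.8.0/libs/*" SimpleConsumer.java
java -cp ".:kafka_2.13-2.8.0/libs/*" SimpleConsumer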

This concludes the step-by-step guide to quickly installing Kafka on Linux and getting started with it.
