Apache Kafka is a fault-tolerant publish-subscribe streaming platform that lets you process streams of records as they occur. If you haven’t installed Kafka yet, see our Kafka Quickstart Tutorial to get up and running quickly.
In this post we discuss how to create a simple Kafka producer in Java.
Kafka Producer Java Code
The example below shows creating a Kafka producer object and using it to send messages to the my-topic topic.
package net.bigdatums.kafka.producer;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class StringProducerExample {

    public static void main(String[] args) {
        // properties for producer
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.IntegerSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // create producer
        Producer<Integer, String> producer = new KafkaProducer<>(props);

        // send 100 messages to my-topic
        for (int i = 0; i < 100; i++) {
            ProducerRecord<Integer, String> producerRecord =
                    new ProducerRecord<>("my-topic", i, "Test Message #" + i);
            producer.send(producerRecord);
        }

        // close producer (flushes any buffered records first)
        producer.close();
    }
}
In this example we are only using the required producer configuration parameters: the bootstrap.servers list of host:port pairs used for the initial connection to the cluster, and the key and value serializers. We are sending Integer keys and String values in each message, which is why we use the IntegerSerializer and StringSerializer classes in our configuration parameters.
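To see what these serializers actually put on the wire, you can invoke them directly. This is a sketch that assumes the kafka-clients JAR is on the classpath; the class and method names here are illustrative, not part of the example above.

```java
import org.apache.kafka.common.serialization.IntegerSerializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class SerializerSketch {

    // IntegerSerializer writes the key as 4 big-endian bytes
    static byte[] keyBytes(int key) {
        return new IntegerSerializer().serialize("my-topic", key);
    }

    // StringSerializer encodes the value as UTF-8 bytes
    static byte[] valueBytes(String value) {
        return new StringSerializer().serialize("my-topic", value);
    }

    public static void main(String[] args) {
        byte[] key = keyBytes(42);
        System.out.println(key.length + " bytes, big-endian value " + ByteBuffer.wrap(key).getInt());
        System.out.println(new String(valueBytes("Test Message #0"), StandardCharsets.UTF_8));
    }
}
```

This is also why the consumer side must use the matching deserializers to get the original Integer and String back.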
This example is placed inside a main() method to make it easy to run from the command line or from within an IDE.
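One thing the simple example glosses over: producer.send() is asynchronous and returns immediately, so a failed send is silently dropped unless you check the result. A common pattern is to pass a Callback as the second argument to send(). The sketch below assumes the kafka-clients JAR is on the classpath; CallbackSketch and loggingCallback are illustrative names, not part of the example above.

```java
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CallbackSketch {

    // builds the same kind of record the example sends
    static ProducerRecord<Integer, String> record(int i) {
        return new ProducerRecord<>("my-topic", i, "Test Message #" + i);
    }

    // a callback that reports per-record delivery success or failure
    static Callback loggingCallback() {
        return (metadata, exception) -> {
            if (exception != null) {
                System.err.println("send failed: " + exception.getMessage());
            } else {
                System.out.println("sent to " + metadata.topic() + "-" + metadata.partition()
                        + " at offset " + metadata.offset());
            }
        };
    }

    public static void main(String[] args) {
        ProducerRecord<Integer, String> r = record(0);
        System.out.println(r.topic() + " / " + r.key() + " / " + r.value());
        // with a running broker you would call: producer.send(r, loggingCallback());
    }
}
```

The callback fires once the broker acknowledges the record (or the send fails), which is the simplest way to surface delivery errors without blocking the send loop.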
Running the Kafka Producer Example
In order to run this example we need a ZooKeeper server and a Kafka broker running. See our Kafka Quickstart Tutorial for an example of how to set these up.
Once we have ZooKeeper and Kafka running we can create the my-topic topic:
# creating my-topic topic
$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic my-topic
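The --zookeeper flag above works on older brokers; on Kafka 2.2 and later the topic tools talk to the broker directly instead. A variant under that assumption (broker listening on localhost:9092):

```shell
# Kafka 2.2+: create the topic via the broker rather than ZooKeeper
$KAFKA_HOME/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic my-topic
```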
We are now ready to execute our code and send messages to Kafka from the command line or directly within an IDE. Below is an example of running this code on the command line from our bigdatums-kafka-1.0-SNAPSHOT.jar JAR file.
java -cp bigdatums-kafka-1.0-SNAPSHOT.jar net.bigdatums.kafka.producer.StringProducerExample
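For this to compile and run, the kafka-clients library must be on the classpath. If the project is built with Maven, a dependency along these lines would pull it in (the version shown is an example; match it to your broker):

```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <!-- example version; use the one matching your Kafka broker -->
    <version>0.11.0.0</version>
</dependency>
```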
An easy way to see the messages sent by our producer is to use the kafka-console-consumer, which will read messages from a Kafka topic and print them to stdout:
# consuming messages from Kafka
$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning
Test Message #0
Test Message #1
Test Message #2
Test Message #3
Test Message #4
Test Message #5
Test Message #6
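Notice that only the String values appear; the console consumer does not print keys by default. To see the Integer keys as well, you can pass a couple of extra properties (shown under the assumption that the broker is on localhost:9092):

```shell
# print keys alongside values, decoding the key as an Integer
$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning \
  --property print.key=true \
  --property key.deserializer=org.apache.kafka.common.serialization.IntegerDeserializer
```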