Confluent's Golang Client for Apache Kafka™
confluent-kafka-go is Confluent's Golang client for Apache Kafka and the Confluent Platform.
Features:
- High performance - confluent-kafka-go is a lightweight wrapper around librdkafka, a finely tuned C client.
- Reliability - There are a lot of details to get right when writing an Apache Kafka client. We get them right in one place (librdkafka) and leverage this work across all of our clients (also confluent-kafka-python and confluent-kafka-dotnet).
- Supported - Commercial support is offered by Confluent.
- Future proof - Confluent, founded by the original creator/co-creator of Kafka, is building a streaming platform with Apache Kafka at its core. It's a high priority for us that client features keep pace with core Apache Kafka and components of the Confluent Platform.
The Golang bindings provide a high-level Producer and Consumer with support for the balanced consumer groups of Apache Kafka 0.9 and above.
See the API documentation for more information.
For a step-by-step guide on using the client see Getting Started with Apache Kafka and Golang.
Examples
High-level balanced consumer
import ( "fmt" "time"
"github.com/confluentinc/confluent-kafka-go/v2/kafka")
func main() {
c, err := kafka.NewConsumer(&kafka.ConfigMap{
"bootstrap.servers": "localhost",
"group.id": "myGroup",
"auto.offset.reset": "earliest",
})
if err != nil {
panic(err)
}
err = c.SubscribeTopics([]string{"myTopic", "^aRegex.*[Tt]opic"}, nil)
if err != nil {
panic(err)
}
// A signal handler or similar could be used to set this to false to break the loop.
run := true
for run {
msg, err := c.ReadMessage(time.Second)
if err == nil {
fmt.Printf("Message on %s: %s\n", msg.TopicPartition, string(msg.Value))
} else if !err.(kafka.Error).IsTimeout() {
// The client will automatically try to recover from all errors.
// Timeout is not considered an error because it is raised by
// ReadMessage in absence of messages.
fmt.Printf("Consumer error: %v (%v)\n", err, msg)
}
}
c.Close()}
Producer
import ( "fmt" "github.com/confluentinc/confluent-kafka-go/v2/kafka" )
func main() {
p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost"})
if err != nil {
panic(err)
}
defer p.Close()
// Delivery report handler for produced messages
go func() {
for e := range p.Events() {
switch ev := e.(type) {
case *kafka.Message:
if ev.TopicPartition.Error != nil {
fmt.Printf("Delivery failed: %v\n", ev.TopicPartition)
} else {
fmt.Printf("Delivered message to %v\n", ev.TopicPartition)
}
}
}
}()
// Produce messages to topic (asynchronously)
topic := "myTopic"
for _, word := range []string{"Welcome", "to", "the", "Confluent", "Kafka", "Golang", "client"} {
p.Produce(&kafka.Message{
TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
Value: []byte(word),
}, nil)
}
// Wait for message deliveries before shutting down
p.Flush(15 * 1000)}
More elaborate examples are available in the examples directory, including how to configure the Go client for use with Confluent Cloud.
Getting Started
Supports Go 1.17+ and librdkafka 2.12.0+.
Using Go Modules
You can use Go Modules to install confluent-kafka-go.
Import the kafka package from GitHub in your code:
import "github.com/confluentinc/confluent-kafka-go/v2/kafka"
Build your project:

go build ./...

If you are building for Alpine Linux (musl), -tags musl must be specified:

go build -tags musl ./...

A dependency on the latest stable version of confluent-kafka-go should be automatically added to your go.mod file.
Install the client
Manual install:
go get -u github.com/confluentinc/confluent-kafka-go/v2/kafka
Golang import:
import "github.com/confluentinc/confluent-kafka-go/v2/kafka"
librdkafka
Prebuilt librdkafka binaries are included with the Go client and librdkafka does not need to be installed separately on the build or target system. The following platforms are supported by the prebuilt librdkafka binaries:
- macOS x64 and arm64
- glibc-based Linux x64 and arm64 (e.g., RedHat, Debian, CentOS, Ubuntu) - without GSSAPI/Kerberos support
- musl-based Linux amd64 and arm64 (Alpine) - without GSSAPI/Kerberos support
- Windows amd64 - without GSSAPI/Kerberos support
When building your application for Alpine Linux (musl libc) you must pass -tags musl to go get, go build, etc.
CGO_ENABLED must NOT be set to 0 since the Go client is based on the C library librdkafka.
If GSSAPI/Kerberos authentication support is required you will need to install librdkafka separately, see the Installing librdkafka chapter below, and then build your Go application with -tags dynamic.
Installing librdkafka
If the bundled librdkafka build is not supported on your platform, or you need a librdkafka with GSSAPI/Kerberos support, you must install librdkafka manually on the build and target system using one of the following alternatives:
- For Debian and Ubuntu based distros, install librdkafka-dev from the standard repositories or using Confluent's Deb repository.
- For Redhat based distros, install librdkafka-devel using Confluent's YUM repository.
- For MacOS X, install librdkafka from Homebrew. You may also need to brew install pkg-config if you don't already have it: brew install librdkafka pkg-config.
- For Alpine: apk add librdkafka-dev pkgconf
- For Windows: there are no official/supported packages, but static builds are included for Windows/x64. Installing from source is needed only for GSSAPI/Kerberos support.
- For source builds, see instructions below.
Build from source:
git clone https://github.com/confluentinc/librdkafka.git
cd librdkafka
./configure
make
sudo make install
After installing librdkafka you will need to build your Go application with -tags dynamic.
Note: If you use the master branch of the Go client, then you need to use the master branch of librdkafka.
confluent-kafka-go requires librdkafka v1.9.0 or later.
Static builds on Linux
Since we are using cgo, Go builds a dynamically linked library even when using the prebuilt, statically-compiled librdkafka as described in the librdkafka chapter.
For glibc-based systems, if the system where the client is compiled is different from the target system (especially when the target system is older), running the compiled client can fail with a glibc version error.
Unfortunately, building a statically linked binary doesn't solve the problem, since there is no way to produce truly static builds with glibc. Some functions in glibc, like getaddrinfo, need the shared version of the library even when the code is compiled statically.
One way around this is to either use a container/VM to build the binary, or install an older version of glibc on the system where the client is being compiled.
The other way is using musl to create truly static builds for Linux. To do this, install it for your system.
Static compilation command, meant to be used alongside the prebuilt librdkafka bundle:
CC=/path/to/musl-gcc go build --ldflags '-linkmode external -extldflags "-static"' -tags musl
FIPS 140-3 Compliance
This client supports FIPS 140-3 compliance for Schema Registry operations when using Go 1.24.3 or newer.
Communication between client and Schema Registry
The Schema Registry Go client can operate in FIPS 140-3 compliant mode using Go's native FIPS support:
Build with FIPS support:
GOFIPS140=inprocess go build -o myapp
Run in FIPS mode:
GODEBUG=fips140=only ./myapp
When running with GODEBUG=fips140=only, the application will use only FIPS 140-3 validated cryptographic implementations for all TLS connections to Schema Registry. The application will panic immediately if any non-FIPS-approved cryptographic operation is attempted.
API Strands
The recommended API strand is the Function-Based one; the Channel-Based one is documented in examples/legacy.
Function-Based Consumer
Messages, errors and events are polled through the consumer.Poll() function.
It has direct mapping to underlying librdkafka functionality.
Function-Based Producer
Application calls producer.Produce() to produce messages. Delivery reports are emitted on the producer.Events() or specified private channel.
Warnings
Produce() is a non-blocking call; if the internal librdkafka queue is full, the call will fail and can be retried.
License
KAFKA is a registered trademark of The Apache Software Foundation and has been licensed for use by confluent-kafka-go. confluent-kafka-go has no affiliation with and is not endorsed by The Apache Software Foundation.
Developer Notes
See kafka/README
Contributions to the code, examples, documentation, etc., are very much appreciated.
Make your changes, run gofmt, tests, etc, push your branch, create a PR, and sign the CLA.
Confluent Cloud
For a step-by-step guide on using the Golang client with Confluent Cloud see Getting Started with Apache Kafka and Golang on Confluent Developer.