When selecting a Confluent Cloud plan, one of the first limits you’ll hit is the maximum connection count. Enterprise clusters allow 18,000 connections per eCKU (raised 4× in the Q3 2025 update); Standard cluster limits are lower — check the current cluster types table for the exact figure. Miscounting by even one component can push you into throttling territory.

This post gives you a precise formula for counting connections in a Spring Boot application, explains what actually drives that number, and ends with a worked example for a multi-service deployment.

The Formula#

steady-state connections per pod =
  (producer instances × brokers) +
  (total consumer threads × brokers)

Two terms. Simple — but the devil is in what counts as a “consumer thread” (see the Q&A section below).
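As a sanity check, the formula is small enough to write down as code. A minimal sketch (the names ConnectionEstimate, producerInstances, consumerThreads, and brokers are ours, not from any Kafka API):

```java
// Steady-state connection estimate per pod, per the formula above.
final class ConnectionEstimate {

    static int perPod(int producerInstances, int consumerThreads, int brokers) {
        return (producerInstances * brokers) + (consumerThreads * brokers);
    }

    public static void main(String[] args) {
        // 1 KafkaTemplate, one listener with concurrency = 3, on 3 brokers:
        System.out.println(perPod(1, 3, 3)); // prints 12
    }
}
```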

Core Rule: TCP Connections Per Broker#

Kafka clients connect at the TCP level, not at the topic or partition level. The key insight:

A Kafka client opens one persistent TCP connection to each broker it needs to communicate with.

This means:

  • Sending to 100 topics backed by 3 brokers → 3 connections, not 100.
  • Adding more partitions → no change in connection count, as long as the broker set stays the same.
  • Deploying one more pod → connections increase linearly.

The broker count is the multiplier for everything that follows.

Term 1: Producer Connections#

A KafkaTemplate is backed by a ProducerFactory. By default, DefaultKafkaProducerFactory maintains one shared KafkaProducer instance across all calls — this is the connection-efficient default.

// One KafkaTemplate → one ProducerFactory → one KafkaProducer → N broker connections
kafkaTemplate.send("topic-a", message);
kafkaTemplate.send("topic-b", message);  // same connections reused
kafkaTemplate.send("topic-c", message);  // same connections reused

Patterns that multiply producer connections:

| Pattern | Effect |
|---|---|
| Single KafkaTemplate (default) | 1 producer × brokers |
| Two KafkaTemplate beans with different factories | 2 producers × brokers |
| producerPerThread = true on the factory | 1 producer per calling thread × brokers |
| Transactional producers | 1 producer per transaction ID in the pool × brokers |

For most applications, you have one KafkaTemplate, giving 1 × broker_count connections from the producer side.
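To see how the patterns in the table change the math, here is a back-of-envelope sketch (the thread and instance counts are invented illustration values, not defaults):

```java
// Producer-side connection math for the patterns above, on a 3-broker cluster.
final class ProducerMath {

    static int connections(int producerInstances, int brokers) {
        return producerInstances * brokers;
    }

    public static void main(String[] args) {
        int brokers = 3;
        System.out.println(connections(1, brokers)); // default shared producer → 3
        System.out.println(connections(2, brokers)); // two factories → 6
        System.out.println(connections(8, brokers)); // producerPerThread, 8 calling threads → 24
    }
}
```

Note how producerPerThread = true turns a fixed cost into one that scales with your thread pool size — the pattern most likely to surprise you.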

Term 2: Consumer Connections#

Each @KafkaListener spins up a ConcurrentMessageListenerContainer. The number of KafkaConsumer instances it creates depends on the concurrency setting:

// concurrency = 3 → 3 independent KafkaConsumer instances
@KafkaListener(topics = "orders", concurrency = "3")
public void consume(String message) { ... }

Each KafkaConsumer instance opens connections to the brokers holding its assigned partitions, plus one connection to the group coordinator broker (which usually overlaps with an existing broker connection, so no extra TCP connection in practice).

For estimation purposes, use consumer thread count × broker count as the ceiling.

If partitions are concentrated on fewer brokers than the total, actual connections will be lower. The formula gives you the worst case.
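The gap between ceiling and actual comes down to counting distinct leader brokers among a consumer's assigned partitions. A sketch (the broker placements are invented for illustration):

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Ceiling vs. actual: a consumer only needs a TCP connection to each
// distinct broker that leads one of its assigned partitions.
final class ConsumerMath {

    static int ceiling(int consumerThreads, int brokers) {
        return consumerThreads * brokers;
    }

    static int actualForConsumer(List<Integer> partitionLeaderBrokers) {
        Set<Integer> distinct = new TreeSet<>(partitionLeaderBrokers);
        return distinct.size();
    }

    public static void main(String[] args) {
        // One consumer thread on a 3-broker cluster → ceiling of 3...
        System.out.println(ceiling(1, 3)); // 3
        // ...but if its partitions all lead on brokers 0 and 1, only 2 in practice.
        System.out.println(actualForConsumer(List.of(0, 1, 0, 1))); // 2
    }
}
```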

What About AdminClient?#

You might expect KafkaAdmin (auto-configured by Spring Boot) to hold persistent connections. It doesn’t.

Looking at the KafkaAdmin source code, every method that needs an AdminClient follows a try-with-resources pattern:

// KafkaAdmin.java — every operation creates and closes AdminClient immediately
try (Admin admin = createAdmin()) {
    addOrModifyTopicsIfNeeded(admin, Arrays.asList(topics));
}

The initialize() method called at startup does the same — creates an AdminClient, checks/creates topics, then closes it in a finally block. There is no field storing a persistent AdminClient in KafkaAdmin.

You might also wonder about Spring Boot Actuator’s Kafka health check — as of Spring Boot 3.5, there is no built-in KafkaHealthIndicator. The feature was proposed but declined, and has never shipped. The KafkaAdmin API docs likewise show no persistent AdminClient field.

Bottom line: KafkaAdmin connections are transient (startup only) and do not count toward your steady-state connection total. If you implement a custom health indicator that keeps an AdminClient open, add 1 × brokers to your estimate.

Q&A: Does Adding More @KafkaListener Methods Increase Connections?#

This is the most common source of confusion. The answer depends on how the listeners are configured.

Case 1: Same group, separate @KafkaListener annotations#

@KafkaListener(topics = "topic-a", groupId = "my-group")
public void listenA(String msg) {}

@KafkaListener(topics = "topic-b", groupId = "my-group")
public void listenB(String msg) {}

Increases. Each @KafkaListener annotation creates its own MessageListenerContainer with its own KafkaConsumer instance — even if the groupId is the same. Per the Spring Kafka reference docs, when @KafkaListener is at the method level, a listener container is created for each method. The same groupId only means they participate in the same consumer group for rebalancing; they are still two separate group members with independent connections. Connections = 2 × brokers.

To truly share a single consumer across multiple topics, list them in one annotation:

// ONE container, ONE consumer → 1 × brokers connections
@KafkaListener(topics = {"topic-a", "topic-b"}, groupId = "my-group")
public void listen(String msg) {}

Case 2: Different concurrency per listener#

@KafkaListener(topics = "topic-a", concurrency = "3")
public void listenA(String msg) {}

@KafkaListener(topics = "topic-b", concurrency = "2")
public void listenB(String msg) {}

Increases. 5 independent KafkaConsumer threads → connections = 5 × brokers.

Case 3: Different groupId#

@KafkaListener(topics = "topic-a", groupId = "group-1")
public void listenA(String msg) {}

@KafkaListener(topics = "topic-a", groupId = "group-2")
public void listenB(String msg) {}

Increases. Different consumer groups are completely independent consumer instances, each with their own broker connections and coordinator connection.

Case 4: Different containerFactory#

@KafkaListener(topics = "topic-a", containerFactory = "factoryA")
public void listenA(String msg) {}

@KafkaListener(topics = "topic-b", containerFactory = "factoryB")
public void listenB(String msg) {}

Increases. Different factories mean different ConsumerFactory configurations, which means independent consumer instances.

Summary#

| Scenario | Connections increase? | Reason |
|---|---|---|
| Separate @KafkaListener, same groupId | Yes | Each annotation creates its own container and consumer |
| Multiple topics in one @KafkaListener | No | Single container, single consumer |
| Higher concurrency | Yes | Each thread is an independent consumer |
| Different groupId | Yes | Independent consumer instance |
| Different containerFactory | Yes | Independent consumer instance |

The real driver is MessageListenerContainer count × concurrency, not the number of listener methods.
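That driver can be expressed directly: sum each container's concurrency, then multiply by brokers. A sketch using the concurrency values from Case 2:

```java
import java.util.List;

// Connections scale with total consumer threads across all listener
// containers — not with the number of listener methods.
final class ListenerMath {

    static int consumerConnections(List<Integer> containerConcurrency, int brokers) {
        int threads = containerConcurrency.stream().mapToInt(Integer::intValue).sum();
        return threads * brokers;
    }

    public static void main(String[] args) {
        // Case 2: two listeners with concurrency 3 and 2, on a 3-broker cluster.
        System.out.println(consumerConnections(List.of(3, 2), 3)); // 15
    }
}
```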

Worked Example#

Setup:

  • Confluent Cloud cluster: 3 brokers
  • Spring Boot 3.5 app: 1 KafkaTemplate, one @KafkaListener with concurrency = "3"

Per pod:

producers:   1 factory  × 3 brokers =  3 connections
consumers:   3 threads  × 3 brokers =  9 connections
─────────────────────────────────────────────────────
total per pod:                        12 connections

(KafkaAdmin connections are transient — only during startup — and don’t count toward steady state.)

At scale — 40 microservices, 3 replicas each:

Assume 10 of the 40 services actually use Kafka with the above config:

10 services × 3 replicas × 12 connections = 360 connections

Well within Standard’s ~1,000 limit. Now increase concurrency to 5 on all consumers:

producers:   1 × 3 =  3
consumers:   5 × 3 = 15
─────────────────────────
per pod:           = 18

10 services × 3 replicas × 18 = 540 connections

Comfortable — but consider rolling deploys. A rolling update that briefly doubles pods pushes you to 1,080, past the Standard threshold.
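Deploy headroom is easy to model: during a rolling update, old and new pods overlap, so steady state briefly scales by (1 + surge fraction). A sketch (a maxSurge of 100% doubles the pod count, as in the scenario above; 25% is shown for comparison):

```java
// Peak connection estimate during a rolling update: old and new pods
// run side by side, scaling connections by (1 + surge fraction).
final class DeployMath {

    static int peak(int steadyStateConnections, double surgeFraction) {
        return (int) Math.ceil(steadyStateConnections * (1.0 + surgeFraction));
    }

    public static void main(String[] args) {
        System.out.println(peak(540, 1.0));  // 100% surge → 1080
        System.out.println(peak(540, 0.25)); // 25% maxSurge → 675
    }
}
```

Capping maxSurge in your deployment strategy is the cheapest way to flatten this spike.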

How to Verify Your Actual Connection Count#

Estimates are a starting point. Always confirm with real data.

Option 1: Confluent Cloud Metrics API#

Use the Confluent Cloud Metrics API to query active_connection_count directly:

curl -u "$API_KEY:$API_SECRET" \
  --header 'Content-Type: application/json' \
  --data '{
    "aggregations": [{"metric": "io.confluent.kafka.server/active_connection_count"}],
    "filter": {
      "field": "resource.kafka.id",
      "op": "EQ",
      "value": "YOUR_CLUSTER_ID"
    },
    "granularity": "PT1M",
    "intervals": ["2026-03-12T00:00:00Z/2026-03-12T01:00:00Z"],
    "limit": 25
  }' \
  https://api.telemetry.confluent.cloud/v2/metrics/cloud/query

Returns the live total connection count for the entire cluster.

Option 2: Micrometer (per service)#

Enable Kafka metrics in application.yml:

management:
  metrics:
    enable:
      kafka: true

Query in Prometheus/Grafana to see which services contribute the most:

sum by (job) (kafka_producer_connection_count)
sum by (job) (kafka_consumer_connection_count)

Use both together: the Metrics API shows the cluster-level total; Micrometer shows which service or pod is responsible.

Choosing a Plan#

| Factor | Standard | Enterprise |
|---|---|---|
| Max connections / eCKU | see docs | 18,000 |
| Connection rate / eCKU | see docs | 500/s |
| Private networking (PrivateLink) | No | Yes |
| Client quota management | No | Yes |
| Upgrade path | Basic → Standard (seamless) | Standard → Enterprise: unconfirmed |

Use Standard if: your calculated peak (including rolling deploys) stays well below 800 per eCKU and you don’t need private networking or per-client quotas.

Use Enterprise if: you’re above 800 per eCKU, need PrivateLink for security compliance, or want to apply per-service throttling limits.

Upgrade path warning: As of March 2026, Confluent officially supports seamless upgrade only from Basic to Standard (docs). Whether Standard can be upgraded to Enterprise in-place is not confirmed in the documentation — it may require creating a new Enterprise cluster and migrating your data. If there is any chance you will need Enterprise-tier features (PrivateLink, client quotas, higher connection limits) in the near future, consider starting with Enterprise from day one to avoid a costly migration later.

Enforcement timeline: Starting June 2026, Confluent plans to enforce connection limits strictly on Basic and Standard clusters — exceeding the limit will cause throttling rather than just a warning. Verify the exact date in the Confluent release notes before finalising your plan selection.

Key Takeaways#

  • Broker count — not topic or partition count — is the multiplier for all connection math.
  • KafkaAdmin connections are transient (startup only, closed via try-with-resources) — they don’t count toward steady-state totals unless you keep a custom AdminClient open.
  • Consumer threads (concurrency setting) are the biggest variable to tune.
  • Each @KafkaListener annotation creates its own container and consumer — use topics = {"a", "b"} in a single annotation to share one consumer across topics.
  • Always verify with active_connection_count from the Confluent Metrics API — real counts are often lower than worst-case estimates.

References#