[Flink Exception] Flink and Kafka version matching: NetworkClient$DefaultMetadataUpdater.handleServerDisconnect

Table of Contents

    • 1. Exception Details
    • 2. Cause and Solution
    • 3. Flink and Kafka Version Matching

1. Exception Details

Reading Kafka 0.9 with the Flink 1.11 Kafka connector fails with:

[10:49:12:644] [Source: TableSourceScan(table=[[default_catalog, default_database, test_topic]], fields=[logtime, url, sign, scene, channel]) -> Sink: Sink(table=[default_catalog.default_database.print_table], fields=[logtime, url, sign, scene, channel]) (1/8)] [WARN] - org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.handleServerDisconnect(NetworkClient.java:1024) - [Consumer clientId=consumer-test_20201101-8, groupId=test_20201101] Bootstrap broker ******* (id: -1 rack: null) disconnected

Reading Kafka 0.9 with the Flink 1.7.1 Kafka connector fails with:

10:28:31.098 [Source: Custom Source (8/8)] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-7, groupId=test_20201110] Give up sending metadata request since no node is available
10:28:31.098 [Source: Custom Source (3/8)] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=consumer-5, groupId=test_20201110] Connection with /10.202.209.74 disconnected
java.io.EOFException: null
	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96)
	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:335)
	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:296)
	at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:562)
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:498)
	at org.apache.kafka.common.network.Selector.poll(Selector.java:427)
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:271)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:218)
	at org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:292)
	at org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1774)
	at org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1742)
	at org.apache.flink.streaming.connectors.kafka.internal.KafkaPartitionDiscoverer.getAllPartitionsForTopics(KafkaPartitionDiscoverer.java:77)
	at org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.discoverPartitions(AbstractPartitionDiscoverer.java:131)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.open(FlinkKafkaConsumerBase.java:473)
	at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
	at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:424)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:290)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:704)
	at java.lang.Thread.run(Thread.java:748)

2. Cause and Solution

The Kafka connectors that ship with Flink 1.11 only support Kafka 0.10 and 0.11 (plus newer brokers via the universal connector); the cluster I tested against runs Kafka 0.9, hence the error.

The relevant POM dependency:


<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_2.11</artifactId>
    <version>1.11.0</version>
</dependency>
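
Since Flink 1.11 no longer ships a Kafka 0.9 connector, the fix is to use a Flink release that still provides the version-specific artifact. A minimal sketch of the replacement dependency, assuming the Flink 1.7.1 setup from the second test above:

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.9_2.11</artifactId>
    <version>1.7.1</version>
</dependency>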

3. Flink and Kafka Version Matching

The figure below shows the Flink/Kafka version-matching table from the official Flink 1.7 documentation: Apache Kafka Connector
[Figure 1: Flink/Kafka connector version compatibility table from the Flink 1.7 documentation]
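For reference, the rows of that table read roughly as follows (summarized here; the linked documentation is authoritative):

Maven Dependency                   Since    Consumer / Producer class                        Kafka version
flink-connector-kafka-0.8_2.11     1.0.0    FlinkKafkaConsumer08 / FlinkKafkaProducer08      0.8.x
flink-connector-kafka-0.9_2.11     1.0.0    FlinkKafkaConsumer09 / FlinkKafkaProducer09      0.9.x
flink-connector-kafka-0.10_2.11    1.2.0    FlinkKafkaConsumer010 / FlinkKafkaProducer010    0.10.x
flink-connector-kafka-0.11_2.11    1.4.0    FlinkKafkaConsumer011 / FlinkKafkaProducer011    0.11.x
flink-connector-kafka_2.11         1.7.0    FlinkKafkaConsumer / FlinkKafkaProducer          >= 1.0.0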
As the table shows, the universal flink-connector-kafka (the artifact that does not pin a Kafka version in its name) supports Kafka 1.0.0 and later. Older brokers such as 0.8 and 0.9 still require the version-specific connectors, e.g. flink-connector-kafka-0.9. So pay close attention to version matching in practice; a mismatch produces all kinds of unexpected errors and jobs that fail to produce or consume any data.
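
To make the fix concrete, here is a minimal sketch of a DataStream job reading a Kafka 0.9 cluster through the version-specific consumer. The topic name, bootstrap servers, and group id are placeholders, not values from this article:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;

public class Kafka09SourceJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder connection settings for the 0.9 cluster.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker1:9092");
        props.setProperty("group.id", "test_group");

        // FlinkKafkaConsumer09 bundles a 0.9-era Kafka client, so the
        // broker handshake succeeds where the universal connector's
        // newer client disconnects as shown in section 1.
        FlinkKafkaConsumer09<String> consumer = new FlinkKafkaConsumer09<>(
                "test_topic", new SimpleStringSchema(), props);

        DataStream<String> stream = env.addSource(consumer);
        stream.print();

        env.execute("kafka-0.9-source");
    }
}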
