FAQ - Real-time transfer - mysql2kafka: TM restarts frequently when the source data volume is too large
Updated: 2025-09-28 20:16:54
Problem description / exception stack
Caused by: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) ~[?:1.8.0_152]
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2173) ~[?:1.8.0_152]
at org.apache.kafka.clients.producer.internals.BufferPool.allocate(BufferPool.java:143) ~[flink-metrics-kafka_2.12-1.11.5.jar:1.11.5]
at org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:218) ~[flink-metrics-kafka_2.12-1.11.5.jar:1.11.5]
Affected versions
All versions
Solution
Set buffer.memory to a value larger than max.request.size; 5x is recommended. For example:
target.sink.options.properties.max.request.size = 104858800
target.sink.options.properties.buffer.memory = 524288000
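The `properties.*` suffixes above are standard Kafka producer configuration keys passed through to the producer. As a minimal sketch (the class name is illustrative, not part of the product), the same settings on a plain Kafka producer `Properties` object, with a check that the recommended relation holds:

```java
import java.util.Properties;

public class ProducerBufferConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        // max.request.size: largest single request the producer will send (~100 MB here)
        props.setProperty("max.request.size", "104858800");
        // buffer.memory: total memory for buffering unsent records; keep it well
        // above max.request.size (5x recommended) so BufferPool.allocate does not
        // block waiting for space under heavy source load
        props.setProperty("buffer.memory", "524288000");

        long maxRequestSize = Long.parseLong(props.getProperty("max.request.size"));
        long bufferMemory = Long.parseLong(props.getProperty("buffer.memory"));
        System.out.println("buffer.memory exceeds max.request.size: "
                + (bufferMemory > maxRequestSize));
    }
}
```

If buffer.memory is at or below max.request.size, a single large batch can exhaust the pool and leave the producer permanently blocked in allocate, which is the condition behind the InterruptedException above.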
Problem cause
The Kafka producer's buffer.memory is too small. As the stack trace shows, BufferPool.allocate blocks while waiting for buffer space to free up; when the blocked thread is interrupted, the task fails and the TM restarts.
Author: 曹俊