Hi,
I am running a cluster with a single broker, the performance producer
script, and 3 consumers.
On a fresh start of the cluster, the producer throws the exception below.
I was able to run this setup successfully against the same topic (test2)
the first time.
The suggested fix (from Stack Overflow) seems to be to delete the topic
data on the broker and in ZooKeeper. That doesn't look like a viable
production solution to me. Is there a way to resolve this without losing
topic data?
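One thing I am considering trying instead is raising the producer's retry settings so it waits out leader election rather than giving up after the default 3 attempts (property names are from the 0.8.x "old" producer config; the values below are just guesses on my part):

```
# producer.properties (0.8.x scala producer)
# default is 3, which matches the "after 3 tries" in the trace below
message.send.max.retries=10
# default is 100 ms; give leader election more time between retries
retry.backoff.ms=500
```

Would that be a reasonable mitigation, or does it just mask the underlying problem?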
Also, does the incidence of this problem decrease if I run more
brokers/servers?
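In case it helps, this is how I am checking the topic state, using the standard script shipped with Kafka (the ZooKeeper address is from my local setup):

```
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test2
```

My understanding is that with a single broker the topic has replication factor 1, so I assume adding brokers only helps availability if the topic is recreated with a higher replication factor — please correct me if that's wrong.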
I see log lines like these in server.log:
[2014-06-10 10:45:35,194] WARN [KafkaApi-0] Offset request with correlation id 0 from client on partition [test2,0] failed due to Topic test2 either doesn't exist or is in the process of being deleted (kafka.server.KafkaApis)
[2014-06-10 10:45:35,211] WARN [KafkaApi-0] Offset request with correlation id 0 from client on partition [test2,1] failed due to Topic test2 either doesn't exist or is in the process of being deleted (kafka.server.KafkaApis)
[2014-06-10 10:45:35,221] WARN [KafkaApi-0] Offset request with correlation id 0 from client on partition [test2,2] failed due to Topic test2 either doesn't exist or is in the process of being deleted (kafka.server.KafkaApis)
The exception trace is:
[2014-06-10 10:45:32,464] ERROR Error in handling batch of 200 events (kafka.producer.async.ProducerSendThread)
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
        at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
        at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
        at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
        at scala.collection.immutable.Stream.foreach(Stream.scala:526)
        at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
        at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)
[2014-06-10 10:45:32,464] ERROR Failed to send requests for topics test2 with correlation ids in [41,48] (kafka.producer.async.DefaultEventHandler)
[2014-06-10 10:45:32,464] WARN Error while fetching metadata [{TopicMetadata for topic test2 -> No partition metadata for topic test2 due to kafka.common.LeaderNotAvailableException}] for topic [test2]: class kafka.common.LeaderNotAvailableException (kafka.producer.BrokerPartitionInfo)
[2014-06-10 10:45:32,464] ERROR Error in handling batch of 200 events (kafka.producer.async.ProducerSendThread)
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
        at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
        at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
        at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
        at scala.collection.immutable.Stream.foreach(Stream.scala:526)
        at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
        at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)