Question on MessageSizeTooLargeException

Hello

I finally upgraded from Kafka 0.7 to Kafka 0.8, and a few Kafka 0.8 clusters
are now being tested.

Today, I got alerted with the following messages:

"data": {
"exceptionMessage": "Found a message larger than the maximum fetch size
of this consumer on topic nf_errors_log partition 0 at fetch offset
76736251. Increase the fetch size, or decrease the maximum message size the
broker will allow.",
"exceptionStackTrace": "kafka.common.MessageSizeTooLargeException:
Found a message larger than the maximum fetch size of this consumer on
topic nf_errors_log partition 0 at fetch offset 76736251. Increase the
fetch size, or decrease the maximum message size the broker will allow.
"exceptionType": "kafka.common.MessageSizeTooLargeException"
},
"description": "RuntimeException aborted realtime
processing[nf_errors_log]"

What I don't understand is that I am using all default properties, which means:

the broker's message.max.bytes is 1000000
the consumer's fetch.message.max.bytes is 1024 * 1024 (1048576), which is
greater than the broker's message.max.bytes

How could this happen? I am using snappy compression.
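For reference, here is a quick sketch of how the two defaults quoted above compare (the variable names are mine, just for illustration):

```python
# Default sizes quoted above (Kafka 0.8 defaults, as stated in this post)
broker_message_max_bytes = 1000000      # broker: message.max.bytes
consumer_fetch_max_bytes = 1024 * 1024  # consumer: fetch.message.max.bytes

# The consumer's fetch size exceeds the broker's per-message limit,
# so a single plain (uncompressed) message should always fit in one fetch.
headroom = consumer_fetch_max_bytes - broker_message_max_bytes
print(headroom)  # 48576 bytes of headroom
```

So on paper every accepted message should fit in a fetch, which is why the exception is surprising; my guess is it is somehow related to the snappy compression, but I have not confirmed that.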

Thank you
Best, Jae
