I'm using the kafka.javaapi.producer.Producer class from a Java client.
I'm wondering whether it ever makes sense to refresh a producer by stopping it
and creating a new one, for example in response to a downstream I/O error
(e.g. a broker was restarted, a stale socket, etc.). Or should it always be
safe to rely on the producer's implementation to manage its pool of
BlockingChannel connections?
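To make it concrete, here is a minimal sketch of what I mean by "refreshing"
the producer. The class and constructor signatures are the 0.8 javaapi as I
understand it, and the catch-everything retry is just illustrative, so treat
the details as assumptions rather than recommended practice:

    import java.util.Properties;

    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class RefreshingSender {
        private final Properties props;
        private Producer<String, String> producer;

        public RefreshingSender(Properties props) {
            this.props = props;
            this.producer = new Producer<String, String>(new ProducerConfig(props));
        }

        public void send(KeyedMessage<String, String> msg) {
            try {
                producer.send(msg);
            } catch (RuntimeException e) {
                // On what looks like a transient downstream failure, tear down
                // the producer (and whatever connections it holds) and build a
                // fresh one, then retry the same message once.
                producer.close();
                producer = new Producer<String, String>(new ProducerConfig(props));
                producer.send(msg);
            }
        }
    }

The open question is whether this close-and-recreate step is ever necessary,
or whether the producer already recovers its connections on its own.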
I'm also interested in understanding which exceptions indicate that a failed
send() request might be retryable (basically anything that doesn't involve a
data-dependent problem, such as a malformed message or a message that is too
large).
Unfortunately, the range of exceptions that can be thrown by the various
javaapi methods is not yet well documented. It would be nice to have some
notion of whether an exception is the result of a data error or of a
transient downstream connection error, roughly along the lines of the sketch
below.
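This is the kind of classification I'm after. The exception names are my
guesses at which errors are data-dependent versus transient (I believe both
classes exist under kafka.common in 0.8), not documented behavior:

    import kafka.common.FailedToSendMessageException;
    import kafka.common.MessageSizeTooLargeException;

    public final class SendErrors {
        private SendErrors() {}

        static boolean isRetryable(RuntimeException e) {
            if (e instanceof MessageSizeTooLargeException) {
                // Data-dependent: retrying the same message won't help.
                return false;
            }
            if (e instanceof FailedToSendMessageException) {
                // Often a broker/connection problem, so worth retrying.
                return true;
            }
            // Everything else is undocumented, which is exactly the problem.
            return false;
        }
    }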
Jason