Hello.
Recently we encountered a stacktrace from a Flume agent that is consuming
messages from RabbitMQ, through a File Channel, into an HDFS sink. The
stacktrace is below.
I am wondering if there is a way to configure
CodedInputStream.setSizeLimit from the Flume configuration, or if there is
any other way around this (making the messages smaller being the obvious
answer, of course). A minimal sketch of the setSizeLimit call I mean is
included after the stacktrace.
org.apache.flume.ChannelException: Take failed due to IO error
[channel=file-channel]
at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:541)
at org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
at org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:350)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:724)
Caused by: com.google.protobuf.InvalidProtocolBufferException:
Protocol message was too large. May be malicious. Use
CodedInputStream.setSizeLimit() to increase the size limit.
at com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:89)
at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:720)
at com.google.protobuf.CodedInputStream.isAtEnd(CodedInputStream.java:666)
at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:99)
at org.apache.flume.channel.file.proto.ProtosFactory$Put$Builder.mergeFrom(ProtosFactory.java:3437)
at org.apache.flume.channel.file.proto.ProtosFactory$Put$Builder.mergeFrom(ProtosFactory.java:3300)
at com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:212)
at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:746)
at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:238)
at com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:282)
at com.google.protobuf.AbstractMessage$Builder.mergeDelimitedFrom(AbstractMessage.java:760)
at com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:288)
at com.google.protobuf.AbstractMessage$Builder.mergeDelimitedFrom(AbstractMessage.java:752)
at org.apache.flume.channel.file.proto.ProtosFactory$Put.parseDelimitedFrom(ProtosFactory.java:3257)
at org.apache.flume.channel.file.Put.readProtos(Put.java:98)
at org.apache.flume.channel.file.TransactionEventRecord.fromByteArray(TransactionEventRecord.java:204)
at org.apache.flume.channel.file.LogFileV3$RandomReader.doGet(LogFileV3.java:292)
at org.apache.flume.channel.file.LogFile$RandomReader.get(LogFile.java:436)
at org.apache.flume.channel.file.Log.get(Log.java:580)
at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:538)
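To be concrete about what the exception message is suggesting, here is a
minimal sketch of the raw protobuf call, assuming direct access to the
parsing code. The class name, the helper name and the 256 MB value are made
up for illustration; none of this is a Flume API.

import com.google.protobuf.CodedInputStream;
import com.google.protobuf.Message;

import java.io.IOException;
import java.io.InputStream;

public final class ProtoSizeLimitSketch {

    private ProtoSizeLimitSketch() {}

    // Parses one message from a stream with a raised protobuf size limit.
    // Note: setSizeLimit() only applies to stream-backed CodedInputStreams.
    public static Message parseWithLargerLimit(InputStream in, Message prototype)
            throws IOException {
        CodedInputStream coded = CodedInputStream.newInstance(in);
        // protobuf's default limit for stream-backed parsing is 64 MB;
        // this is the call the exception message refers to.
        coded.setSizeLimit(256 * 1024 * 1024); // 256 MB, arbitrary example value
        Message.Builder builder = prototype.newBuilderForType();
        builder.mergeFrom(coded);
        return builder.build();
    }
}

From the stacktrace it looks like the file channel goes through
ProtosFactory.Put.parseDelimitedFrom(), which builds its CodedInputStream
internally, so as far as I can tell the limit cannot be raised from the
agent configuration alone without a change in the file channel code.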
Best Regards
/Thomas