
Speed Up and Restore Replicated LevelDB Brokers

Hello All,
I'm wondering if my case is normal or not. On the broker side I've set up a
3-broker HA cluster (with an appropriate ZooKeeper ensemble) using replicated
LevelDB. paranoidChecks is on, I'm using a single queue, and memoryUsage is
70% of the JVM heap (which runs with the default 1GB from bin/activemq). My
producer connects via STOMP and fires off 4.6KB persistent messages as fast
as it can. My Python consumer also connects over STOMP: it gets a frame,
parses the message body as JSON, reads some fields I added, waits 1ms
(simulated processing time), sends the relevant info on to another cluster in
another persistent message, and acks the original. Then it receives the next
frame, and so on.
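For concreteness, the consumer loop is roughly equivalent to the sketch below
(assuming the stomp.py 8.x client and STOMP 1.1 client-individual acks; the
hosts, credentials, queue names, and field extraction are placeholders, not
my real code):

import json
import time

import stomp  # stomp.py client


class ForwardingListener(stomp.ConnectionListener):
    """Receive a frame, parse it, forward the relevant bits, then ack."""

    def __init__(self, src_conn, dst_conn):
        self.src_conn = src_conn
        self.dst_conn = dst_conn

    def on_message(self, frame):
        data = json.loads(frame.body)          # parse the ~4.6KB JSON payload
        relevant = {"id": data.get("id")}      # placeholder field extraction
        time.sleep(0.001)                      # simulated 1ms processing time

        # Forward a persistent message to the other cluster (placeholder queue).
        self.dst_conn.send(
            destination="/queue/downstream",
            body=json.dumps(relevant),
            headers={"persistent": "true"},
        )

        # Ack the original so the broker can dispatch the next frame.
        self.src_conn.ack(frame.headers["message-id"], "sub-1")


# Placeholder hosts/ports/credentials for the two clusters.
src = stomp.Connection([("broker-a", 61613)])
dst = stomp.Connection([("broker-b", 61613)])
src.set_listener("forwarder", ForwardingListener(src, dst))
src.connect("user", "pass", wait=True)
dst.connect("user", "pass", wait=True)
src.subscribe(destination="/queue/source", id="sub-1", ack="client-individual")

while True:
    time.sleep(1)  # frames are handled on the listener thread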

The problem is that I'm only seeing about 86 msgs/sec on this consumer. I
would have expected more, at least 100. Is it reasonable to see only 86
msgs/sec with my current setup? It's pretty conservative: LevelDB
replication, paranoid checks, persistent messaging, 4.6KB message sizes. Is
there anything I can do to speed things up? I'm not running into disk usage
problems, so I think storeUsage and tempUsage are set correctly (950GB and
50GB respectively; I have a large mount for the data). I've tried turning
producer flow control on and off (apparently STOMP libraries ignore it
anyway) and a couple of other things I've found around the forums and blogs,
to no avail. The one thing I haven't tried yet is adding:
<pendingQueuePolicy>
<vmQueueCursor />
</pendingQueuePolicy>
I guess it's using the default storeCursor now. Can I use vmQueueCursor with
LevelDB? Would it make a difference in speed? Any help or ideas are
appreciated.
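If I do try it, my understanding is that the cursor would be configured
through a per-destination policy entry in activemq.xml, roughly like the
sketch below (the queue=">" wildcard is just for illustration, and I haven't
tested this):

<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- ">" matches all queues; narrow this to the real queue name -->
      <policyEntry queue=">">
        <pendingQueuePolicy>
          <vmQueueCursor/>
        </pendingQueuePolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>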

Also, real quick: I've noticed that one of my brokers' LevelDB databases
became corrupted after testing failover. When that broker is the master, this
message gets spammed to the logs:
"2014-09-05 15:57:29,979 | WARN | No reader available for position: 0,
log_infos:
{21181632651=LogInfo(/data/apache-activemq-5.10.0/activemq-data/00000004ee86108b.log,21181632651,104860010),
[...]"

Thankfully paranoidChecks is doing its job and the corruption isn't
replicating, but I'm wondering if there's any way to save that store. The
broker is just shut down for now. Should I clear the LevelDB files out of its
activemq-data directory and start it back up? Will it sync up with the other
two, or will the other two wipe their queues because of this one?

Thanks a lot!
