Channel: Apache Timeline

Adding Routes at the runtime-cannot add multiple consumer to the same endpoint

Hi,
We have a requirement where we have to add routes at runtime to a
CamelContext which is already running.

The following piece of code works the first time, but when I execute it
a second time it causes a multiple-consumer error.

Code:

public void addRoutesToContext(CamelContext camelContext) throws Exception {
    String rqEndpoint = "direct:test1";
    String rsEndpoint = "direct:test2";
    MessageRouter router = new MessageRouter();
    router.setRqendpoint(rqEndpoint);
    router.setRsendpoint(rsEndpoint);
    camelContext.addRoutes(router);

    producer.asyncRequestBodyAndHeaders(rqEndpoint, body, headers);

.......
.....

Issue: Cannot add a 2nd consumer to the same endpoint. Endpoint
Endpoint[direct://test1] only allows one consumer

Please provide me a solution.
Thanks a lot.
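A common way to avoid the second registration is to give the dynamically added
route a known id and check for it before calling addRoutes() again. A minimal
sketch; the MessageRouter class is not shown above, so a plain RouteBuilder and
a hypothetical route id "dynamic-test1" stand in here:

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;

public class DynamicRouteGuard {

    // Guards against registering a second consumer on direct:test1.
    public void addRoutesToContext(CamelContext camelContext) throws Exception {
        final String routeId = "dynamic-test1"; // hypothetical id for the dynamic route

        if (camelContext.getRoute(routeId) != null) {
            // The route (and its direct:test1 consumer) is already registered;
            // either reuse it, or stop and remove it before adding a replacement:
            // camelContext.stopRoute(routeId);
            // camelContext.removeRoute(routeId);
            return;
        }

        camelContext.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("direct:test1").routeId(routeId)
                        .to("direct:test2");
            }
        });
    }
}

If the route must be replaced rather than reused, stopping and removing the old
route first frees the direct:test1 consumer before the new route is added.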

Error with Lucene

Hi all,

I have been using Apache Flume with Elasticsearch for a few days.

My source is a JMS queue.
My sink is Elasticsearch.

In Flume's lib directory I copied activemq-all.jar and
elasticsearch-0.90.10.jar.

My conf file:

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source

a1.sources.r1.type = jms
a1.sources.r1.initialContextFactory =
org.apache.activemq.jndi.ActiveMQInitialContextFactory
a1.sources.r1.providerURL = tcp://my_source:61616
a1.sources.r1.destinationName = foo
a1.sources.r1.destinationType = QUEUE

# Describe the sink
a1.sinks.k1.type = elasticsearch
a1.sinks.k1.hostNames = localhost:9300
a1.sinks.k1.batchSize = 500
a1.sinks.k1.ttl = 5
a1.sinks.k1.indexName = foo

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Apache Flume gives me an error at startup:

2014-02-07 08:18:08,620 (lifecycleSupervisor-1-1) [ERROR -
org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:253)]
Unable to start SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@1058c283 counterGroup:{ name:null counters:{} } } - Exception follows.
java.lang.NoSuchFieldError: org/apache/lucene/util/Version.LUCENE_44
	at org.elasticsearch.Version.<clinit>(Version.java:130)
	at java.lang.J9VMInternals.initializeImpl(Native Method)
	at java.lang.J9VMInternals.initialize(J9VMInternals.java:237)
	at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:164)
	at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:120)
	at org.apache.flume.sink.elasticsearch.ElasticSearchSink.openClient(ElasticSearchSink.java:371)
	at org.apache.flume.sink.elasticsearch.ElasticSearchSink.openConnection(ElasticSearchSink.java:351)
	at org.apache.flume.sink.elasticsearch.ElasticSearchSink.start(ElasticSearchSink.java:326)
	at org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:46)
	at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
	at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:482)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:315)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:189)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1156)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:626)
	at java.lang.Thread.run(Thread.java:804)

Thanks for your help

Sylvain
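A NoSuchFieldError on Version.LUCENE_44 typically means an older lucene-core
jar is on the Flume classpath than the one elasticsearch 0.90.10 expects. A
small diagnostic sketch, assuming it is run with the same classpath as the
Flume agent, that prints which jar each of the clashing classes is actually
loaded from:

public class ClasspathCheck {
    public static void main(String[] args) throws Exception {
        String[] classes = {
                "org.apache.lucene.util.Version",  // the class missing LUCENE_44
                "org.elasticsearch.Version"        // the class that needs it
        };
        for (String name : classes) {
            Class<?> clazz = Class.forName(name);
            // prints the jar the class was resolved from, exposing version clashes
            System.out.println(name + " -> "
                    + clazz.getProtectionDomain().getCodeSource().getLocation());
        }
    }
}

Whichever jar supplies the older org.apache.lucene.util.Version would need to
be removed or replaced with the lucene-core version matching elasticsearch
0.90.10.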

Other Component than link in TabbedPanel's newLink

Hey,

I'm trying to use a Component which itself contains another Component in the TabbedPanel's newLink method.
Unfortunately that's not allowed, because "... only raw markup is allowed ...".

Any ideas how I can still achieve this?

Code:

final BootstrapTabbedPanel<ConfigTypeTab> configTypes = new BootstrapTabbedPanel<ConfigTypeTab>("configTypes",
        tabs) {

    @Override
    protected WebMarkupContainer newLink(final String linkId, final int index) {
        IModel<String> label = Model.of(tabs.get(index).getConfigType().getObject().getName());
        SplitButton splitButton = new SplitButton(linkId, label) {

            @Override
            protected AbstractLink newBaseButton(String markupId, IModel<String> labelModel,
                    IModel<IconType> iconTypeModel) {
                return new Link<Void>(markupId) {

                    private static final long serialVersionUID = 1L;

                    @Override
                    public void onClick() {
                        setSelectedTab(index);
                    }
                };
            }

            @Override
            protected List<AbstractLink> newSubMenuButtons(String buttonMarkupId) {
                List<AbstractLink> subMenuLinks = new ArrayList<AbstractLink>();
                subMenuLinks.add(new AjaxLink(buttonMarkupId) {

                    @Override
                    public void onComponentTagBody(MarkupStream markupStream,
                            ComponentTag openTag) {
                        replaceComponentTagBody(markupStream, openTag, "Edit");
                    }

                    @Override
                    public void onClick(AjaxRequestTarget target) {
                        LOG.debug(tabs.get(index).getConfigType().getObject().toString());
                    }
                });
                return subMenuLinks;
            }
        };
        return splitButton;
    }
};

Marvin Richter

Removing default "soap" namespace prefix from SOAP request

Hello,

I am trying to remove the default namespace prefix ("soap") from the
outgoing SOAP message, but I don't know where to add the property
"soap.env.ns.map". I've created a sample SOAP client with the wsdl2java tool
from Apache CXF and the -client flag. My outgoing SOAP message has to look
like this:

<Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
<Header>
....
</Header>
<Body>
</Body>
</Envelope>

But at the moment it looks like this:
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
....
</soap:Header>
<soap:Body>
</soap:Body>
</soap:Envelope>

I know that this format isn't wrong, but the server only accepts messages
without any prefix for the
"http://schemas.xmlsoap.org/soap/envelope/" namespace.

Regards

Philipp
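For reference, one place the "soap.env.ns.map" property can be set with a
wsdl2java-generated client is the JAX-WS request context of the port proxy.
A minimal sketch; the empty-prefix mapping and the
"disable.outputstream.optimization" flag follow how CXF documents this
property, but the port variable and the exact handling of the default
namespace are assumptions to verify against your CXF version:

import java.util.HashMap;
import java.util.Map;

import javax.xml.ws.BindingProvider;

public final class SoapPrefixConfigurer {

    // 'port' is the proxy returned by the wsdl2java-generated Service class
    public static void removeSoapEnvelopePrefix(Object port) {
        Map<String, String> nsMap = new HashMap<String, String>();
        // map the empty prefix to the envelope namespace so it becomes the default namespace
        nsMap.put("", "http://schemas.xmlsoap.org/soap/envelope/");

        BindingProvider bindingProvider = (BindingProvider) port;
        bindingProvider.getRequestContext().put("soap.env.ns.map", nsMap);
        bindingProvider.getRequestContext().put("disable.outputstream.optimization", Boolean.TRUE);
    }
}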

Web trace logs

Hello

I want to simulate realistic Web traffic using Web trace logs for my
research. I found some Web trace logs available on ITA website at
http://ita.ee.lbl.gov/html/traces.html

But the trace data are too old (before 2000). Does anyone know of any newer
Web trace logs that are publicly available?

Thanks

JM

Out of memory (Permgen?) + JMX attribute Health/CurrentStatus?

I have activemq configured with a durable subscriber to a topic backed by
oracle. I left a publisher running at a rate of 100 msgs/sec; each msg is
1K. The msgs expire in 30 secs. The durable subscriber disconnected so, as
expected, messages queued up in the db. After about 25 minutes I got an
OutOfMemory error in my log; eventually the broker became unresponsive.

According to jstat permgen space usage is at 96% (!). There are no permgen
messages in the logs though.

My questions:

1. I had lowered the max heap space to 512M as a test -- the jstat output
makes me think this is OK, but maybe I need to boost the permgen space?
2. How reliable/useful is this Health/CurrentStatus mbean attribute? (see
below)
3. I know it is a longshot given the details I can provide, but any clues or
tips on what went wrong for me?

I'll see if I can reproduce the problem.

== More details

There are 320k messages loitering in the database; I suspect that expiration
is not working properly. Any special tricks for enabling expiration?

There are no ERROR messages in the logs, but there are WARN level msgs.

activemq.log:2014-02-06 22:58:41,527 | WARN | JDBC Failure: Protocol
violation: [ 0, ] | org.apache.activemq.store.jdbc.JDBCPersistenceAdapter |
ActiveMQ Broker[localhost] Scheduler
...
activemq.log:2014-02-06 22:58:12,295 | WARN | Failed to browse Topic:
fooTopic | org.apache.activemq.broker.region.Topic | ActiveMQ
Broker[localhost] Scheduler

There are some messages in the log but they do not have the log4j format so
I'm not sure the application generated them:

- java.lang.OutOfMemoryError: GC overhead limit exceeded
- java.lang.OutOfMemoryError: Java heap space

"jstat -gc 1000" shows a static situation and (to me) a healthy looking
heap, but permgen space usage is at 96% (!). There are no permgen messages
in the logs though.

   S0C      S1C     S0U   S1U      EC       EU       OC       OU       PC       PU     YGC    YGCT   FGC     FGCT      GCT
33536.0  32768.0    0.0   0.0  108544.0  16046.8  348160.0 286152.0  24576.0  23536.2  1943  71.415  7421  8187.680  8259.095

I was expecting to use the JMX Health MBean attribute "CurrentStatus" as a
healthcheck datapoint for our monitoring system. Today though I managed to
inadvertently induce an out of memory condition in activemq that seems to
have also led to a JDBC failure. Connection attempts to the broker result
in a "javax.jms.JMSException: Wire format negotiation timeout: peer did not
send his wire format". Persistent messages are lingering in the jdbc
datastore even though the messages are all expired. The RESTful endpoint is
unresponsive.

I used jmxterm to gain access to the activemq mbeans. Checking the "Health"
mbean I was expecting to see something like a "Critical" status in the
CurrentStatus attribute, but instead it says "Good".

How reliable/useful is this Health/CurrentStatus mbean attribute?

Here is my jmxterm output:

$>bean org.apache.activemq:brokerName=localhost,service=Health,type=Broker
#bean is set to
org.apache.activemq:brokerName=localhost,service=Health,type=Broker
$>info
#mbean = org.apache.activemq:brokerName=localhost,service=Health,type=Broker
#class name = org.apache.activemq.broker.jmx.HealthView
# attributes
%0 - CurrentStatus (java.lang.String, r)
# operations
%0 - javax.management.openmbean.TabularData health()
%1 - java.util.List healthList()
#there's no notifications
$>get CurrentStatus
#mbean =
org.apache.activemq:brokerName=localhost,service=Health,type=Broker:
CurrentStatus = Good;
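For a monitoring system, the same attribute read above with jmxterm can be
fetched programmatically. A minimal sketch; the JMX service URL and port are
assumptions that need to match the broker's management configuration, and as
the observations above show, the attribute may still report "Good" during an
out-of-memory condition, so it is worth pairing it with a heap/GC check:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerHealthCheck {
    public static void main(String[] args) throws Exception {
        // Assumption: JMX is reachable on this URL; adjust host/port as needed.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // same ObjectName as shown in the jmxterm session above
            ObjectName health = new ObjectName(
                    "org.apache.activemq:brokerName=localhost,service=Health,type=Broker");
            Object status = connection.getAttribute(health, "CurrentStatus");
            System.out.println("CurrentStatus = " + status);
        } finally {
            connector.close();
        }
    }
}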

Just a job offer

Hi Flex*ing Folks,

please take a look at this -> http://www.dallmeier.com/de/unternehmen/karriere/offene-stellen/anwendungsentwickler-wm-apache-flex-adobe-air.html

Have a nice Weekend,
Michael

kafka_2.8.0-0.8.0 jar in maven repository is empty?

$ curl -L http://repo.maven.apache.org/maven2/org/apache/kafka/kafka_2.8.0/0.8.0/kafka_2.8.0-0.8.0.jar

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5189  100  5189    0     0  12631      0 --:--:-- --:--:-- --:--:--  158k

$ ls -lh
-rw-rw-r-- 1 carllerche staff 5.1K Feb 7 11:41 kafka_2.8.0-0.8.0.jar

$ jar tf kafka_2.8.0-0.8.0.jar
META-INF/MANIFEST.MF
LICENSE
NOTICE

Create Avro from bytes, not by fields

Hi all,

Some context (I'm not an expert Java programmer, and I'm just starting with
Avro/Flume):

I need to transfer Avro files from different servers to HDFS, and I am trying
to use Flume to do it.
I have a Flume spooldir source (reading the Avro files) with an Avro sink,
and an Avro source with an HDFS sink. Like this:

servers | hadoop
spooldir src -> avro sink --------> avro src -> hdfs

When the Flume spooldir source deserializes the Avro files it creates a Flume
event with two fields: 1) the header contains the schema; 2) the body field
has the binary Avro record data, not including the schema or the rest of the
container-file elements. See the Flume docs:
http://flume.apache.org/FlumeUserGuide.html#avro

So the avro sink creates an avro file like this:

{"headers": {"flume.avro.schema.literal":
"{\"type\":\"record\",\"name\":\"User\",\"namespace\":\"example.avro\",\"fields\":[{\"name\":\"name\",\"type\":\"string\"},{\"name\":\"favorite_number\",\"type\":[\"int\",\"null\"]},{\"name\":\"favorite_color\",\"type\":[\"string\",\"null\"]}]}"},
"body": {"bytes": "{BYTES}"}}

So now I am trying to write a serializer, since Flume only includes a
FlumeEvent serializer that creates Avro files like the one above, not the
original Avro files from the servers.

I am almost there: I got the schema from the header field and the bytes from
the body field.
But now I need to write the Avro file based on the bytes, not on the field
values; I cannot do r.put("field", "value") since I don't have the values,
just the bytes.

This is the code:

File file = TESTFILE;

DatumReader<GenericRecord> datumReader = new GenericDatumReader<GenericRecord>();
DataFileReader<GenericRecord> dataFileReader = new DataFileReader<GenericRecord>(file, datumReader);
GenericRecord user = null;
while (dataFileReader.hasNext()) {
    user = dataFileReader.next(user);

    Map headers = (Map) user.get("headers");

    Utf8 schemaHeaderKey = new Utf8("flume.avro.schema.literal");
    String schema = headers.get(schemaHeaderKey).toString();

    ByteBuffer body = (ByteBuffer) user.get("body");

    // Writing...
    Schema.Parser parser = new Schema.Parser();
    Schema schemaSimpleWrapper = parser.parse(schema);
    GenericRecord r = new GenericData.Record(schemaSimpleWrapper);

    // NOT SURE WHAT COMES NEXT
}

Is it possible to actually create the Avro files from the raw bytes?

I appreciate any help.

Thanks,
Daniel
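A sketch of one way the missing step could look, assuming the body bytes are a
single Avro datum serialized with the schema carried in the header (as the
Flume docs describe): decode the bytes back into a GenericRecord with a
BinaryDecoder and append it to a regular Avro container file. Variable names
continue from the snippet above; the extra imports needed are DataFileWriter,
GenericDatumWriter, BinaryDecoder and DecoderFactory from org.apache.avro, and
the output file name is just a placeholder:

byte[] bytes = new byte[body.remaining()];
body.get(bytes);

// decode the raw datum using the schema taken from the Flume header
DatumReader<GenericRecord> bodyReader =
        new GenericDatumReader<GenericRecord>(schemaSimpleWrapper);
BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
GenericRecord record = bodyReader.read(null, decoder);

// write the record into a normal Avro container file (schema + data)
DataFileWriter<GenericRecord> writer = new DataFileWriter<GenericRecord>(
        new GenericDatumWriter<GenericRecord>(schemaSimpleWrapper));
writer.create(schemaSimpleWrapper, new File("restored.avro"));
writer.append(record);
writer.close();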

General login page for all users?

Hello, does a general login page exist for all users to access their apps?
The main page "/webtools/control/main" has a login link that sends me to the
admin login page "webtools/control/checkLogin", BUT only the admin can log in
here; all other users get an error.

Is there a login page for all, where anyone can log in to access their
specific apps?

For example, in Opentaps, once you log in with any user you can see the apps
you have access to. So what's the best way to achieve this in OFBiz?
(And please don't say the best way is to give each user the specific app URL
- I don't find that very user friendly :p )

Also, what's the purpose of the main page at "/webtools/control/main"? The
only thing there is a login link that simply sends you to the admin login
page. How can I modify this page, and is it possible to get it to act as the
main login page as explained above? I will have to modify this page, because
when I log out it sends me there.

Thank you for taking the time to read this :)

Closed issue CAMEL-6628 in 2.12.0 doesn't fix the problem!

Hi guys,

Back in Aug 2013 Claus implemented a change to allow event notifications in
the producer template to be turned on or off:
https://issues.apache.org/jira/browse/CAMEL-6628

But in fact this change isn't quite right. The test case below illustrates
the problem.

The problem seems to be that the ProducerTemplate uses a producer cache, and
both the producer and its cache are firing events (if events are enabled),
when in fact only one of them should be firing the event.

So with notifications enabled on the ProducerTemplate I get one too many
events, while with notifications disabled I get one event too few!

The fix seems to be either remove the event firing from the producer cache,
or add a new method to enable/disable notifications for the producer cache
(i.e. a new method:
ProducerTemplate::setCacheEventNotifierEnabled(boolean)).

import java.util.EventObject;

import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.DefaultExchange;
import org.apache.camel.management.event.ExchangeSentEvent;
import org.apache.camel.support.EventNotifierSupport;
import org.junit.Assert;
import org.junit.Test;

/**
 * @author McBennettP
 *
 * I just created this test case to post to the Camel user list - looks like
 * a bug to me...!?
 */
public class SentEventTest {
    private int firedEventCount = 0;

    @Test
    public void testCamel() throws Exception {
        CamelContext context = new DefaultCamelContext();

        context.addRoutes(new RouteBuilder() {
            public void configure() {
                from("direct://testRoute1")
                        .process(new Processor() {
                            public void process(Exchange exchange) {
                            }
                        });

                from("direct://testRoute2")
                        .process(new Processor() {
                            public void process(Exchange exchange) {
                            }
                        });

                from("direct://MASTER-ROUTE")
                        .to("direct://testRoute1")
                        .to("direct://testRoute2");
            }
        });

        context.getManagementStrategy().addEventNotifier(
                new EventNotifierSupport() {
                    public void notify(final EventObject event) throws Exception {
                        System.out.println("Count: " + (++firedEventCount)
                                + ", event: " + ((ExchangeSentEvent) event).getEndpoint());
                    }

                    public boolean isEnabled(final EventObject event) {
                        return (event instanceof ExchangeSentEvent);
                    }

                    // lifecycle callbacks from ServiceSupport (no-ops here)
                    protected void doStart() throws Exception {
                    }

                    protected void doStop() throws Exception {
                    }
                });

        context.start();
        ProducerTemplate producer = context.createProducerTemplate();

        producer.setEventNotifierEnabled(true);
        Exchange testExchange = new DefaultExchange(context);
        producer.send("direct://MASTER-ROUTE", testExchange);
        Assert.assertEquals("This should REALLY be 3!!", 4, firedEventCount);

        producer.setEventNotifierEnabled(false);
        producer.send("direct://MASTER-ROUTE", testExchange);
        producer.send("direct://MASTER-ROUTE", testExchange);
        Assert.assertEquals("This should REALLY be 9!", 8, firedEventCount);

        context.stop();
    }
}

load from cassandra

Hello, I'm having a hard time using Pig to extract data from Cassandra.

Cassandra: [cqlsh 4.1.0 | Cassandra 2.0.4 | CQL spec 3.1.1 | Thrift
protocol 19.39.0]

Hadoop (Cloudera): 2.0.0+1518

Map Reduce: v2 (Yarn)

Pig: Apache Pig version 0.11.0-cdh4.5.0

I can use pig fine to run mapreduce jobs.

The test schema is very simple:

cqlsh:main> create table a (id int, name varchar, primary key (id));

cqlsh:main> insert into a (id, name) values (1, 'blah');

cqlsh:main> select * from a;

id | name

----+------

1 | blah

(1 rows)

The problem I run into is when I'm trying to extract data from Cassandra:

bash-4.2$ ./apache-cassandra-2.0.4-src/examples/pig/bin/pig_cassandra -x
local

Using /home/hdfs/pig-0.12.0-src/pig-withouthadoop.jar.

2014-02-07 17:09:18,948 [main] INFO org.apache.pig.Main - Apache Pig
version 0.10.0 (r1328203) compiled Apr 20 2012, 00:33:25

2014-02-07 17:09:18,949 [main] INFO org.apache.pig.Main - Logging error
messages to: /home/hdfs/pig_1391810958945.log

2014-02-07 17:09:19,373 [main] INFO
org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting
to hadoop file system at: file:///

2014-02-07 17:09:19,377 [main] WARN org.apache.hadoop.conf.Configuration -
mapred.used.genericoptionsparser is deprecated. Instead, use
mapreduce.client.genericoptionsparser.used

2014-02-07 17:09:19,394 [main] WARN org.apache.hadoop.conf.Configuration -
fs.default.name is deprecated. Instead, use fs.defaultFS

2014-02-07 17:09:19,395 [main] WARN org.apache.hadoop.conf.Configuration -
mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in
[jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in
[jar:file:/home/hdfs/apache-cassandra-2.0.4-src/lib/slf4j-log4j12-1.7.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.

2014-02-07 17:09:20,026 [main] WARN org.apache.hadoop.conf.Configuration -
io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum

2014-02-07 17:09:20,030 [main] WARN org.apache.hadoop.conf.Configuration -
fs.default.name is deprecated. Instead, use fs.defaultFS

2014-02-07 17:09:20,030 [main] WARN org.apache.hadoop.conf.Configuration -
mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address

grunt> rows = LOAD 'cql://main/a' USING CqlStorage();

grunt> describe rows;

rows: {id: int,name: chararray}

Pig can get the schema out of the table.

However, trying to dump the data is when it all goes south:

grunt> data = foreach rows generate $1;

grunt> dump data;

2014-02-07 17:09:47,347 [main] INFO
org.apache.pig.tools.pigstats.ScriptState - Pig features used in the
script: UNKNOWN

2014-02-07 17:09:47,416 [main] INFO
org.apache.pig.newplan.logical.rules.ColumnPruneVisitor - Columns pruned
for rows: $0

2014-02-07 17:09:47,548 [main] INFO
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler -
File concatenation threshold: 100 optimistic? false

2014-02-07 17:09:47,589 [main] INFO
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer
- MR plan size before optimization: 1

2014-02-07 17:09:47,589 [main] INFO
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer
- MR plan size after optimization: 1

2014-02-07 17:09:47,960 [main] WARN org.apache.hadoop.conf.Configuration -
session.id is deprecated. Instead, use dfs.metrics.session-id

2014-02-07 17:09:47,968 [main] INFO
org.apache.hadoop.metrics.jvm.JvmMetrics - Initializing JVM Metrics with
processName=JobTracker, sessionId=

2014-02-07 17:09:48,055 [main] INFO
org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added
to the job

2014-02-07 17:09:48,075 [main] WARN org.apache.hadoop.conf.Configuration -
mapred.job.reduce.markreset.buffer.percent is deprecated. Instead, use
mapreduce.reduce.markreset.buffer.percent

2014-02-07 17:09:48,075 [main] INFO
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler
- mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3

2014-02-07 17:09:48,075 [main] WARN org.apache.hadoop.conf.Configuration -
mapred.output.compress is deprecated. Instead, use
mapreduce.output.fileoutputformat.compress

2014-02-07 17:09:48,206 [main] INFO
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler
- Setting up single store job

2014-02-07 17:09:48,330 [main] ERROR org.apache.pig.tools.grunt.Grunt -
ERROR 2998: Unhandled internal error.
org.apache.hadoop.mapred.jobcontrol.JobControl.addJob(Lorg/apache/hadoop/mapred/jobcontrol/Job;)Ljava/lang/String;

Details at logfile: /home/hdfs/pig_1391810958945.log

The log says:

Pig Stack Trace

ERROR 2998: Unhandled internal error.
org.apache.hadoop.mapred.jobcontrol.JobControl.addJob(Lorg/apache/hadoop/mapred/jobcontrol/Job;)Ljava/lang/String;

java.lang.NoSuchMethodError:
org.apache.hadoop.mapred.jobcontrol.JobControl.addJob(Lorg/apache/hadoop/mapred/jobcontrol/Job;)Ljava/lang/String;
	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:261)
	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:180)
	at org.apache.pig.PigServer.launchPlan(PigServer.java:1270)
	at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1255)
	at org.apache.pig.PigServer.storeEx(PigServer.java:952)
	at org.apache.pig.PigServer.store(PigServer.java:919)
	at org.apache.pig.PigServer.openIterator(PigServer.java:832)
	at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:682)
	at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:303)
	at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:189)
	at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:165)
	at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:69)
	at org.apache.pig.Main.run(Main.java:490)
	at org.apache.pig.Main.main(Main.java:111)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:208)

I'd appreciate any help on this.

CXF component attachment support for POJO not implemented as specified

Hi,

the CXF description states:

http://camel.apache.org/cxf.html#CXF-AttachmentSupport

Attachment Support

"/POJO Mode:

Attachments are propagated to Camel message's attachments since 2.1. So, it
is possible to retreive attachments by Camel Message API

DataHandler Message.getAttachment(String id)/"

However, /org.apache.camel.component.cxf.DefaultCxfBinding/ explicitly
excludes attachments propagation for POJOs. This makes processing legacy
SOAP with attachments more difficult.

I suggest easing the condition for POJOs that do not use MTOM:

if dataFormat is POJO and properties.mtom-enabled is false, propagate the
attachments into and from the camel Exchange.

Thank you.
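For context, once attachments are propagated as the documentation quoted
above describes, they would be consumed through the Camel Message API along
these lines; a minimal sketch, with the processor name purely illustrative:

import javax.activation.DataHandler;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class AttachmentLoggingProcessor implements Processor {

    @Override
    public void process(Exchange exchange) throws Exception {
        // iterate over the attachments propagated from the SOAP message
        for (String id : exchange.getIn().getAttachmentNames()) {
            DataHandler attachment = exchange.getIn().getAttachment(id);
            System.out.println("attachment " + id + " of type " + attachment.getContentType());
        }
    }
}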

Building Apache Flex apps for iOS7

Hi,

I just put up a new blog post clarifying how to continue to develop Apache
Flex apps for iOS7

https://blogs.apache.org/flex/entry/building_apache_flex_apps_for

Please feel free to share this link widely :-)

Thanks,
Om

rebalanceClusterClients , failover and transactions

Hi,
I'm facing a problem in my system due to the option rebalanceClusterClients.
As far as I can see, when this option is enabled, every time a new connection
is created a signal is sent to every connected client in the network and
every failover connection is closed and re-opened.
In my system it happens that when a producer (using the failover transport)
is inside a transaction and a new client connects, some of the messages are
dropped by KahaDB as being seen as duplicates.

Again, the rebalanceClusterClients option is very aggressive and generates a
lot of network/broker overhead in my system, where I have something like 100
consumer JVMs which get disconnected and reconnected every time a new
producer (which is spawned sporadically) joins.

Can I realize a setup like this?
- failover-transport producers inside a transaction do not accept (or
silently drop) the command to rebalance the cluster and reconnect
- not every client gets reconnected every time a new client attaches to the
cluster

thanks in advance
ActiveMQ is great

Enrico Olivelli
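One client-side mitigation that may be worth testing, as a minimal sketch: the
failover transport has an updateURIsSupported option which, when set to false,
makes that connection ignore the broker's cluster-update commands (assuming
the ActiveMQ version in use supports it). The broker URIs below are
placeholders:

import javax.jms.Connection;

import org.apache.activemq.ActiveMQConnectionFactory;

public class StableProducerConnection {

    public static void main(String[] args) throws Exception {
        // updateURIsSupported=false: this client keeps its current connection even
        // when the broker pushes an updated/rebalanced broker list
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616)?updateURIsSupported=false");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create the transacted session and producer as usual ...
        connection.close();
    }
}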

Reload MavenProject MavenSession and BuildPluginManager

How can my Mojo reload/refresh the MavenProject, MavenSession and
BuildPluginManager components?

I need to do this after my Mojo switches to a different Git branch.

Flume Start and Stop Script

Does anyone know how to stop and start Flume using one script in a clustered
environment? It looks like there is no shortcut for doing so. We are actually
looking to set this up as a Linux service.

Thanks
Upender

Kafka : 0.8 beta --> 0.8 release upgrade

I understand from other posts that it can be an in-place upgrade. Is there some documentation describing this?

Thanks,
Aparup

ServiceFactoryBean attribute cannot be set via camel.xml

Hi everybody,

In camel.xml I configure a serviceFactory which should inject a dummy
security context:

/<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:cxf="http://camel.apache.org/schema/cxf"
xmlns:camel="http://camel.apache.org/schema/spring"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://camel.apache.org/schema/spring
http://camel.apache.org/schema/spring/camel-spring.xsd
http://camel.apache.org/schema/cxf
http://camel.apache.org/schema/cxf/camel-cxf.xsd">

<cxf:cxfEndpoint id="cdcPersPartnerRelationship"
address="https://xxxxx/xxxx/PersPartnerRelationship/V4!NS"

wsdlURL="target/wsdl/ia-gen/ia_AY1094_provisioning_read_ch_personal_PersPartnerRelationship.wsdl">
<cxf:properties>
<entry key="dataFormat" value="PAYLOAD" />
</cxf:properties>

<cxf:serviceFactory>
<bean class="com.ubs.pts.util.DummySecurityContextServiceFactory"/>
</cxf:serviceFactory>
</cxf:cxfEndpoint> ..../

Now I always get an exception, which states that the property can't be
written:

Invalid property 'serviceFactory' of bean class
[org.apache.camel.component.cxf.CxfSpringEndpoint]: Bean property
'serviceFactory' is not writable or has an invalid setter method. Does the
parameter type of the setter match the return type of the getter?

In debug mode I found the bean class where Camel wants to set the
'serviceFactory' property:

org.apache.camel.component.cxf.CxfEndpoint.

Analyzing the code, there is no setter for serviceFactory, only
setServiceFactoryBean, so I'm puzzled about what I'm doing wrong. Any help
appreciated.

BTW, my ServiceFactory class looks as follows, i.e. it lets the super class
create the service and then attaches the dummy security context. Not sure if
that already plays a role in the above problem.

import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.service.Service;
import org.apache.cxf.service.factory.ReflectionServiceFactoryBean;

public class DummySecurityContextServiceFactory extends ReflectionServiceFactoryBean {

    public Service create() {
        Service srv = super.create();
        System.out.println("Attach Dummy Security Context");
        try {
            DummySecurityContext.attachDummySecurityContext(ClientProxy.getClient(srv));
            return srv;
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }
}

Modular Wicket Application

Hello all, I am new to Wicket. When I use Guice in Wicket's quickstart,
everything works like a charm and it is possible to deploy it on Tomcat. But
when I use the same code and write my own module, deploying to Tomcat ends
with errors.
Catalina - http://pastebin.com/CaFGzDta
Tomcat - http://pastebin.com/prEU4RX5

wicket module pom.xml - http://pastebin.com/zF0z2MiN
domain module pom.xml - http://pastebin.com/bLkbwhsN
parent pom.xml - http://pastebin.com/hC9sLpuJ

Any ideas what I am doing wrong?