Channel: Apache Timeline

when to dispose of objects for garbage collection?

Can someone help me identify when I need to dispose of an object, array, etc. in a typical Flex app?

For example, suppose I have an app with several states and a TitleWindow.

I know that if I declare a variable for a state, such as:

<fx:Script>
<![CDATA[
...
private var myArr:Array;
...
]]>
</fx:Script>

that when I no longer need this array (or object, etc.), I should set it to null so the garbage collector (GC) knows it's ready to be collected. Otherwise the variable stays in memory, since the state persists throughout the life of the app.
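
For reference, the member-versus-local distinction this question turns on can be sketched in Java, whose reachability rules are analogous on this point (the class and names below are purely illustrative, not Flex API):

// Member fields keep their objects reachable for as long as the owner lives,
// so they are the references worth nulling; locals become unreachable as soon
// as the method returns, so nulling them at the end of a method adds nothing.
public class GcExample {
    private int[] longLived = new int[1000000]; // held until nulled (or owner dies)

    public long sum() {
        int[] local = {1, 2, 3, 4, 5}; // dies with this stack frame
        long total = 0;
        for (int v : local) {
            total += v;
        }
        return total; // no "local = null" needed here
    }

    public void release() {
        longLived = null; // the array is now eligible for collection
    }
}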

But what if this state uses the following function:

<fx:Script>
<![CDATA[
private var summation:Number;
...
private function myFunc():void {
    var anotherArr:Array = [1, 2, 3, 4, 5];
    for (var i:int = 0; i < anotherArr.length; i++) {
        summation += anotherArr[i];
    }
    ...
}
]]>
</fx:Script>

QUESTION 1: Do I need to manually null the local variable anotherArr at the end of function myFunc()? Or, will it be picked up automatically by the GC?

How about TitleWindows?

QUESTION 2: If I open a TitleWindow (e.g. popup) that contains a DataGrid, do I need to manually null its data provider when I close the TitleWindow? Or, will it be picked up automatically by the GC?

QUESTION 3: This last question also applies to a data provider for ComboBox, or an ArrayList, or an Array that is used in a TitleWindow -- do I need to null those as well upon closing the window? Or, will they be picked up automatically by the GC?

Thanks in advance for any comments.

One possible bug with image showing

Hi.
I have something like the following in my CSS file:

.displayingBackground {
    background-image: url('../graphics/SectionForGifs/myImage.gif');
}
It shows up just fine at the following URLs:

www.mySite.com/thisPage
www.mySite.com/thatPage
www.mySite.com/*

(where * denotes any page)

However, it won't show up at the following URLs:

www.mySite.com/thatPage/SomeOtherPage
www.mySite.com/thatPage/SomeOtherPageTwo
www.mySite.com/thatPage/*

(where * denotes any page)

Also, I am using getImages() in the www.mySite.com/thatPage/* kind of pages, with:

public String getImages() {
    return "graphics/" + image.getLocation();
}

This produces the output graphics/locationOfImage.png, which is OK; however, the image won't render until I prepend ../ to it in Firebug, i.e. ../graphics/locationOfImage.png. It seems to me that Tapestry gets confused by the URL from the CSS code, ends up in some folder (i.e. SectionForGifs, mentioned in the CSS example above), and stays stuck there all the way. Any help or advice about this issue is appreciated. Thank you.
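
Since the path that works in Firebug is just the relative path adjusted for the page's depth, one possible direction is to return a context-rooted path instead, so it resolves identically from /thatPage and /thatPage/SomeOtherPage. A sketch under that assumption (only the leading slash differs from the method above; this is a guess, not a confirmed Tapestry fix):

public String getImages() {
    // anchor the URL at the web-app root rather than the current page
    return "/graphics/" + image.getLocation();
}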

Ruby & Avro

Does anyone know if the Ruby Avro gem supports code generation, i.e. will it generate classes with all of the attributes of a message set?

Thanks

package error

I am quite unfamiliar with compiling with sbt package. What could be my issue here? It seems like the scala library is wrong, but where do I look for the scala jar?

thanks,
rob

"$ ./sbt package
[info] Loading project definition from /Users/reefedjib/Desktop/rob/comp/workspace/kafka/project
[info] Set current project to Kafka (in build file:/Users/reefedjib/Desktop/rob/comp/workspace/kafka/)
[info] Compiling 52 Scala sources and 1 Java source to /Users/reefedjib/Desktop/rob/comp/workspace/kafka/core/target/scala-2.8.0/classes...
[error] class file needed by TopicAndPartition is missing.
[error] reference type Serializable of package scala refers to nonexisting symbol.
[error] one error found
[error] (core/compile:compile) Compilation failed
[error] Total time: 5 s, completed Jun 6, 2013 11:47:38 AM"
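
One observation, offered as a guess rather than a diagnosis: the build is compiling against Scala 2.8.0 (note the scala-2.8.0 classes directory), while the scala.Serializable type only exists from Scala 2.9 onward, so a library compiled against a newer Scala would produce exactly this "nonexisting symbol" error. If the build supports cross-building, something like the following would test that (2.9.2 is a placeholder version):

# sbt cross-build: switch scalaVersion, then run package
./sbt "++ 2.9.2 package"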

Maintain sort within GROUP BY?

https://gist.github.com/rjurney/5723520

My cosine similarity UDF relies on the sorts being the same at line 10. Can
I count on that?

Header and java.util.Date

Hi All,

I want to set the value of a header to a java.util.Date holding the current time.

How can I do that in the XML DSL?

<setHeader headerName="messageTime">
<...whichLanguage?>which Value?</...whichLanguage?>
</setHeader>

My preference is for languages in camel-core if possible...
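
For what it's worth, one sketch that stays within camel-core is the bean ("method") expression language pointing at a trivial helper bean; the bean id and method name here are placeholders, not an established Camel idiom:

<setHeader headerName="messageTime">
    <method ref="timeBean" method="now"/>
</setHeader>

backed by a bean registered in the registry:

public class TimeBean {
    public java.util.Date now() {
        return new java.util.Date();
    }
}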

Thanks!!
Cristiano

inherit\extend camel context?

I have some boilerplate code that is needed in several camel contexts; is
there a way to inherit/extend from a parent camel context? If not, is there
any other way to include the boilerplate, perhaps with aspects?

I have something like this copied across many routes:

<onException>
    <exception>java.lang.Throwable</exception>
    <redeliveryPolicy redeliveryDelay="1000" maximumRedeliveries="1" />
    <bean ref="exceptionToAlertConverter" />
    <to uri="direct:alerter" />
</onException>
<route id="alerterRoute">
    <from uri="direct:alerter" />
    <transacted ref="propagationNotSupportedTransactionPolicy" />
    <to uri="someBroker://Alerter" />
</route>
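
For the route itself, one sketch that avoids the copy-paste is a shared <routeContext> referenced from each context via <routeContextRef> (used elsewhere in this digest as well). As far as I know a routeContext can only carry routes, so the <onException> block would still have to be declared per context:

<!-- declared once, outside any camelContext -->
<routeContext id="sharedAlerterRoute" xmlns="http://camel.apache.org/schema/spring">
    <route id="alerterRoute">
        <from uri="direct:alerter" />
        <transacted ref="propagationNotSupportedTransactionPolicy" />
        <to uri="someBroker://Alerter" />
    </route>
</routeContext>

<!-- referenced from every camelContext that needs it -->
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <routeContextRef ref="sharedAlerterRoute" />
    <!-- context-specific routes here -->
</camelContext>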

Arguments for Kafka over RabbitMQ ?

Hi --

I am preparing to make a case for using Kafka instead of Rabbit MQ as a broker-based messaging provider. The context is similar to that of the Kafka papers and user stories: the producers publish monitoring data and logs, and a suite of subscribers consume this data (some store it, others perform computations on the event stream). The requirements are typical of this context: low-latency, high-throughput, ability to deal with bursts and operate in/across multiple data centers, etc.

I am familiar with the performance comparison between Kafka, Rabbit MQ and Active MQ from the NetDB 2011 paper<http://research.microsoft.com/en-us/um/people/srikanth/netdb11/netdb11papers/netdb11-final12.pdf>. However in the two years that passed since then the number of production Kafka installations increased, and people are using it in different ways than those imagined by Kafka's designers. In light of these experiences one can use more data points and color when contrasting to Rabbit MQ (which by the way also evolved since 2011). (And FWIW I know I am not the first one to walk this path; see for example last year's OSCON session on the State of MQ<http://lanyrd.com/2012/oscon/swrcz/>.)

I would appreciate it if you could share measurements, results, or even anecdotal evidence along these lines. How have you avoided the "let's use Rabbit MQ because everybody else does it" route when solving problems for which Kafka is a better fit?

Thanks,

-Dragos

using FQCN for interceptors fails

Hello Everyone,

I've been trying to write my own custom interceptor, but ran into a problem
when using an FQCN for the interceptor type. The error happens even with the
built-in interceptor types when an FQCN is used. Here's what the trace looks
like:

2013-06-06 14:47:18,025 (conf-file-poller-0) [ERROR - org.apache.flume.channel.ChannelProcessor.configureInterceptors(ChannelProcessor.java:116)] Could not instantiate Builder. Exception follows.
java.lang.InstantiationException: org.apache.flume.interceptor.StaticInterceptor
    at java.lang.Class.newInstance0(Class.java:359)
    at java.lang.Class.newInstance(Class.java:327)
    at org.apache.flume.interceptor.InterceptorBuilderFactory.newInstance(InterceptorBuilderFactory.java:48)
    at org.apache.flume.channel.ChannelProcessor.configureInterceptors(ChannelProcessor.java:109)
    at org.apache.flume.channel.ChannelProcessor.configure(ChannelProcessor.java:80)
    at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
    at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadSources(PropertiesFileConfigurationProvider.java:337)
    at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.load(PropertiesFileConfigurationProvider.java:222)
    at org.apache.flume.conf.file.AbstractFileConfigurationProvider.doLoad(AbstractFileConfigurationProvider.java:123)
    at org.apache.flume.conf.file.AbstractFileConfigurationProvider.access$300(AbstractFileConfigurationProvider.java:38)
    at org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:202)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:679)
2013-06-06 14:47:18,027 (conf-file-poller-0) [ERROR - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:204)] Failed to load configuration data. Exception follows.
org.apache.flume.FlumeException: Interceptor.Builder not constructable.
    at org.apache.flume.channel.ChannelProcessor.configureInterceptors(ChannelProcessor.java:117)
    at org.apache.flume.channel.ChannelProcessor.configure(ChannelProcessor.java:80)
    at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
    at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadSources(PropertiesFileConfigurationProvider.java:337)
    at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.load(PropertiesFileConfigurationProvider.java:222)
    at org.apache.flume.conf.file.AbstractFileConfigurationProvider.doLoad(AbstractFileConfigurationProvider.java:123)
    at org.apache.flume.conf.file.AbstractFileConfigurationProvider.access$300(AbstractFileConfigurationProvider.java:38)
    at org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:202)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:679)
Caused by: java.lang.InstantiationException: org.apache.flume.interceptor.StaticInterceptor
    at java.lang.Class.newInstance0(Class.java:359)
    at java.lang.Class.newInstance(Class.java:327)
    at org.apache.flume.interceptor.InterceptorBuilderFactory.newInstance(InterceptorBuilderFactory.java:48)
    at org.apache.flume.channel.ChannelProcessor.configureInterceptors(ChannelProcessor.java:109)
    ... 15 more
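
For what it's worth, the InstantiationException names the interceptor class itself rather than a Builder, so one guess (an assumption, not a verified fix) is that the FQCN form has to point at the nested Builder class instead of the interceptor:

# flume .properties sketch; agent/source/interceptor names are hypothetical
agent.sources.src1.interceptors = i1
# alias form that works: agent.sources.src1.interceptors.i1.type = static
# FQCN form, pointing at the nested Builder:
agent.sources.src1.interceptors.i1.type = org.apache.flume.interceptor.StaticInterceptor$Builder
agent.sources.src1.interceptors.i1.key = datacenter
agent.sources.src1.interceptors.i1.value = NYC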

Thanks,
Allan

WSDL2Java from java question

Hello,
I use CXF 2.6.7 with SOAP web services. CXF is really fun.
But I have a problem with WSDL2Java.
I have written an Eclipse plugin in which I run WSDLToJava. When it runs, the
Eclipse console shows the command 'wsdl2java…' with all my arguments.
If I copy this line and submit it from a shell or bat file (using bin/wsdl2java),
it runs fine and the classes are generated.
But when I invoke the run method on the WSDLToJava instance, run(new
ToolContext()) (after setting the arguments), I run into the problem described below.

The generated command is

wsdl2java -d
/Users/michel/Developpement/Workspaces/runtime-EclipseApplication/TestPlugin/Sources
-classdir /var/folders/rL/rL8ReMwHHuqnFtxKKTIuQE+++TI/-Tmp-/
-p http://www.ws.test.com/TESTSRVV1/=com.test.ws.www
-p http://model.ws.test.com/TESTSRVV1/=com.test.ws.model
-impl
-validate
-exsh false
-dns true
-verbose
-dex true
-defaultValues
-fe jaxws21
-db jaxb
-encoding UTF8
-wsdlLocation /ws/wsdl/TESTSRV1.wsdl
-wv 1.1
file:/Users/michel/Developpement/Workspaces/TESTSRV1_Project/Properties/ws/wsdl/TESTSRV1.wsdl

As I said above, this command runs fine in batch mode.

I have run the Eclipse plugin in debug mode, and I think the problem is in
WSDL11Validator, in the method getDefaultSchemas. When the tool runs fine, all
the schemas included in cxf-2.6.7.jar (schemas.wsdl), located via the classpath,
are added to the list. With the plugin, these schemas are not found…

I don't understand why, because the plugin's manifest declares the following
classpath (all the lib/* entries are valid):
Bundle-ClassPath: .,
lib/wsdl4j-1.6.2.jar,
lib/commons-collections-3.2.1.jar,
lib/commons-lang-2.6.jar,
lib/commons-logging-1.1.1.jar,
lib/cxf-2.6.7.jar,
lib/cxf-manifest.jar,
lib/cxf-services-sts-core-2.6.7.jar,
lib/cxf-services-wsn-api-2.6.7.jar,
lib/cxf-services-wsn-core-2.6.7.jar,
lib/cxf-xjc-boolean-2.6.1.jar,
lib/cxf-xjc-bug671-2.6.1.jar,
lib/cxf-xjc-dv-2.6.1.jar,
lib/cxf-xjc-runtime-2.6.1.jar,
lib/cxf-xjc-ts-2.6.1.jar,
lib/jaxb-api-2.2.5.jar,
lib/jaxb-impl-2.2.5.1.jar,
lib/jaxb-xjc-2.2.5.1.jar,
lib/neethi-3.0.2.jar,
lib/serializer-2.7.1.jar,
lib/stax2-api-3.1.1.jar,
lib/velocity-1.7.jar,
lib/woodstox-core-asl-4.2.0.jar,
lib/xmlschema-core-2.0.3.jar

Any ideas?
Thanks.

Dilemma - ZK consumer woes - upgrade to 0.8?

[ Sorry if this mail is duplicated, this is my fourth try sending this
message]

Hey guys,

I sincerely apologize if this has been covered before, I haven't quite
found a similar situation.

We are using Kafka 0.7.2 in production, and we are using the ZK high level
Scala consumer. However, we find the ZK consumer very unstable. It would
work for one or two weeks, then suddenly it would complain about ZK nodes
disappearing, and one consumer would die, then another, then another, until
our pipeline is no longer pulling any data. There are multiple
NullPointerExceptions, and other problems. We can restart it, but it
does not stay up predictably.

On the other hand, I have a simple app which I wrote using the simple
consumer to mirror select partitions (will blog about this later) and it
just works flawlessly.

So we are faced with a dilemma to get back on track:
1) Use SimpleConsumer, and write our own balancing code (but honestly our
boxes almost never go down, compared to the rate of ZK mishaps)
2) Upgrade to Kafka 0.8 and hope that that resolves the issue.

There seem to be so many improvements in 0.8 that it looks like the biggest
long-term win, so I am wondering if people can comment on:
- Has anyone tried using 0.8 in production? Is it stable yet?
- How much more stable is the ZK consumer in 0.8?
- Will it be possible to change the offset in the 0.8 consumer? That was
the other reason why we wanted to move to SimpleConsumer.

thanks,
Evan

Determine at runtime if the application is running on mobile

Hello,

I'd like to know at runtime whether the current application is running in
web mode (in the browser) or as a mobile app.
Is there some kind of property that I can use?

I have looked at the Capabilities class, but I'm not sure it helps. I see
that there is a "playerType" setting, but it doesn't seem to
differentiate AIR from AIR on mobile.

Thanks in advance,
Cristian.

Client timeout settings ignored when runtime endpoint protocol differs from wsdl location

I'm trying to write some code so that the endpoint for my WSDL-first SOAP service can be set at runtime. It mostly works, but the timeouts sometimes aren't honored. I've determined they get ignored when the protocol of the endpoint specified in the WSDL (e.g. "http://localhost/endpoint") differs from that of the endpoint set at runtime (e.g. "https://remote/endpoint").

Given this wsdl fragment:
<wsdl:service name="MyService">
<wsdl:port binding="[...]" name="[...]">
<soap:address location="http://localhost/endpoint"/>
</wsdl:port>
</wsdl:service>

And this simplified code:

private MyInterface createRemoteService(String endpointUrl, long connectionTimeout, long receiveTimeout) {

    MyService service = new MyService();
    MyInterface provider = service.getMyInterface();

    HTTPClientPolicy httpClientPolicy = new HTTPClientPolicy();
    httpClientPolicy.setConnectionTimeout(connectionTimeout);
    httpClientPolicy.setReceiveTimeout(receiveTimeout);

    Client client = ClientProxy.getClient(provider);
    HTTPConduit httpConduit = (HTTPConduit) client.getConduit();
    httpConduit.setClient(httpClientPolicy);

    client.getRequestContext().put(Message.ENDPOINT_ADDRESS, endpointUrl);

    return provider;
}

I find the following:

"wsdl location is http" + "runtime endpoint is http" = timeouts work
"wsdl location is https" + "runtime endpoint is https" = timeouts work
"wsdl location is https" + "runtime endpoint is http" = timeouts ignored and defaults are used
"wsdl location is http" + "runtime endpoint is https" = timeouts ignored and defaults are used

I tried an alternate method of setting the runtime endpoint:

BindingProvider bindingProvider = (BindingProvider)provider;
bindingProvider.getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, endpointUrl);

But that didn't make a difference.

My best idea for working around this is to create two versions of the WSDL, one with http and one with https, check the protocol of the runtime endpointUrl, and pass the matching WSDL location to one of the alternate MyService() constructors that takes a WSDL location URL.
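
A sketch of that two-WSDL workaround (HTTP_WSDL_URL and HTTPS_WSDL_URL are hypothetical placeholders for wherever the two copies live):

private MyService createServiceFor(String endpointUrl) throws java.net.MalformedURLException {
    // pick the WSDL copy whose soap:address protocol matches the runtime endpoint
    String wsdlLocation = endpointUrl.startsWith("https:") ? HTTPS_WSDL_URL : HTTP_WSDL_URL;
    return new MyService(new java.net.URL(wsdlLocation)); // wsdl2java-generated constructor
}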

But, it'd be great if there were some way to just make it work without that. Any idea why it doesn't work now?

Thanks,
-Troy

Reader / Writer terminology

I'm curious how the "Reader" and "Writer" terminology came about, and,
most importantly, whether it's as confusing to the rest of you as it is to
me?

As I understand it, the principal analogy here is from the RPC world - a
process A writes some Avro to process B, in which case A is the writer and
B is the reader.

And there is the possibility that the schema which B may be expecting
isn't what A is providing, thus B may have to do some conversion on its
end to grok it, and Avro schema resolution rules may make this possible.

So far so good. This is where it becomes confusing. I am lost on how the
act of reading or writing is relevant to the task at hand, which is
conversion of a value from one schema to another.

As I read material on the lists and in the docs, I couldn't help noticing words
such as "original", "first", "second", "actual", and "expected" being used
alongside "reader" and "writer" as clarification.

What would be wrong with "source" and "destination" schemas?

Consider the following line (from Avro-C):

writer_iface = avro_resolved_writer_new(writer_schema, reader_schema);

Here "writer" in resolved_writer and writer_schema are unrelated. The
former refers to the fact that this interface will be modifying (writing
to) an object, the latter is referring to the writer (source, original,
a.k.a actual) schema.

Wouldn't this read better as:

writer_iface = avro_resolved_writer_new(source_schema, dest_schema);

Anyway - I just want to know if I'm missing something obvious when I think
that reader/writer is confusing.

Thanks,

Grisha

Can I display a WebPage in a PDF (how to get the rendered markup)?

Hi,

I would like to render a WebPage with Flying Saucer (a PDF generator).
I've created a resource reference and a ByteArrayResource.

But now I need the rendered markup of the page (e.g. HomePage).

So my question is: how can I render the page's markup in my Resource?

Thanks for your support
Per

problems with .gz

I'm using pig 0.11.2.

I had been processing ASCII files of json with schema: (key:chararray,
columns:bag {column:tuple (timeUUID:chararray, value:chararray,
timestamp:long)})
For what it's worth, this is cassandra data, at a fairly low level.

But, this was getting big, so I compressed it all with gzip (my "ETL"
process is already chunking the data into 1GB parts, making the .gz files
~100MB).

As a sanity check, I decided to do a quick check of pre/post, and the
numbers aren't matching. Then I've done a lot of messing around trying to
figure out why and I'm getting more and more puzzled.

My "quick check" was to get an overall count. It looked like (assuming A
is a LOAD given the schema above):

allGrp = GROUP A ALL;
aCount = FOREACH allGrp GENERATE group, COUNT(A);
DUMP aCount;

Basically the original data returned a number GREATER than the compressed
data number (not by a lot, but still...).

Then I uncompressed all of the compressed files, and did a size check of
original vs. uncompressed. They were the same. Then I "quick checked" the
uncompressed, and the count of that was == original! So, the way in which
pig processes the gzip'ed data is actually somehow different.

Then I tried to see if there are nulls floating around, so I loaded "orig"
and "comp" and tried to catch the "missing keys" with outer joins:

joined = JOIN orig by key LEFT OUTER, comp BY key;
filtered = FILTER joined BY (comp::key is null);

And filtered was empty! I then tried the reverse (which makes no sense I
know, as this was the smaller set), and filtered is still empty!

All of these loads are through a custom UDF that extends LoadFunc. But,
there isn't much to that UDF (and it's been in use for many months now).
Basically, the "raw" data is JSON (from cassandra's sstable2json program).
And I parse the json and turn it into the pig structure of the schema
noted above.

Does anything make sense here?

Thanks!

will

File Component: Default rename behaviour

We have a scenario where we are consuming a large number of files, using the
file component and passing off the processing to seda routes.
The component is moving the file (preMove) and moving to a completed
directory once done.

The throughput is particularly poor, and we think it may be because of the
rename behaviour.

Changing the log level, we see the rename process falling back to a
copyAndDelete implementation. The standard file rename always fails;
presumably this is because an atomic rename cannot cross filesystems. The
1000ms sleep time is the reason the throughput is bad.

We're running on Linux, copying between 2 different NFS mount points.

Has anyone seen this in the past? Apart from patching the Camel class, does
anyone have any suggestions for how to overcome this? We would like to avoid
writing a custom component with our own copy implementation.

http://svn.apache.org/viewvc/camel/trunk/camel-core/src/main/java/org/apache/camel/util/FileUtil.java?view=markup

Producer Template Failure on Glassfish AIX

I am having problems deploying Camel 2.10.2 on Glassfish 3.1.2.2 (build 4) on AIX
with the IBM 1.6 JDK (64-bit, build pap6460sr13fp2-20130424_01, SR13 FP2).

The exception, noted at the bottom, happens when it tries to load the Camel
Context that utilizes a Producer Template. This works perfectly on both
Windows and Ubuntu implementations.

I've looked on IBM's website, and there was an issue in the JDK with bean
introspection. I've patched the JDK with SR13 FP2, but I still get the same
problem. Here is the link for the supposed fix:
http://www-01.ibm.com/support/docview.wss?uid=swg1IZ90916

Here is my camel context that declares it:

{code}
<camel:camelContext id="ngms-context-filebatch-producer"
xmlns="http://camel.apache.org/schema/spring">

<camel:template id="fbtemplate"
defaultEndpoint="activemq:ngmsBatchProcessing" />
</camel:camelContext>

<bean id="fbprocessor"
class="com.nextgate.ms.component.adapter.filebatch.FileBatchProcessor">
<property name="producer" ref="fbtemplate" />
</bean>

<camel:camelContext id="ngms-context-filebatch-in"
xmlns="http://camel.apache.org/schema/spring" >

<camel:routeContextRef ref="ngms-routecontext-hl7v2" />
<camel:routeContextRef ref="ngms-routecontext-audits" />

<camel:endpoint id="fbendpoint"

uri="file://{{nextgate.ms.batchfile.basedir}}/inbox?delay=5000橪;include=.*\.{{nextgate.ms.batchfile.pollext}}橪;sortBy=file:modified橪;preMove=../inprocess/橪;move=../done/"
/>

<camel:route id="ngms-route-filebatch-in">

<camel:from ref="fbendpoint" />
<camel:process ref="fbprocessor" />

<camel:to uri="mock:result" />
</camel:route>

<camel:route id="ngms-route-filebatch-fbqueue"
routePolicyRef="fbThrottlePolicy">

<camel:from uri="activemq:ngmsBatchProcessing" />

<choice>
<when><simple>${header.ngmsMessageTypeTrigger} contains 'HL7v2'</simple>

<unmarshal ref="ngmsHL7v2" />

<process ref="ngmsPerfTrackReceive" />
<process ref="ngmsJournalProcessor" />

<to uri="direct:ngmsHL7v2Processing" />
</when>
</choice>
</camel:route>

</camel:camelContext>
{code}

Here is my processor implementation:

{code}
public class FileBatchProcessor implements Processor {

    org.apache.camel.ProducerTemplate producer;

    public void process(Exchange ex) throws Exception {
        ... do stuff and call sendRecord()
    }

    private void sendRecord(Filetype filetype, final String rec,
            StringBuffer headerRecs) throws Exception {

        if (LOG.isTraceEnabled()) LOG.trace(" + sending record:\r\n{}",
                rec.replace("\r", "\r\n"));

        final String messageTypeTrigger = getTypeTrigger(filetype);
        final String fileHeaderControlId = this.getFileHeaderControlId();
        final String filename = this.getFileName();

        producer.send(producer.getDefaultEndpoint(), ExchangePattern.InOnly,
                new Processor() {
                    public void process(Exchange outExchange) {
                        outExchange.getIn().setHeader(NGMSConstants.MESSAGETYPE_TRIGGER, messageTypeTrigger);
                        outExchange.getIn().setHeader(NGMSConstants.BATCH_FILEHEADER_CONTROL, fileHeaderControlId);
                        outExchange.getIn().setHeader(Exchange.FILE_NAME, filename);
                        outExchange.getIn().setBody(rec);
                    }
                });
    }

    public void setProducer(org.apache.camel.ProducerTemplate producer) {
        this.producer = producer;
    }

    public org.apache.camel.ProducerTemplate getProducer() {
        return producer;
    }
}

{code}

Any and all help is greatly appreciated!

I am receiving the following error when deploying:
{code}
[#|2013-06-05T16:11:17.722-0400|SEVERE|oracle-glassfish3.1.2|org.apache.catalina.core.ContainerBase|_ThreadID=10;_ThreadName=Thread-11;|ContainerBase.addChild: start:
org.apache.catalina.LifecycleException: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'fbtemplate': Initialization of bean failed; nested exception is org.springframework.beans.FatalBeanException: Failed to obtain BeanInfo for class [org.apache.camel.spring.CamelProducerTemplateFactoryBean]; nested exception is java.beans.IntrospectionException: Parameter type in getter method does not corresponds to predefined.
    at org.apache.catalina.core.StandardContext.start(StandardContext.java:5389)
    at com.sun.enterprise.web.WebModule.start(WebModule.java:498)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:917)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:901)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:733)
    at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:2019)
    at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1669)
    at com.sun.enterprise.web.WebApplication.start(WebApplication.java:109)
    at org.glassfish.internal.data.EngineRef.start(EngineRef.java:130)
    at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:269)
    at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:301)
    at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:461)
    at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:240)
    at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:389)
    at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:348)
    at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:363)
    at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1085)
    at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1200(CommandRunnerImpl.java:95)
    at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1291)
    at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1259)
    at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:214)
    at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:207)
    at org.glassfish.admin.rest.resources.TemplateListOfResource.createResource(TemplateListOfResource.java:148)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:600)
    at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
    at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
    at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
    at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
    at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134)
    at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
    at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134)
    at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
    at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
    at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
    at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
    at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
    at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
    at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
    at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer._service(GrizzlyContainer.java:182)
    at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer.service(GrizzlyContainer.java:147)
    at org.glassfish.admin.rest.adapter.RestAdapter.service(RestAdapter.java:148)
    at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:179)
    at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:117)
    at com.sun.enterprise.v3.services.impl.ContainerMapper$Hk2DispatcherCallable.call(ContainerMapper.java:354)
    at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195)
    at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860)
    at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757)
    at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1056)
    at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229)
    at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
    at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
    at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
    at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
    at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
    at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
    at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
    at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
    at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
    at java.lang.Thread.run(Thread.java:738)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'fbtemplate': Initialization of bean failed; nested exception is org.springframework.beans.FatalBeanException: Failed to obtain BeanInfo for class [org.apache.camel.spring.CamelProducerTemplateFactoryBean]; nested exception is java.beans.IntrospectionException: Parameter type in getter method does not corresponds to predefined.
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:527)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
    at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:587)
    at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:925)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:472)
    at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:383)
    at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:283)
    at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:111)
    at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:4750)
    at com.sun.enterprise.web.WebModule.contextListenerStart(WebModule.java:550)
    at org.apache.catalina.core.StandardContext.start(StandardContext.java:5366)
    ... 62 more
Caused by: org.springframework.beans.FatalBeanException: Failed to obtain BeanInfo for class [org.apache.camel.spring.CamelProducerTemplateFactoryBean]; nested exception is java.beans.IntrospectionException: Parameter type in getter method does not corresponds to predefined.
    at org.springframework.beans.CachedIntrospectionResults.<init>(CachedIntrospectionResults.java:262)
    at org.springframework.beans.CachedIntrospectionResults.forClass(CachedIntrospectionResults.java:149)
    at org.springframework.beans.BeanWrapperImpl.getCachedIntrospectionResults(BeanWrapperImpl.java:324)
    at org.springframework.beans.BeanWrapperImpl.getPropertyDescriptorInternal(BeanWrapperImpl.java:354)
    at org.springframework.beans.BeanWrapperImpl.isWritableProperty(BeanWrapperImpl.java:430)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1362)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1118)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:517)
    ... 76 more
Caused by: java.beans.IntrospectionException: Parameter type in getter method does not corresponds to predefined.
    at java.beans.PropertyDescriptor.setReadMethod(PropertyDescriptor.java:140)
    at org.springframework.beans.ExtendedBeanInfo.addOrUpdatePropertyDescriptor(ExtendedBeanInfo.java:305)
    at org.springframework.beans.ExtendedBeanInfo.addOrUpdatePropertyDescriptor(ExtendedBeanInfo.java:202)
    at org.springframework.beans.ExtendedBeanInfo.<init>(ExtendedBeanInfo.java:172)
    at org.springframework.beans.CachedIntrospectionResults.<init>(CachedIntrospectionResults.java:224)
    ... 83 more
|#]

{code}

How to know what language is on

I am trying to test in what language the current page is being rendered, with the following:

public boolean getEnglishVersionOfDescription() {
    return "en".equalsIgnoreCase(persistentLocale.get().getLanguage());
}

In the .tml template:

<t:loop source="propositioning" value="proposition">
    <t:if test="EnglishVersionOfDescription">
        ${proposition.getEnglishProposition}
    </t:if>
</t:loop>

My propositions are pulled from the DB with the following:

public List getPropositioning() {
    return session.createCriteria(Proposition.class)
            .add(Restrictions.eq("article.id", this.getId()))
            .list();
}

However, I keep getting: Render queue error in BeginRender[Testing:if]: Failure reading
parameter 'test' of component Testing:if:
org.apache.tapestry5.ioc.internal.util.TapestryException, and I have no clue what it is complaining about.

For artifact {org.jvnet.staxex:stax-ex:null:jar}: The version cannot be empty.

All,

I am running eclipse:eclipse on a project and getting the following error:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-eclipse-plugin:2.9:eclipse (default-cli) on project installer: Execution default-cli of goal org.apache.maven.plugins:maven-eclipse-plugin:2.9:eclipse failed: For artifact {org.jvnet.staxex:stax-ex:null:jar}: The version cannot be empty. -> [Help 1]

After some debugging, I noticed the following warning:

[WARNING] Invalid POM for com.sun.xml.stream.buffer:streambuffer:jar:0.4, transitive dependencies (if any) will not be available, enable debug logging for more details: Some problems were encountered while processing the POMs:
[ERROR] 'dependencies.dependency.version' for org.jvnet.staxex:stax-ex:jar is missing. @ line 7, column 17
[ERROR] 'dependencies.dependency.version' for activation:activation:jar is missing. @ line 11, column 17

I pulled up the POM file for com.sun.xml.stream.buffer:streambuffer:jar:0.4, and it is in fact missing versions for its dependencies.
I figured I would explicitly include a newer version of streambuffer.jar (which has a correct POM) in my own POM, but for some reason Maven is still choosing the transitive version of the dependency over my explicitly declared one.
Removing it:
com.sun.xml.stream.buffer:streambuffer:jar:0.7:compile (removed - nearer found: 0.4)

This project is a child project of another project.

TWO QUESTIONS:

In what cases will Maven choose a transitive dependency version over an explicitly declared one in the POM?
Is there a way to make Maven less sensitive to missing versions in POMs?

I am currently running:
Apache Maven 3.0.5 (r01de14724cdef164cd33c7c8c2fe155faf9602da; 2013-02-19 13:51:28+0000)
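
Regarding the first question: a sketch of the standard way to pin a transitive version regardless of what mediation picks is dependencyManagement, which takes precedence over transitively resolved versions (the version numbers below are placeholders, not recommendations):

<!-- in the project (or parent) pom.xml -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.jvnet.staxex</groupId>
            <artifactId>stax-ex</artifactId>
            <version>1.2</version>
        </dependency>
        <dependency>
            <groupId>com.sun.xml.stream.buffer</groupId>
            <artifactId>streambuffer</artifactId>
            <version>0.7</version>
        </dependency>
    </dependencies>
</dependencyManagement>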