I’m running Hadoop 1.3.2 and Pig 0.11.1.1.3.2.0-110, and I’m getting the
exception below. I’ve also tried Pig 0.12.1, downloaded directly from
Apache, and I get the same error.
Any help with this would be appreciated.
Backend error message:

java.lang.ArrayIndexOutOfBoundsException: 2
    at org.apache.pig.piggybank.storage.HadoopJobHistoryLoader$HadoopJobHistoryReader.nextKeyValue(HadoopJobHistoryLoader.java:184)
    at org.apache.pig.piggybank.storage.HadoopJobHistoryLoader.getNext(HadoopJobHistoryLoader.java:81)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:530)
    at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:363)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
The script is below:

REGISTER /usr/lib/pig/piggybank.jar;
a = LOAD '/mapred/history/done'
    USING org.apache.pig.piggybank.storage.HadoopJobHistoryLoader()
    AS (j:map[], m:map[], r:map[]);
b = GROUP a BY j#'JOBNAME' PARALLEL 5;
STORE b INTO '/user/maprd/processed';