org.apache.hadoop.hive.ql.io.parquet.read

Class ParquetRecordReaderWrapper

java.lang.Object
    org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper

All Implemented Interfaces:
    org.apache.hadoop.mapred.RecordReader<Void,org.apache.hadoop.io.ArrayWritable>

public class ParquetRecordReaderWrapper
extends Object
implements org.apache.hadoop.mapred.RecordReader<Void,org.apache.hadoop.io.ArrayWritable>
Field Summary

static org.apache.commons.logging.Log  LOG
Constructor Summary

ParquetRecordReaderWrapper(parquet.hadoop.ParquetInputFormat<org.apache.hadoop.io.ArrayWritable> newInputFormat,
                           org.apache.hadoop.mapred.InputSplit oldSplit,
                           org.apache.hadoop.mapred.JobConf oldJobConf,
                           org.apache.hadoop.mapred.Reporter reporter)

ParquetRecordReaderWrapper(parquet.hadoop.ParquetInputFormat<org.apache.hadoop.io.ArrayWritable> newInputFormat,
                           org.apache.hadoop.mapred.InputSplit oldSplit,
                           org.apache.hadoop.mapred.JobConf oldJobConf,
                           org.apache.hadoop.mapred.Reporter reporter,
                           ProjectionPusher pusher)
Method Summary

void close()

Void createKey()

org.apache.hadoop.io.ArrayWritable createValue()

long getPos()

float getProgress()

protected parquet.hadoop.ParquetInputSplit getSplit(org.apache.hadoop.mapred.InputSplit oldSplit,
                                                    org.apache.hadoop.mapred.JobConf conf)
    Gets a ParquetInputSplit corresponding to a split given by Hive.

boolean next(Void key,
             org.apache.hadoop.io.ArrayWritable value)

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
LOG
public static final org.apache.commons.logging.Log LOG
ParquetRecordReaderWrapper
public ParquetRecordReaderWrapper(parquet.hadoop.ParquetInputFormat<org.apache.hadoop.io.ArrayWritable> newInputFormat,
org.apache.hadoop.mapred.InputSplit oldSplit,
org.apache.hadoop.mapred.JobConf oldJobConf,
org.apache.hadoop.mapred.Reporter reporter)
throws IOException,
InterruptedException
- Throws:
IOException
InterruptedException
ParquetRecordReaderWrapper
public ParquetRecordReaderWrapper(parquet.hadoop.ParquetInputFormat<org.apache.hadoop.io.ArrayWritable> newInputFormat,
org.apache.hadoop.mapred.InputSplit oldSplit,
org.apache.hadoop.mapred.JobConf oldJobConf,
org.apache.hadoop.mapred.Reporter reporter,
ProjectionPusher pusher)
throws IOException,
InterruptedException
- Throws:
IOException
InterruptedException
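As an illustration only, constructing the wrapper by hand from old-API (mapred) objects might look like the sketch below. The `DataWritableReadSupport` read-support class and the surrounding variables (`oldSplit`, `jobConf`, `reporter`) are assumptions for the example, not part of this page; in practice Hive's own Parquet input format performs this wiring internally.

```java
// Hedged sketch: manual construction of the wrapper.
// Assumes oldSplit, jobConf, and reporter already exist in scope.
parquet.hadoop.ParquetInputFormat<org.apache.hadoop.io.ArrayWritable> inputFormat =
    new parquet.hadoop.ParquetInputFormat<org.apache.hadoop.io.ArrayWritable>(
        // Assumed read support implementation for Hive's ArrayWritable records.
        org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.class);

// Both constructors may throw IOException or InterruptedException.
ParquetRecordReaderWrapper reader =
    new ParquetRecordReaderWrapper(inputFormat, oldSplit, jobConf, reporter);
```

The second constructor additionally accepts a ProjectionPusher, presumably to push column projections down into the Parquet read; callers that do not need a custom pusher can use the four-argument form shown above.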
close
public void close()
throws IOException
- Specified by:
close in interface org.apache.hadoop.mapred.RecordReader<Void,org.apache.hadoop.io.ArrayWritable>
- Throws:
IOException
createKey
public Void createKey()
- Specified by:
createKey in interface org.apache.hadoop.mapred.RecordReader<Void,org.apache.hadoop.io.ArrayWritable>
createValue
public org.apache.hadoop.io.ArrayWritable createValue()
- Specified by:
createValue in interface org.apache.hadoop.mapred.RecordReader<Void,org.apache.hadoop.io.ArrayWritable>
getPos
public long getPos()
throws IOException
- Specified by:
getPos in interface org.apache.hadoop.mapred.RecordReader<Void,org.apache.hadoop.io.ArrayWritable>
- Throws:
IOException
getProgress
public float getProgress()
throws IOException
- Specified by:
getProgress in interface org.apache.hadoop.mapred.RecordReader<Void,org.apache.hadoop.io.ArrayWritable>
- Throws:
IOException
next
public boolean next(Void key,
org.apache.hadoop.io.ArrayWritable value)
throws IOException
- Specified by:
next in interface org.apache.hadoop.mapred.RecordReader<Void,org.apache.hadoop.io.ArrayWritable>
- Throws:
IOException
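Taken together, createKey/createValue/next/close follow the standard old-API RecordReader contract. A minimal read loop, assuming `reader` is an already-constructed ParquetRecordReaderWrapper, might look like this sketch:

```java
// Hedged sketch of the classic mapred RecordReader loop.
// The key type is Void, so createKey() can only ever yield null.
Void key = reader.createKey();
org.apache.hadoop.io.ArrayWritable value = reader.createValue();
try {
    while (reader.next(key, value)) {
        // value now holds the Writable fields of one Parquet record;
        // getProgress() reports a float in [0, 1].
        float progress = reader.getProgress();
    }
} finally {
    reader.close(); // releases the underlying Parquet reader
}
```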
getSplit
protected parquet.hadoop.ParquetInputSplit getSplit(org.apache.hadoop.mapred.InputSplit oldSplit,
org.apache.hadoop.mapred.JobConf conf)
throws IOException
- Gets a ParquetInputSplit corresponding to a split given by Hive.
- Parameters:
oldSplit - The split given by Hive
conf - The JobConf of the Hive job
- Returns:
a ParquetInputSplit corresponding to the oldSplit
- Throws:
IOException - if the config cannot be enhanced or if the footer cannot be read from the file
Copyright © 2014 The Apache Software Foundation. All rights reserved.