org.apache.hadoop.hive.ql.io.parquet.read
Class DataWritableReadSupport
java.lang.Object
parquet.hadoop.api.ReadSupport<org.apache.hadoop.io.ArrayWritable>
org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport
public class DataWritableReadSupport
- extends parquet.hadoop.api.ReadSupport<org.apache.hadoop.io.ArrayWritable>
A ReadSupport implementation for ArrayWritable that manages the translation between Hive and Parquet.
Nested classes/interfaces inherited from class parquet.hadoop.api.ReadSupport:
parquet.hadoop.api.ReadSupport.ReadContext

Fields inherited from class parquet.hadoop.api.ReadSupport:
PARQUET_READ_SCHEMA
Method Summary

parquet.hadoop.api.ReadSupport.ReadContext
init(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, parquet.schema.MessageType fileSchema)
Creates the Parquet-side ReadContext with the requested schema during the init phase.

parquet.io.api.RecordMaterializer<org.apache.hadoop.io.ArrayWritable>
prepareForRead(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, parquet.schema.MessageType fileSchema, parquet.hadoop.api.ReadSupport.ReadContext readContext)
Creates the Hive read support that interprets Parquet data as Hive records.
Methods inherited from class parquet.hadoop.api.ReadSupport:
getSchemaForRead, getSchemaForRead, init

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
HIVE_SCHEMA_KEY
public static final String HIVE_SCHEMA_KEY
- See Also:
- Constant Field Values
DataWritableReadSupport
public DataWritableReadSupport()
init
public parquet.hadoop.api.ReadSupport.ReadContext init(org.apache.hadoop.conf.Configuration configuration,
Map<String,String> keyValueMetaData,
parquet.schema.MessageType fileSchema)
- Creates the Parquet-side ReadContext with the requested schema during the init phase.
- Overrides:
init in class parquet.hadoop.api.ReadSupport<org.apache.hadoop.io.ArrayWritable>
- Parameters:
configuration - needed to get the wanted columns
keyValueMetaData - unused
fileSchema - the Parquet file schema
- Returns:
- the parquet ReadContext
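Conceptually, the init phase performs column projection: from the full file schema it keeps only the columns the Hive query requested, and that projected schema becomes the requested schema in the returned ReadContext. The sketch below illustrates the idea with plain Java lists standing in for Parquet's MessageType; the class and method names are illustrative assumptions, not the actual Hive implementation.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SchemaProjection {
    // Keep only the file-schema columns that the query requested,
    // preserving the file schema's column order. (Simplified: the real
    // code operates on parquet.schema.MessageType, not string lists.)
    static List<String> projectSchema(List<String> fileSchema, List<String> requested) {
        List<String> projected = new ArrayList<>();
        for (String column : fileSchema) {
            if (requested.contains(column)) {
                projected.add(column);
            }
        }
        return projected;
    }

    public static void main(String[] args) {
        List<String> fileSchema = Arrays.asList("id", "name", "price", "ts");
        List<String> wanted = Arrays.asList("name", "price");
        System.out.println(projectSchema(fileSchema, wanted)); // [name, price]
    }
}
```

Projecting at read time lets Parquet skip the column chunks the query never touches, which is the main benefit of passing the requested schema through the ReadContext.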
prepareForRead
public parquet.io.api.RecordMaterializer<org.apache.hadoop.io.ArrayWritable> prepareForRead(org.apache.hadoop.conf.Configuration configuration,
Map<String,String> keyValueMetaData,
parquet.schema.MessageType fileSchema,
parquet.hadoop.api.ReadSupport.ReadContext readContext)
- Creates the Hive read support that interprets Parquet data as Hive records.
- Specified by:
prepareForRead in class parquet.hadoop.api.ReadSupport<org.apache.hadoop.io.ArrayWritable>
- Parameters:
configuration - unused
keyValueMetaData -
fileSchema - unused
readContext - containing the requested schema and the schema of the Hive table
- Returns:
- the RecordMaterializer for Hive
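The returned materializer assembles each Parquet row into a Hive row laid out in the Hive table's column order; columns the file does not contain come back as null. A minimal sketch of that assembly step, using a Map and Object[] in place of Parquet's record converters and Hive's ArrayWritable (the class name and this simplification are assumptions for illustration):

```java
import java.util.List;
import java.util.Map;

public class RowMaterializer {
    // Build one output row in Hive-table column order. Columns absent
    // from the Parquet file yield null, mirroring how a reader must
    // tolerate schema differences between file and table.
    static Object[] materialize(List<String> tableColumns, Map<String, Object> parquetRow) {
        Object[] row = new Object[tableColumns.size()];
        for (int i = 0; i < tableColumns.size(); i++) {
            row[i] = parquetRow.get(tableColumns.get(i)); // null if the file lacks the column
        }
        return row;
    }
}
```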
Copyright © 2014 The Apache Software Foundation. All rights reserved.