| Packages that use Table | |
|---|---|
| org.apache.hadoop.hive.ql.exec | Hive QL execution tasks, operators, functions and other handlers. |
| org.apache.hadoop.hive.ql.hooks | |
| org.apache.hadoop.hive.ql.index | |
| org.apache.hadoop.hive.ql.lockmgr | Hive Lock Manager interfaces and some custom implementations |
| org.apache.hadoop.hive.ql.metadata | |
| org.apache.hadoop.hive.ql.metadata.formatting | |
| org.apache.hadoop.hive.ql.optimizer | |
| org.apache.hadoop.hive.ql.optimizer.metainfo.annotation | |
| org.apache.hadoop.hive.ql.optimizer.physical.index | |
| org.apache.hadoop.hive.ql.optimizer.ppr | |
| org.apache.hadoop.hive.ql.parse | |
| org.apache.hadoop.hive.ql.plan | |
| org.apache.hadoop.hive.ql.security.authorization | |
| org.apache.hadoop.hive.ql.stats | |
| Uses of Table in org.apache.hadoop.hive.ql.exec |
|---|
| Methods in org.apache.hadoop.hive.ql.exec with parameters of type Table | |
|---|---|
| static String | ArchiveUtils.conflictingArchiveNameOrNull(Hive db, Table tbl, LinkedHashMap<String,String> partSpec) Determines whether one can insert into the partition(s), or whether there is a conflict with an archive. |
| static ArchiveUtils.PartSpecInfo | ArchiveUtils.PartSpecInfo.create(Table tbl, Map<String,String> partSpec) Extracts a partial prefix specification from a table and a key-value map. |
| org.apache.hadoop.fs.Path | ArchiveUtils.PartSpecInfo.createPath(Table tbl) Creates the filesystem path under which partitions matching the prefix should lie. |
| static TableDesc | Utilities.getTableDesc(Table tbl) |
| Uses of Table in org.apache.hadoop.hive.ql.hooks |
|---|
| Methods in org.apache.hadoop.hive.ql.hooks that return Table | |
|---|---|
| Table | Entity.getT() |
| Table | Entity.getTable() Gets the table associated with the entity. |
| Methods in org.apache.hadoop.hive.ql.hooks with parameters of type Table | |
|---|---|
| void | Entity.setT(Table t) |
| Constructors in org.apache.hadoop.hive.ql.hooks with parameters of type Table | |
|---|---|
| Entity(Table t, boolean complete) | Constructor for a table. |
| ReadEntity(Table t) | Constructor. |
| ReadEntity(Table t, ReadEntity parent) | |
| ReadEntity(Table t, ReadEntity parent, boolean isDirect) | |
| WriteEntity(Table t, WriteEntity.WriteType type) | Constructor for a table. |
| WriteEntity(Table t, WriteEntity.WriteType type, boolean complete) | |
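These constructors are how Hive's compiler records query lineage: each table a statement touches is wrapped in a `ReadEntity` or `WriteEntity` and collected into the plan's input and output sets. A minimal sketch of that pattern follows; it assumes a `Table` already fetched from the metastore and Hive's jars on the classpath, so it is illustrative rather than runnable standalone:

```java
// Illustrative sketch (requires Hive on the classpath; tbl, inputs, and
// outputs are assumed to exist in the surrounding compiler code).
ReadEntity input = new ReadEntity(tbl);                                  // table read by the query
WriteEntity output = new WriteEntity(tbl, WriteEntity.WriteType.INSERT); // table written by an INSERT
inputs.add(input);    // Set<ReadEntity> inputs
outputs.add(output);  // Set<WriteEntity> outputs
```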
| Uses of Table in org.apache.hadoop.hive.ql.index |
|---|
| Methods in org.apache.hadoop.hive.ql.index with parameters of type Table | |
|---|---|
| List<Task<?>> | TableBasedIndexHandler.generateIndexBuildTaskList(Table baseTbl, Index index, List<Partition> indexTblPartitions, List<Partition> baseTblPartitions, Table indexTbl, Set<ReadEntity> inputs, Set<WriteEntity> outputs) |
| List<Task<?>> | HiveIndexHandler.generateIndexBuildTaskList(Table baseTbl, Index index, List<Partition> indexTblPartitions, List<Partition> baseTblPartitions, Table indexTbl, Set<ReadEntity> inputs, Set<WriteEntity> outputs) Requests that the handler generate a plan for building the index; the plan should read the base table and write out the index representation. |
| Uses of Table in org.apache.hadoop.hive.ql.lockmgr |
|---|
| Constructors in org.apache.hadoop.hive.ql.lockmgr with parameters of type Table | |
|---|---|
| HiveLockObject(Table tbl, HiveLockObject.HiveLockObjectData lockData) | |
| Uses of Table in org.apache.hadoop.hive.ql.metadata |
|---|
| Methods in org.apache.hadoop.hive.ql.metadata that return Table | |
|---|---|
| Table | Table.copy() |
| Table | Partition.getTable() |
| Table | Hive.getTable(String tableName) Returns metadata for the table named tableName. |
| Table | Hive.getTable(String tableName, boolean throwException) Returns metadata for the table named tableName. |
| Table | Hive.getTable(String dbName, String tableName) Returns metadata of the table. |
| Table | Hive.getTable(String dbName, String tableName, boolean throwException) Returns metadata of the table. |
| Table | Hive.newTable(String tableName) |
| Methods in org.apache.hadoop.hive.ql.metadata with parameters of type Table | |
|---|---|
| void | Hive.alterTable(String tblName, Table newTbl) Updates the existing table metadata with the new metadata. |
| static StorageDescriptor | Partition.cloneSd(Table tbl) We already have methods that clone stuff using XML or Kryo. |
| static Partition | Partition.createMetaPartitionObject(Table tbl, Map<String,String> partSpec, org.apache.hadoop.fs.Path location) |
| Partition | Hive.createPartition(Table tbl, Map<String,String> partSpec) Creates a partition. |
| void | Hive.createTable(Table tbl) Creates the table with the given objects. |
| void | Hive.createTable(Table tbl, boolean ifNotExists) Creates the table with the given objects. |
| Set<Partition> | Hive.getAllPartitionsOf(Table tbl) Gets all the partitions; unlike Hive.getPartitions(Table), does not include auth. |
| Partition | Hive.getPartition(Table tbl, Map<String,String> partSpec, boolean forceCreate) |
| Partition | Hive.getPartition(Table tbl, Map<String,String> partSpec, boolean forceCreate, String partPath, boolean inheritTableSpecs) Returns partition metadata. |
| List<Partition> | Hive.getPartitions(Table tbl) Gets all the partitions that the table has. |
| List<Partition> | Hive.getPartitions(Table tbl, Map<String,String> partialPartSpec) Gets all the partitions of the table that match the given partial specification. |
| List<Partition> | Hive.getPartitions(Table tbl, Map<String,String> partialPartSpec, short limit) Gets all the partitions of the table that match the given partial specification. |
| boolean | Hive.getPartitionsByExpr(Table tbl, ExprNodeGenericFuncDesc expr, HiveConf conf, List<Partition> result) Gets a list of Partitions by expr. |
| List<Partition> | Hive.getPartitionsByFilter(Table tbl, String filter) Gets a list of Partitions by filter. |
| List<Partition> | Hive.getPartitionsByNames(Table tbl, List<String> partNames) Gets all partitions of the table that match the list of given partition names. |
| List<Partition> | Hive.getPartitionsByNames(Table tbl, Map<String,String> partialPartSpec) Gets all the partitions of the table that match the given partial specification. |
| protected void | Partition.initialize(Table table, Partition tPartition) Initializes this object with the given variables. |
| void | Hive.renamePartition(Table tbl, Map<String,String> oldPartSpec, Partition newPart) Renames an old partition to a new partition. |
| void | Partition.setTable(Table table) Should only be used by serialization. |
| Constructors in org.apache.hadoop.hive.ql.metadata with parameters of type Table | |
|---|---|
| DummyPartition(Table tbl, String name) | |
| DummyPartition(Table tbl, String name, Map<String,String> partSpec) | |
| Partition(Table tbl) | Creates an empty partition. |
| Partition(Table tbl, Map<String,String> partSpec, org.apache.hadoop.fs.Path location) | Creates a partition object with the given info. |
| Partition(Table tbl, Partition tp) | |
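The Hive methods listed here form the usual metadata lookup path: obtain the client, resolve the table, then enumerate its partitions. A hedged sketch of that flow follows (the database and table names are hypothetical, and the code needs a configured Hive client and metastore to run, so it is illustrative rather than runnable standalone):

```java
// Illustrative sketch only: requires a Hive deployment and metastore.
HiveConf conf = new HiveConf();
Hive db = Hive.get(conf);                          // thread-local Hive client
Table tbl = db.getTable("default", "page_views"); // hypothetical db/table names
for (Partition p : db.getPartitions(tbl)) {       // all partitions of the table
  System.out.println(p.getTable().getTableName());
}
```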
| Uses of Table in org.apache.hadoop.hive.ql.metadata.formatting |
|---|
| Methods in org.apache.hadoop.hive.ql.metadata.formatting with parameters of type Table | |
|---|---|
| void | MetaDataFormatter.describeTable(DataOutputStream out, String colPath, String tableName, Table tbl, Partition part, List<FieldSchema> cols, boolean isFormatted, boolean isExt, boolean isPretty, boolean isOutputPadded) Describes a table. |
| void | JsonMetaDataFormatter.describeTable(DataOutputStream out, String colPath, String tableName, Table tbl, Partition part, List<FieldSchema> cols, boolean isFormatted, boolean isExt, boolean isPretty, boolean isOutputPadded) Describes a table. |
| static String | MetaDataFormatUtils.getTableInformation(Table table) |
| Method parameters in org.apache.hadoop.hive.ql.metadata.formatting with type arguments of type Table | |
|---|---|
| void | MetaDataFormatter.showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tbls, Map<String,String> part, Partition par) Shows the table status. |
| void | JsonMetaDataFormatter.showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tbls, Map<String,String> part, Partition par) |
| Uses of Table in org.apache.hadoop.hive.ql.optimizer |
|---|
| Methods in org.apache.hadoop.hive.ql.optimizer with parameters of type Table | |
|---|---|
| static List<Index> | IndexUtils.getIndexes(Table baseTableMetaData, List<String> matchIndexTypes) Gets a list of indexes on a table that match the given types. |
| protected long | SizeBasedBigTableSelectorForAutoSMJ.getSize(HiveConf conf, Table table) |
| Method parameters in org.apache.hadoop.hive.ql.optimizer with type arguments of type Table | |
|---|---|
| static Set<Partition> | IndexUtils.checkPartitionsCoveredByIndex(TableScanOperator tableScan, ParseContext pctx, Map<Table,List<Index>> indexes) Checks the partitions used by the table scan to make sure they also exist in the index table. |
| Uses of Table in org.apache.hadoop.hive.ql.optimizer.metainfo.annotation |
|---|
| Methods in org.apache.hadoop.hive.ql.optimizer.metainfo.annotation with parameters of type Table | |
|---|---|
| boolean | OpTraitsRulesProcFactory.TableScanRule.checkBucketedTable(Table tbl, ParseContext pGraphContext, PrunedPartitionList prunedParts) |
| Uses of Table in org.apache.hadoop.hive.ql.optimizer.physical.index |
|---|
| Constructor parameters in org.apache.hadoop.hive.ql.optimizer.physical.index with type arguments of type Table | |
|---|---|
| IndexWhereProcessor(Map<Table,List<Index>> indexes) | |
| Uses of Table in org.apache.hadoop.hive.ql.optimizer.ppr |
|---|
| Methods in org.apache.hadoop.hive.ql.optimizer.ppr with parameters of type Table | |
|---|---|
| static boolean | PartitionPruner.onlyContainsPartnCols(Table tab, ExprNodeDesc expr) Determines whether the condition contains only partition columns. |
| Uses of Table in org.apache.hadoop.hive.ql.parse |
|---|
| Fields in org.apache.hadoop.hive.ql.parse declared as Table | |
|---|---|
| Table | BaseSemanticAnalyzer.tableSpec.tableHandle |
| Methods in org.apache.hadoop.hive.ql.parse that return Table | |
|---|---|
| Table | QBMetaData.getDestTableForAlias(String alias) |
| Table | PrunedPartitionList.getSourceTable() |
| Table | QBMetaData.getSrcForAlias(String alias) |
| protected Table | BaseSemanticAnalyzer.getTable(String tblName) |
| protected Table | BaseSemanticAnalyzer.getTable(String tblName, boolean throwException) |
| protected Table | BaseSemanticAnalyzer.getTable(String database, String tblName, boolean throwException) |
| Table | QBMetaData.getTableForAlias(String alias) |
| protected Table | BaseSemanticAnalyzer.getTableWithQN(String qnName, boolean throwException) |
| Methods in org.apache.hadoop.hive.ql.parse that return types with arguments of type Table | |
|---|---|
| HashMap<String,Table> | QBMetaData.getAliasToTable() |
| Map<FileSinkOperator,Table> | ParseContext.getFsopToTable() |
| HashMap<TableScanOperator,Table> | ParseContext.getTopToTable() |
| Methods in org.apache.hadoop.hive.ql.parse with parameters of type Table | |
|---|---|
| static void | EximUtil.createExportDump(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path metadataPath, Table tableHandle, List<Partition> partitions) |
| protected Partition | BaseSemanticAnalyzer.getPartition(Table table, Map<String,String> partSpec, boolean throwException) |
| protected List<Partition> | BaseSemanticAnalyzer.getPartitions(Table table, Map<String,String> partSpec, boolean throwException) |
| boolean | BaseSemanticAnalyzer.isValidPrefixSpec(Table tTable, Map<String,String> spec) Checks whether the given specification is a proper specification for a prefix of the partition columns; for a table partitioned by ds, hr, min, valid specifications are (ds='2008-04-08'), (ds='2008-04-08', hr='12'), and (ds='2008-04-08', hr='12', min='30'); an invalid one is, for example, (ds='2008-04-08', min='30'). |
| void | QBMetaData.setDestForAlias(String alias, Table tab) |
| void | QBMetaData.setSrcForAlias(String alias, Table tab) |
| static void | BaseSemanticAnalyzer.validatePartSpec(Table tbl, Map<String,String> partSpec, ASTNode astNode, HiveConf conf, boolean shouldBeFull) |
| Method parameters in org.apache.hadoop.hive.ql.parse with type arguments of type Table | |
|---|---|
| void | ParseContext.setFsopToTable(Map<FileSinkOperator,Table> fsopToTable) |
| void | ParseContext.setTopToTable(HashMap<TableScanOperator,Table> topToTable) |
| Constructors in org.apache.hadoop.hive.ql.parse with parameters of type Table | |
|---|---|
| PrunedPartitionList(Table source, Set<Partition> partitions, boolean hasUnknowns) | |
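The prefix rule that BaseSemanticAnalyzer.isValidPrefixSpec enforces can be illustrated with a small self-contained re-implementation (this is a sketch of the documented rule, not Hive's actual code): the keys of the spec, taken in order, must match a leading run of the table's partition columns.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical re-implementation of the prefix check described by
// BaseSemanticAnalyzer.isValidPrefixSpec: the keys of the given spec must
// match a prefix of the table's partition columns, in order.
public class PrefixSpecCheck {
    static boolean isValidPrefixSpec(List<String> partCols, Map<String, String> spec) {
        int i = 0;
        for (String key : spec.keySet()) {
            if (i >= partCols.size() || !partCols.get(i).equalsIgnoreCase(key)) {
                return false;  // key out of order or not a partition column
            }
            i++;
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> cols = List.of("ds", "hr", "min");
        Map<String, String> ok = new LinkedHashMap<>();
        ok.put("ds", "2008-04-08");
        ok.put("hr", "12");
        Map<String, String> bad = new LinkedHashMap<>();
        bad.put("ds", "2008-04-08");
        bad.put("min", "30");  // skips hr, so not a prefix
        System.out.println(isValidPrefixSpec(cols, ok));   // true
        System.out.println(isValidPrefixSpec(cols, bad));  // false
    }
}
```

This matches the examples in the summary above: (ds='2008-04-08', hr='12') is a valid prefix, while (ds='2008-04-08', min='30') skips hr and is rejected.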
| Uses of Table in org.apache.hadoop.hive.ql.plan |
|---|
| Methods in org.apache.hadoop.hive.ql.plan that return Table | |
|---|---|
| Table | AlterTableExchangePartition.getDestinationTable() |
| Table | AlterTableExchangePartition.getSourceTable() |
| Table | AlterTableDesc.getTable() |
| Methods in org.apache.hadoop.hive.ql.plan with parameters of type Table | |
|---|---|
| void | AlterTableExchangePartition.setDestinationTable(Table destinationTable) |
| void | AlterTableExchangePartition.setSourceTable(Table sourceTable) |
| void | AlterTableDesc.setTable(Table table) |
| Constructors in org.apache.hadoop.hive.ql.plan with parameters of type Table | |
|---|---|
| AlterTableExchangePartition(Table sourceTable, Table destinationTable, Map<String,String> partitionSpecs) | |
| DynamicPartitionCtx(Table tbl, Map<String,String> partSpec, String defaultPartName, int maxParts) | |
| Uses of Table in org.apache.hadoop.hive.ql.security.authorization |
|---|
| Subclasses of Table in org.apache.hadoop.hive.ql.security.authorization | |
|---|---|
| static class | AuthorizationPreEventListener.TableWrapper |
| Methods in org.apache.hadoop.hive.ql.security.authorization with parameters of type Table | |
|---|---|
| void | StorageBasedAuthorizationProvider.authorize(Table table, Partition part, List<String> columns, Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv) |
| void | HiveAuthorizationProvider.authorize(Table table, Partition part, List<String> columns, Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv) Authorizes privileges against a list of columns. |
| void | BitSetCheckedAuthorizationProvider.authorize(Table table, Partition part, List<String> columns, Privilege[] inputRequiredPriv, Privilege[] outputRequiredPriv) |
| void | StorageBasedAuthorizationProvider.authorize(Table table, Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv) |
| void | HiveAuthorizationProvider.authorize(Table table, Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv) Authorizes privileges against a Hive table object. |
| void | BitSetCheckedAuthorizationProvider.authorize(Table table, Privilege[] inputRequiredPriv, Privilege[] outputRequiredPriv) |
| Constructors in org.apache.hadoop.hive.ql.security.authorization with parameters of type Table | |
|---|---|
| AuthorizationPreEventListener.PartitionWrapper(Table table, Partition mapiPart) | |
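An authorization provider is typically invoked with the required read and write privilege arrays. A sketch of such a call follows; it assumes an initialized HiveAuthorizationProvider implementation and a Table already in hand, and the specific Privilege constants used are an assumption, so treat it as illustrative rather than runnable:

```java
// Illustrative sketch: authProvider and tbl are assumed to exist, and the
// Privilege constants are an assumption about the available fields.
authProvider.authorize(tbl,
    new Privilege[] { Privilege.SELECT },   // privileges required to read
    new Privilege[] { Privilege.INSERT });  // privileges required to write
```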
| Uses of Table in org.apache.hadoop.hive.ql.stats |
|---|
| Methods in org.apache.hadoop.hive.ql.stats with parameters of type Table | |
|---|---|
| static Statistics | StatsUtils.collectStatistics(HiveConf conf, PrunedPartitionList partList, Table table, TableScanOperator tableScanOperator) Collects table, partition, and column level statistics. |
| static List<Long> | StatsUtils.getBasicStatForPartitions(Table table, List<Partition> parts, String statType) Gets basic stats of partitions. |
| static long | StatsUtils.getBasicStatForTable(Table table, String statType) Gets basic stats of a table. |
| static long | StatsUtils.getFileSizeForTable(HiveConf conf, Table table) Finds the bytes on disk occupied by a table. |
| static long | StatsUtils.getNumRows(Table table) Gets the number of rows of a given table. |
| static Map<String,List<ColStatistics>> | StatsUtils.getPartColumnStats(Table table, List<ColumnInfo> schema, List<String> partNames, List<String> neededColumns) Gets column statistics from the metastore for the needed columns of the given partitions. |
| static long | StatsUtils.getRawDataSize(Table table) Gets the raw data size of a given table. |
| static List<ColStatistics> | StatsUtils.getTableColumnStats(Table table, List<ColumnInfo> schema, List<String> neededColumns) Gets table level column statistics from the metastore for needed columns. |
| static long | StatsUtils.getTotalSize(Table table) Gets the total size of a given table. |
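The StatsUtils getters listed here read previously gathered statistics off the table's metadata. A sketch of pulling the basic table-level numbers (assumes a Table fetched from a metastore where stats have already been collected, e.g. via ANALYZE TABLE; not runnable without a Hive deployment):

```java
// Illustrative sketch: tbl is a Table obtained from the metastore.
long rows = StatsUtils.getNumRows(tbl);        // row count from table stats
long rawSize = StatsUtils.getRawDataSize(tbl); // uncompressed data size
long totalSize = StatsUtils.getTotalSize(tbl); // on-disk size
System.out.println(rows + " rows, " + rawSize + " raw bytes, " + totalSize + " total bytes");
```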