Create an index on a table.
Index identifier that is recorded in the catalog.
Table identifier on which the index is created.
Columns on which the index is to be created, each with its sort direction. The direction can be specified as None.
Options for the index, e.g. for a column table index: ("COLOCATE_WITH" -> "CUSTOMER"); for a row table index: ("INDEX_TYPE" -> "GLOBAL HASH") or ("INDEX_TYPE" -> "UNIQUE").
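As a sketch, creating such an index through the SnappyData session API might look like the following. The session setup, the `APP.ORDERS` table, and its `CUSTOMER_ID` column are hypothetical, and the exact `createIndex` signature may differ between SnappyData versions:

```scala
import org.apache.spark.sql.{SnappySession, SparkSession}

val spark = SparkSession.builder().appName("index-example")
  .master("local[*]").getOrCreate()
val snappy = new SnappySession(spark.sparkContext)

// Hypothetical row table; a None direction means no explicit
// ascending/descending ordering is requested for that column.
snappy.createIndex(
  "APP.ORDERS_IDX",                // index identifier for the catalog
  "APP.ORDERS",                    // base table
  Map("CUSTOMER_ID" -> None),      // indexed columns with sort direction
  Map("INDEX_TYPE" -> "GLOBAL HASH"))  // row table index option
```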
Delete a set of rows matching given criteria.
SQL WHERE criteria to select rows that will be deleted
number of rows deleted
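A minimal sketch of the delete path through a SnappyData session, assuming a running cluster; the table name and WHERE clause are illustrative:

```scala
import org.apache.spark.sql.{SnappySession, SparkSession}

val spark = SparkSession.builder().appName("delete-example")
  .master("local[*]").getOrCreate()
val snappy = new SnappySession(spark.sparkContext)

// Rows matching the SQL WHERE criteria are removed; the call
// returns the number of rows deleted.
val deleted: Int = snappy.delete("APP.ORDERS", "STATUS = 'CANCELLED'")
```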
Destroy and clean up this relation. This may include, but is not limited to, dropping the external table that this relation represents.
Drops an index on this table.
Index identifier
Table identifier
Drop if exists
Execute a DML SQL and return the number of rows affected.
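Since this relation's contents are reachable over a JDBC URL, an equivalent DML execution can be sketched at the JDBC level; the connection URL, table, and statement below are illustrative only:

```scala
import java.sql.DriverManager

// Hypothetical SnappyData locator endpoint; adjust host/port as needed.
val conn = DriverManager.getConnection("jdbc:snappydata://localhost:1527/")
try {
  // executeUpdate returns the number of rows affected by the DML.
  val affected: Int = conn.createStatement()
    .executeUpdate("UPDATE APP.ORDERS SET STATUS = 'SHIPPED' WHERE ID = 42")
} finally {
  conn.close()
}
```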
Insert a sequence of rows into the table represented by this relation.
the rows to be inserted
number of rows inserted
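A sketch of inserting rows through the SnappyData session API, assuming a table whose schema matches the supplied `Row`s; all names and values are hypothetical:

```scala
import org.apache.spark.sql.{Row, SnappySession, SparkSession}

val spark = SparkSession.builder().appName("insert-example")
  .master("local[*]").getOrCreate()
val snappy = new SnappySession(spark.sparkContext)

// Each Row must match the table's schema; the call returns
// the number of rows inserted.
val inserted: Int = snappy.insert("APP.ORDERS",
  Row(1, "NEW"), Row(2, "NEW"))
```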
We need to set the number of partitions just to satisfy Spark's Exchange. This partitioning is not used by the actual scan operator, which depends on the underlying RDD. Spark's ClusteredDistribution is simplistic in that it always assumes numShufflePartitions for its partitioning scheme, since Spark always shuffles; ideally it should consider the partitioner of the child Spark plan.
If the row is already present it gets updated, otherwise it gets inserted into the table represented by this relation.
the rows to be upserted
number of rows upserted
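The row-based upsert can be sketched with the session-level `put` call, assuming a key-based SnappyData row table; the names are illustrative:

```scala
import org.apache.spark.sql.{Row, SnappySession, SparkSession}

val spark = SparkSession.builder().appName("put-example")
  .master("local[*]").getOrCreate()
val snappy = new SnappySession(spark.sparkContext)

// put updates the row if its key already exists, otherwise inserts it;
// the call returns the number of rows upserted.
val upserted: Int = snappy.put("APP.ORDERS", Row(1, "SHIPPED"))
```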
If the row is already present it gets updated, otherwise it gets inserted into the table represented by this relation.
the DataFrame to be upserted
number of rows upserted
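For the DataFrame variant, SnappyData exposes a `putInto` writer extension via an implicit import; this is a sketch with hypothetical data and table names:

```scala
import org.apache.spark.sql.{SnappySession, SparkSession}
import org.apache.spark.sql.snappy._  // enables DataFrameWriter.putInto

val spark = SparkSession.builder().appName("putinto-example")
  .master("local[*]").getOrCreate()
val snappy = new SnappySession(spark.sparkContext)

// Hypothetical DataFrame whose schema matches the target table.
val df = snappy.createDataFrame(Seq((1, "SHIPPED"), (2, "NEW")))
  .toDF("ID", "STATUS")

// Upserts every row of the DataFrame into the table.
df.write.putInto("APP.ORDERS")
```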
Truncate the table represented by this relation.
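At the session level this corresponds to a truncate call; a sketch with an illustrative table name:

```scala
import org.apache.spark.sql.{SnappySession, SparkSession}

val spark = SparkSession.builder().appName("truncate-example")
  .master("local[*]").getOrCreate()
val snappy = new SnappySession(spark.sparkContext)

// Removes all rows from the table while keeping its definition.
snappy.truncateTable("APP.ORDERS")
```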
Update a set of rows matching given criteria.
SQL WHERE criteria to select rows that will be updated
updated values for the columns being changed; must correspond one-to-one with updateColumns
the columns to be updated; must correspond one-to-one with the updated values
number of rows affected
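The update path can be sketched as follows; the filter, values, and column names are hypothetical, and the `Row` of new values lines up positionally with the listed columns:

```scala
import org.apache.spark.sql.{Row, SnappySession, SparkSession}

val spark = SparkSession.builder().appName("update-example")
  .master("local[*]").getOrCreate()
val snappy = new SnappySession(spark.sparkContext)

// Rows matching the WHERE criteria get STATUS set to "SHIPPED";
// the call returns the number of rows affected.
val updatedRows: Int = snappy.update("APP.ORDERS",
  "ID = 42",        // SQL WHERE criteria
  Row("SHIPPED"),   // new values, one per updated column
  "STATUS")         // columns being updated
```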
A LogicalPlan implementation for a Snappy row table whose contents are retrieved using a JDBC URL or DataSource.