org.apache.spark.sql

SnappyContext

class SnappyContext extends SQLContext with Serializable with Logging

Main entry point for SnappyData extensions to Spark. A SnappyContext extends Spark's org.apache.spark.sql.SQLContext to work with Row and Column tables. Any DataFrame can be managed as a SnappyData table and any table can be accessed as a DataFrame. This is similar to HiveContext in that it integrates the SQLContext functionality with the Snappy store.

When running in embedded mode (i.e. a Spark executor collocated with the Snappy data store), applications typically submit jobs to the Snappy JobServer and do not explicitly create a SnappyContext. A single shared context managed by SnappyData makes it possible to reuse executors across client connections or applications.

SnappyContext uses a persistent HiveMetaStore for its catalog. This allows table metadata to be recreated when the driver restarts.

Users should obtain a reference to a SnappyContext instance as follows:

    val snc: SnappyContext = SnappyContext.getOrCreate(sparkContext)
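A minimal usage sketch built only from members documented on this page; the table name, schema, and query are illustrative:

    import org.apache.spark.sql.{Row, SnappyContext}
    import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

    val snc = SnappyContext.getOrCreate(sparkContext)
    // create a row table with an illustrative schema
    val schema = StructType(Seq(StructField("ID", IntegerType), StructField("VAL", IntegerType)))
    snc.createTable("MY_TABLE", "row", schema, Map.empty[String, String])
    // insert a couple of rows and query them back
    snc.insert("MY_TABLE", Row(1, 10), Row(2, 20))
    val counts = snc.sql("SELECT COUNT(*) FROM MY_TABLE").collect()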

Self Type
SnappyContext
To do

Provide links to the above descriptions

Document describing the Job server API

See also

https://github.com/SnappyDataInc/snappydata#interacting-with-snappydata

https://github.com/SnappyDataInc/snappydata#step-1---start-the-snappydata-cluster

Linear Supertypes
SQLContext, Serializable, Serializable, Logging, AnyRef, Any

Instance Constructors

  1. new SnappyContext(sc: SparkContext)

    Attributes
    protected[org.apache.spark]
  2. new SnappyContext(sparkContext: SparkContext, listener: SQLListener, isRootContext: Boolean)

    Attributes
    protected[org.apache.spark]

Type Members

  1. class QueryExecution extends execution.QueryExecution

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.6.0) use org.apache.spark.sql.QueryExecution

  2. class SparkPlanner extends execution.SparkPlanner

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.6.0) use org.apache.spark.sql.SparkPlanner

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. def addJar(path: String): Unit

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  7. lazy val analyzer: Analyzer

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SnappyContext → SQLContext
  8. def appendToTempTableCache(df: DataFrame, table: String, storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK): Unit

Append a DataFrame to a cached temporary table in Spark.

Append a DataFrame to a cached temporary table in Spark.
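A minimal sketch; someDF and the cached table name are illustrative, and the target temporary table is assumed to already be cached:

    import org.apache.spark.storage.StorageLevel

    snc.appendToTempTableCache(someDF, "cachedTempTable")
    // or with an explicit storage level
    snc.appendToTempTableCache(someDF, "cachedTempTable", StorageLevel.MEMORY_ONLY)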

    df
    table
    storageLevel

    default storage level is MEMORY_AND_DISK

    returns

    @todo -> return type?

    Annotations
    @DeveloperApi()
  9. def applySchemaToPythonRDD(rdd: RDD[Array[Any]], schema: StructType): DataFrame

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  10. def applySchemaToPythonRDD(rdd: RDD[Array[Any]], schemaString: String): DataFrame

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  11. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  12. def baseRelationToDataFrame(baseRelation: BaseRelation): DataFrame

    Definition Classes
    SQLContext
  13. val cacheManager: execution.CacheManager

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  14. def cacheTable(tableName: String): Unit

    Definition Classes
    SQLContext
  15. lazy val catalog: SnappyStoreHiveCatalog

    Definition Classes
    SnappyContext → SQLContext
  16. def clearCache(): Unit

    Definition Classes
    SQLContext
  17. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  18. lazy val conf: SQLConf

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SnappyContext → SQLContext
  19. def createApproxTSTopK(topKName: String, keyColumnName: String, topkOptions: Map[String, String], allowExisting: Boolean): DataFrame

    Create approximate structure to query top-K with time series support.

Create approximate structure to query top-K with time series support. Java-friendly API.

    topKName

    the qualified name of the top-K structure

    keyColumnName
    topkOptions
    allowExisting

When set to true, the call is ignored if a table with the same name already exists; otherwise an exception is thrown indicating that the table exists.

    To do

provide a lot more details and examples to explain creating and using TopK with time series

  20. def createApproxTSTopK(topKName: String, keyColumnName: String, topkOptions: Map[String, String], allowExisting: Boolean): DataFrame

    Create approximate structure to query top-K with time series support.

    Create approximate structure to query top-K with time series support.
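A hedged sketch; the structure name, key column, and option keys are illustrative assumptions rather than a definitive option list:

    // track the most frequently occurring page ids (all names are hypothetical)
    val topKDF = snc.createApproxTSTopK(
      "APP.PAGEVIEWS_TOPK",
      "PAGE_ID",
      Map("timeSeriesColumn" -> "TS", "size" -> "50"),   // hypothetical option keys
      allowExisting = false)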

    topKName

    the qualified name of the top-K structure

    keyColumnName
    topkOptions
    allowExisting

When set to true, the call is ignored if a table with the same name already exists; otherwise an exception is thrown indicating that the table exists.

    To do

provide a lot more details and examples to explain creating and using TopK with time series

  21. def createApproxTSTopK(topKName: String, keyColumnName: String, inputDataSchema: StructType, topkOptions: Map[String, String], allowExisting: Boolean): DataFrame

    Create approximate structure to query top-K with time series support.

Create approximate structure to query top-K with time series support. Java-friendly API.

    topKName

    the qualified name of the top-K structure

    keyColumnName
    inputDataSchema
    topkOptions
    allowExisting

When set to true, the call is ignored if a table with the same name already exists; otherwise an exception is thrown indicating that the table exists.

    To do

provide a lot more details and examples to explain creating and using TopK with time series

  22. def createApproxTSTopK(topKName: String, keyColumnName: String, inputDataSchema: StructType, topkOptions: Map[String, String], allowExisting: Boolean = false): DataFrame

    Create approximate structure to query top-K with time series support.

    Create approximate structure to query top-K with time series support.

    topKName

    the qualified name of the top-K structure

    keyColumnName
    inputDataSchema
    topkOptions
    allowExisting

When set to true, the call is ignored if a table with the same name already exists; otherwise an exception is thrown indicating that the table exists.

    To do

provide a lot more details and examples to explain creating and using TopK with time series

  23. def createDataFrame(data: List[_], beanClass: Class[_]): DataFrame

    Definition Classes
    SQLContext
  24. def createDataFrame(rdd: JavaRDD[_], beanClass: Class[_]): DataFrame

    Definition Classes
    SQLContext
  25. def createDataFrame(rdd: RDD[_], beanClass: Class[_]): DataFrame

    Definition Classes
    SQLContext
  26. def createDataFrame(rows: List[Row], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @DeveloperApi()
  27. def createDataFrame(rowRDD: JavaRDD[Row], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @DeveloperApi()
  28. def createDataFrame(rowRDD: RDD[Row], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @DeveloperApi()
  29. def createDataFrame[A <: Product](data: Seq[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  30. def createDataFrame[A <: Product](rdd: RDD[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  31. def createDataset[T](data: List[T])(implicit arg0: Encoder[T]): Dataset[T]

    Definition Classes
    SQLContext
  32. def createDataset[T](data: RDD[T])(implicit arg0: Encoder[T]): Dataset[T]

    Definition Classes
    SQLContext
  33. def createDataset[T](data: Seq[T])(implicit arg0: Encoder[T]): Dataset[T]

    Definition Classes
    SQLContext
  34. def createExternalTable(tableName: String, source: String, schema: StructType, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  35. def createExternalTable(tableName: String, source: String, schema: StructType, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  36. def createExternalTable(tableName: String, source: String, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  37. def createExternalTable(tableName: String, source: String, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  38. def createExternalTable(tableName: String, path: String, source: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  39. def createExternalTable(tableName: String, path: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  40. def createIndex(indexName: String, baseTable: String, indexColumns: Map[String, Option[SortDirection]], options: Map[String, String]): Unit

    Create an index on a table.

Create an index on a table. See the sketch below for an illustrative call.
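A minimal sketch, assuming SortDirection refers to Spark's org.apache.spark.sql.catalyst.expressions.SortDirection; the index, table, and column names are illustrative:

    import org.apache.spark.sql.catalyst.expressions.{Ascending, SortDirection}

    // global hash index on a row table; ITEMREF sorted ascending, ORDERID with no explicit direction
    snc.createIndex(
      "APP.ORDERS_ITEMREF_IDX",
      "APP.ORDERS",
      Map[String, Option[SortDirection]]("ITEMREF" -> Some(Ascending), "ORDERID" -> None),
      Map("INDEX_TYPE" -> "GLOBAL HASH"))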

    indexName

    Index name which goes in the catalog

    baseTable

    Fully qualified name of table on which the index is created.

    indexColumns

Columns on which the index is to be created, along with the sort direction for each. The direction can be specified as None.

    options

Options for the index. For example, a column table index: ("COLOCATE_WITH" -> "CUSTOMER"); a row table index: ("INDEX_TYPE" -> "GLOBAL HASH") or ("INDEX_TYPE" -> "UNIQUE").

  41. def createIndex(indexName: String, baseTable: String, indexColumns: Map[String, Option[SortDirection]], options: Map[String, String]): Unit

    Create an index on a table.

    Create an index on a table.

    indexName

    Index name which goes in the catalog

    baseTable

    Fully qualified name of table on which the index is created.

    indexColumns

    Columns on which the index has to be created

    options

Options for the index. For example, a column table index: ("COLOCATE_WITH" -> "CUSTOMER"); a row table index: ("INDEX_TYPE" -> "GLOBAL HASH") or ("INDEX_TYPE" -> "UNIQUE").

  42. def createSampleTable(tableName: String, schema: StructType, samplingOptions: Map[String, String], allowExisting: Boolean): DataFrame

    Create a stratified sample table.

    Create a stratified sample table. Java friendly version.

    tableName

    the qualified name of the table

    schema

    schema of the table

    samplingOptions

    sampling options like QCS, reservoir size etc.

    allowExisting

When set to true, the call is ignored if a table with the same name already exists; otherwise an exception is thrown indicating that the table exists.

    To do

provide a lot more details and examples to explain creating and using sample tables, with time series and otherwise

  43. def createSampleTable(tableName: String, schema: StructType, samplingOptions: Map[String, String], allowExisting: Boolean = false): DataFrame

    Create a stratified sample table.

    Create a stratified sample table.
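A hedged sketch; baseDF, the QCS column, and the option keys shown are illustrative assumptions, not an authoritative option list:

    // stratified sample over an illustrative base schema
    val sampleDF = snc.createSampleTable(
      "AIRLINE_SAMPLE",
      baseDF.schema,                                        // schema of the data being sampled
      Map("qcs" -> "UniqueCarrier", "fraction" -> "0.03"),  // hypothetical sampling options
      allowExisting = false)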

    tableName

    the qualified name of the table

    schema

    schema of the table

    samplingOptions

    sampling options like QCS, reservoir size etc.

    allowExisting

When set to true, the call is ignored if a table with the same name already exists; otherwise an exception is thrown indicating that the table exists.

    To do

provide a lot more details and examples to explain creating and using sample tables, with time series and otherwise

  44. def createSampleTable(tableName: String, samplingOptions: Map[String, String], allowExisting: Boolean): DataFrame

    Create a stratified sample table.

    Create a stratified sample table. Java friendly version.

    tableName

    the qualified name of the table

    samplingOptions

    sampling options like QCS, reservoir size etc.

    allowExisting

When set to true, the call is ignored if a table with the same name already exists; otherwise an exception is thrown indicating that the table exists.

    To do

provide a lot more details and examples to explain creating and using sample tables, with time series and otherwise

  45. def createSampleTable(tableName: String, samplingOptions: Map[String, String], allowExisting: Boolean): DataFrame

    Create a stratified sample table.

    Create a stratified sample table.

    tableName

    the qualified name of the table

    samplingOptions

    sampling options like QCS, reservoir size etc.

    allowExisting

When set to true, the call is ignored if a table with the same name already exists; otherwise an exception is thrown indicating that the table exists.

    To do

provide a lot more details and examples to explain creating and using sample tables, with time series and otherwise

  46. def createTable(tableName: String, provider: String, schemaDDL: String, options: Map[String, String], allowExisting: Boolean): DataFrame

Creates a Snappy managed JDBC table which takes a free-format DDL string.

Creates a Snappy managed JDBC table which takes a free-format DDL string. The DDL string should adhere to the syntax of the underlying JDBC store. SnappyData ships with an inbuilt JDBC store, which can be accessed as a Row format data store. The options parameter can take connection details. Unlike SQLContext.createExternalTable, this API creates a persistent catalog entry.

    val props = Map(
      "url" -> s"jdbc:derby:$path",
      "driver" -> "org.apache.derby.jdbc.EmbeddedDriver",
      "poolImpl" -> "tomcat",
      "user" -> "app",
      "password" -> "app"
    )

    val schemaDDL = "(OrderId INT NOT NULL PRIMARY KEY, ItemId INT, ITEMREF INT)"
    snappyContext.createTable("jdbcTable", "jdbc", schemaDDL, props)

Any DataFrame of the same schema can be inserted into the JDBC table using the DataFrameWriter API, e.g.

    case class Data(col1: Int, col2: Int, col3: Int)

    val data = Seq(Seq(1, 2, 3), Seq(7, 8, 9), Seq(9, 2, 3), Seq(4, 2, 3), Seq(5, 6, 7))
    val rdd = sc.parallelize(data, data.length).map(s => Data(s(0), s(1), s(2)))
    val dataDF = snc.createDataFrame(rdd)
    dataDF.write.format("jdbc").mode(SaveMode.Append).saveAsTable("jdbcTable")
    tableName

    Name of the table

    provider

Provider name: 'ROW' or 'JDBC'.

    schemaDDL

    Table schema as a string interpreted by provider

    options

    Properties for table creation. See options list for different tables. https://github.com/SnappyDataInc/snappydata/blob/master/docs/rowAndColumnTables.md

    allowExisting

When set to true, the call is ignored if a table with the same name already exists; otherwise an exception is thrown indicating that the table exists.

    returns

    DataFrame for the table

    Annotations
    @Experimental()
  47. def createTable(tableName: String, provider: String, schemaDDL: String, options: Map[String, String], allowExisting: Boolean): DataFrame

Creates a Snappy managed JDBC table which takes a free-format DDL string.

Creates a Snappy managed JDBC table which takes a free-format DDL string. The DDL string should adhere to the syntax of the underlying JDBC store. SnappyData ships with an inbuilt JDBC store, which can be accessed as a Row format data store. The options parameter can take connection details. Unlike SQLContext.createExternalTable, this API creates a persistent catalog entry.

    val props = Map(
      "url" -> s"jdbc:derby:$path",
      "driver" -> "org.apache.derby.jdbc.EmbeddedDriver",
      "poolImpl" -> "tomcat",
      "user" -> "app",
      "password" -> "app"
    )

    val schemaDDL = "(OrderId INT NOT NULL PRIMARY KEY, ItemId INT, ITEMREF INT)"
    snappyContext.createTable("jdbcTable", "jdbc", schemaDDL, props)

Any DataFrame of the same schema can be inserted into the JDBC table using the DataFrameWriter API, e.g.

    case class Data(col1: Int, col2: Int, col3: Int)

    val data = Seq(Seq(1, 2, 3), Seq(7, 8, 9), Seq(9, 2, 3), Seq(4, 2, 3), Seq(5, 6, 7))
    val rdd = sc.parallelize(data, data.length).map(s => Data(s(0), s(1), s(2)))
    val dataDF = snc.createDataFrame(rdd)
    dataDF.write.format("jdbc").mode(SaveMode.Append).saveAsTable("jdbcTable")
    tableName

    Name of the table

    provider

Provider name: 'ROW' or 'JDBC'.

    schemaDDL

    Table schema as a string interpreted by provider

    options

    Properties for table creation. See options list for different tables. https://github.com/SnappyDataInc/snappydata/blob/master/docs/rowAndColumnTables.md

    allowExisting

When set to true, the call is ignored if a table with the same name already exists; otherwise an exception is thrown indicating that the table exists.

    returns

    DataFrame for the table

  48. def createTable(tableName: String, provider: String, schema: StructType, options: Map[String, String], allowExisting: Boolean): DataFrame

    Creates a Snappy managed table.

Creates a Snappy managed table. Tables backed by any relation provider (e.g. parquet, jdbc, etc.) supported by Spark & Snappy can be created here. Unlike SQLContext.createExternalTable, this API creates a persistent catalog entry.

    case class Data(col1: Int, col2: Int, col3: Int)
    val props = Map.empty[String, String]
    val data = Seq(Seq(1, 2, 3), Seq(7, 8, 9), Seq(9, 2, 3), Seq(4, 2, 3), Seq(5, 6, 7))
    val rdd = sc.parallelize(data, data.length).map(s => Data(s(0), s(1), s(2)))
    val dataDF = snc.createDataFrame(rdd)
    snappyContext.createTable(tableName, "column", dataDF.schema, props)
    tableName

    Name of the table

    provider

    Provider name such as 'COLUMN', 'ROW', 'JDBC', 'PARQUET' etc.

    schema

    Table schema

    options

    Properties for table creation. See options list for different tables. https://github.com/SnappyDataInc/snappydata/blob/master/docs/rowAndColumnTables.md

    allowExisting

When set to true, the call is ignored if a table with the same name already exists; otherwise an exception is thrown indicating that the table exists.

    returns

    DataFrame for the table

    Annotations
    @Experimental()
  49. def createTable(tableName: String, provider: String, schema: StructType, options: Map[String, String], allowExisting: Boolean = false): DataFrame

    Creates a Snappy managed table.

Creates a Snappy managed table. Tables backed by any relation provider (e.g. parquet, jdbc, etc.) supported by Spark & Snappy can be created here. Unlike SQLContext.createExternalTable, this API creates a persistent catalog entry.

    case class Data(col1: Int, col2: Int, col3: Int)

    val props = Map.empty[String, String]
    val data = Seq(Seq(1, 2, 3), Seq(7, 8, 9), Seq(9, 2, 3), Seq(4, 2, 3), Seq(5, 6, 7))
    val rdd = sc.parallelize(data, data.length).map(s => Data(s(0), s(1), s(2)))
    val dataDF = snc.createDataFrame(rdd)
    snappyContext.createTable(tableName, "column", dataDF.schema, props)

    tableName

    Name of the table

    provider

    Provider name such as 'COLUMN', 'ROW', 'JDBC', 'PARQUET' etc.

    schema

    Table schema

    options

    Properties for table creation. See options list for different tables. https://github.com/SnappyDataInc/snappydata/blob/master/docs/rowAndColumnTables.md

    allowExisting

When set to true, the call is ignored if a table with the same name already exists; otherwise an exception is thrown indicating that the table exists.

    returns

    DataFrame for the table

  50. def createTable(tableName: String, provider: String, options: Map[String, String], allowExisting: Boolean): DataFrame

    Creates a Snappy managed table.

Creates a Snappy managed table. Tables backed by any relation provider (e.g. parquet, jdbc, etc.) supported by Spark & Snappy can be created here. Unlike SQLContext.createExternalTable, this API creates a persistent catalog entry.

    val airlineDF = snappyContext.createTable(stagingAirline, "parquet", Map("path" -> airlinefilePath))
    tableName

    Name of the table

    provider

    Provider name such as 'COLUMN', 'ROW', 'JDBC', 'PARQUET' etc.

    options

    Properties for table creation

    allowExisting

When set to true, the call is ignored if a table with the same name already exists; otherwise an exception is thrown indicating that the table exists.

    returns

    DataFrame for the table

    Annotations
    @Experimental()
  51. def createTable(tableName: String, provider: String, options: Map[String, String], allowExisting: Boolean): DataFrame

    Creates a Snappy managed table.

Creates a Snappy managed table. Tables backed by any relation provider (e.g. parquet, jdbc, etc.) supported by Spark & Snappy can be created here. Unlike SQLContext.createExternalTable, this API creates a persistent catalog entry.

    val airlineDF = snappyContext.createTable(stagingAirline, "parquet", Map("path" -> airlinefilePath))
    tableName

    Name of the table

    provider

    Provider name such as 'COLUMN', 'ROW', 'JDBC', 'PARQUET' etc.

    options

    Properties for table creation

    allowExisting

When set to true, the call is ignored if a table with the same name already exists; otherwise an exception is thrown indicating that the table exists.

    returns

    DataFrame for the table

  52. val ddlParser: DDLParser

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SnappyContext → SQLContext
  53. def delete(tableName: String, filterExpr: String): Int

Delete all rows in the table that match the given filter expression.

Delete all rows in the table that match the given filter expression.
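A minimal sketch; the table name and filter are illustrative:

    // delete every row whose ITEMREF column equals 3
    val rowsDeleted = snc.delete("jdbcTable", "ITEMREF = 3")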

    tableName

    table name

    filterExpr

SQL WHERE criteria to select rows that will be deleted

    returns

    number of rows deleted

    Annotations
    @DeveloperApi()
  54. def dialectClassName: String

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  55. def dropIndex(indexName: String, ifExists: Boolean): Unit

    Drops an index on a table

    Drops an index on a table

    indexName

    Index name which goes in catalog

    ifExists

When true, the drop exits gracefully (no error) if the index does not exist; otherwise an error is raised.

  56. def dropTable(tableName: String, ifExists: Boolean = false): Unit

Drop a SnappyData table created by a call to SnappyContext.createTable.

    Drop a SnappyData table created by a call to SnappyContext.createTable
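For example:

    snc.dropTable("jdbcTable", ifExists = true)   // no error if the table does not exist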

    tableName

    table to be dropped

    ifExists

    attempt drop only if the table exists

  57. def dropTempTable(tableName: String): Unit

    Definition Classes
    SQLContext
  58. lazy val emptyDataFrame: DataFrame

    Definition Classes
    SQLContext
  59. lazy val emptyResult: RDD[InternalRow]

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  60. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  61. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  62. def executePlan(plan: LogicalPlan): execution.QueryExecution

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SnappyContext → SQLContext
  63. def executeSql(sql: String): execution.QueryExecution

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  64. val experimental: ExperimentalMethods

    Definition Classes
    SQLContext
  65. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  66. lazy val functionRegistry: FunctionRegistry

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  67. def getAllConfs: Map[String, String]

    Definition Classes
    SQLContext
  68. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  69. def getConf(key: String, defaultValue: String): String

    Definition Classes
    SQLContext
  70. def getConf(key: String): String

    Definition Classes
    SQLContext
  71. def getSQLDialect(): ParserDialect

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SnappyContext → SQLContext
  72. def getSchema(beanClass: Class[_]): Seq[AttributeReference]

    Attributes
    protected
    Definition Classes
    SQLContext
  73. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  74. def insert(tableName: String, rows: ArrayList[ArrayList[_]]): Int

Insert one or more org.apache.spark.sql.Row into an existing table. A user can insert a DataFrame using foreachPartition.

Insert one or more org.apache.spark.sql.Row into an existing table. A user can insert a DataFrame using foreachPartition, for example:

    someDataFrame.foreachPartition(x => snappyContext.insert("MyTable", x.toSeq))
    tableName
    rows
    returns

    number of rows inserted

    Annotations
    @Experimental()
  75. def insert(tableName: String, rows: Row*): Int

Insert one or more org.apache.spark.sql.Row into an existing table. A user can insert a DataFrame using foreachPartition.

Insert one or more org.apache.spark.sql.Row into an existing table. A user can insert a DataFrame using foreachPartition, for example:

    someDataFrame.foreachPartition(x => snappyContext.insert("MyTable", x.toSeq: _*))
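Rows can also be passed directly; the values below are illustrative and must match the table's schema:

    snc.insert("MyTable", Row(1, "one"), Row(2, "two"))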
    tableName
    rows
    returns

    number of rows inserted

    Annotations
    @DeveloperApi()
  76. def isCached(tableName: String): Boolean

    Definition Classes
    SQLContext
  77. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  78. val isRootContext: Boolean

    Definition Classes
    SnappyContext → SQLContext
  79. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  80. val listener: SQLListener

    Definition Classes
    SnappyContext → SQLContext
  81. lazy val listenerManager: ExecutionListenerManager

    Definition Classes
    SQLContext
  82. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  83. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  84. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  85. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  86. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  87. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  88. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  89. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  90. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  91. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  92. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  93. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  94. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  95. def newSession(): SnappyContext

    Definition Classes
    SnappyContext → SQLContext
  96. final def notify(): Unit

    Definition Classes
    AnyRef
  97. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  98. lazy val optimizer: Optimizer

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  99. def parseDataType(dataTypeString: String): DataType

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  100. def parseSql(sql: String): LogicalPlan

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  101. val planner: execution.SparkPlanner

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SnappyContext → SQLContext
  102. val prepareForExecution: RuleExecutor[SparkPlan]

    Definition Classes
    SnappyContext → SQLContext
  103. def put(tableName: String, rows: ArrayList[ArrayList[_]]): Int

Upsert one or more org.apache.spark.sql.Row into an existing table. A user can upsert a DataFrame using foreachPartition.

Upsert one or more org.apache.spark.sql.Row into an existing table. A user can upsert a DataFrame using foreachPartition, for example:

    someDataFrame.foreachPartition(x => snappyContext.put("MyTable", x.toSeq))
    tableName
    rows
    returns

    Annotations
    @Experimental()
  104. def put(tableName: String, rows: Row*): Int

Upsert one or more org.apache.spark.sql.Row into an existing table. A user can upsert a DataFrame using foreachPartition.

Upsert one or more org.apache.spark.sql.Row into an existing table. A user can upsert a DataFrame using foreachPartition, for example:

    someDataFrame.foreachPartition(x => snappyContext.put("MyTable", x.toSeq: _*))
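Rows can also be put directly; the values below are illustrative and must match the table's schema:

    snc.put("MyTable", Row(1, "one"), Row(2, "two"))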
    tableName
    rows
    returns

    Annotations
    @DeveloperApi()
  105. def queryApproxTSTopK(topK: String, startTime: Long, endTime: Long, k: Int): DataFrame

  106. def queryApproxTSTopK(topKName: String, startTime: Long, endTime: Long): DataFrame

    To do

    why do we need this method? K is optional in the above method

  107. def queryApproxTSTopK(topKName: String, startTime: String = null, endTime: String = null, k: Int = 1): DataFrame

    Fetch the topK entries in the Approx TopK synopsis for the specified time interval.

Fetch the topK entries in the Approx TopK synopsis for the specified time interval. See createApproxTSTopK for how to create this data structure and associate it with a base table (i.e. the full data set). The time interval specified here should not be less than the minimum time interval used when creating the TopK synopsis.
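A minimal sketch; the structure name and time interval are illustrative:

    // top-K entries and their estimated frequencies for one day
    val topKDF = snc.queryApproxTSTopK(
      "APP.PAGEVIEWS_TOPK",
      "2016-01-01 00:00:00",
      "2016-01-02 00:00:00")
    topKDF.show()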

    topKName

The topK structure that is to be queried.

    startTime

    start time as string of the format "yyyy-mm-dd hh:mm:ss". If passed as null, oldest interval is considered as the start interval.

    endTime

    end time as string of the format "yyyy-mm-dd hh:mm:ss". If passed as null, newest interval is considered as the last interval.

    k

    Optional. Number of elements to be queried. This is to be passed only for stream summary

    returns

the top-K elements with their respective frequencies between the two time points

    To do

    provide an example and explain the returned DataFrame. Key is the attribute stored but the value is a struct containing count_estimate, and lower, upper bounds? How many elements are returned if K is not specified?

  108. def range(start: Long, end: Long, step: Long, numPartitions: Int): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  109. def range(start: Long, end: Long): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  110. def range(end: Long): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  111. def read: DataFrameReader

    Definition Classes
    SQLContext
    Annotations
    @Experimental()
  112. def saveStream[T](stream: DStream[T], aqpTables: Seq[String], transformer: Option[(RDD[T]) ⇒ RDD[Row]])(implicit v: scala.reflect.api.JavaUniverse.TypeTag[T]): Unit

    :: DeveloperApi ::

    :: DeveloperApi ::

    T
    stream
    aqpTables
    transformer
    v
    returns

    Annotations
    @DeveloperApi()
    To do

    do we need this anymore? If useful functionality, make this private to sql package ... SchemaDStream should use the data source API? Tagging as developer API, for now

  113. def setConf(key: String, value: String): Unit

    Definition Classes
    SQLContext
  114. def setConf(props: Properties): Unit

    Definition Classes
    SQLContext
  115. val sparkContext: SparkContext

    Definition Classes
    SnappyContext → SQLContext
  116. def sql(sqlText: String): DataFrame

    Definition Classes
    SQLContext
  117. val sqlParser: SparkSQLParser

    Attributes
    protected[org.apache.spark.sql]
    Definition Classes
    SQLContext
  118. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  119. def table(tableName: String): DataFrame

    Definition Classes
    SQLContext
  120. def tableNames(databaseName: String): Array[String]

    Definition Classes
    SQLContext
  121. def tableNames(): Array[String]

    Definition Classes
    SQLContext
  122. def tables(databaseName: String): DataFrame

    Definition Classes
    SQLContext
  123. def tables(): DataFrame

    Definition Classes
    SQLContext
  124. def toString(): String

    Definition Classes
    AnyRef → Any
  125. def truncateTable(tableName: String): Unit

    Empties the contents of the table without deleting the catalog entry.

    Empties the contents of the table without deleting the catalog entry.
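For example:

    snc.truncateTable("APP.ORDERS")   // illustrative name; removes all rows but keeps the table definition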

    tableName

    full table name to be truncated

  126. val udf: UDFRegistration

    Definition Classes
    SQLContext
  127. def uncacheTable(tableName: String): Unit

    Definition Classes
    SQLContext
  128. def update(tableName: String, filterExpr: String, newColumnValues: ArrayList[_], updateColumns: ArrayList[String]): Int

Update all rows in the table that match the given filter expression.

Update all rows in the table that match the given filter expression.

    snappyContext.update("jdbcTable", "ITEMREF = 3", Row(99), "ITEMREF")
    tableName

    table name which needs to be updated

    filterExpr

    SQL WHERE criteria to select rows that will be updated

    newColumnValues

A list containing all the updated column values. They MUST match the updateColumns list passed.

    updateColumns

    List of all column names being updated

    returns

    Annotations
    @Experimental()
  129. def update(tableName: String, filterExpr: String, newColumnValues: Row, updateColumns: String*): Int

Update all rows in the table that match the given filter expression.

Update all rows in the table that match the given filter expression.

    snappyContext.update("jdbcTable", "ITEMREF = 3", Row(99), "ITEMREF")
    tableName

    table name which needs to be updated

    filterExpr

    SQL WHERE criteria to select rows that will be updated

    newColumnValues

A single Row containing all the updated column values. They MUST match the updateColumns list passed.

    updateColumns

    List of all column names being updated

    returns

    Annotations
    @DeveloperApi()
  130. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  131. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  132. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Deprecated Value Members

  1. def applySchema(rdd: JavaRDD[_], beanClass: Class[_]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.0) Use createDataFrame. This will be removed in Spark 2.0.

  2. def applySchema(rdd: RDD[_], beanClass: Class[_]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.0) Use createDataFrame. This will be removed in Spark 2.0.

  3. def applySchema(rowRDD: JavaRDD[Row], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.0) Use createDataFrame. This will be removed in Spark 2.0.

  4. def applySchema(rowRDD: RDD[Row], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.0) Use createDataFrame. This will be removed in Spark 2.0.

  5. def jdbc(url: String, table: String, theParts: Array[String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.jdbc(). This will be removed in Spark 2.0.

  6. def jdbc(url: String, table: String, columnName: String, lowerBound: Long, upperBound: Long, numPartitions: Int): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.jdbc(). This will be removed in Spark 2.0.

  7. def jdbc(url: String, table: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.jdbc(). This will be removed in Spark 2.0.

  8. def jsonFile(path: String, samplingRatio: Double): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  9. def jsonFile(path: String, schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  10. def jsonFile(path: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  11. def jsonRDD(json: JavaRDD[String], samplingRatio: Double): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  12. def jsonRDD(json: RDD[String], samplingRatio: Double): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  13. def jsonRDD(json: JavaRDD[String], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  14. def jsonRDD(json: RDD[String], schema: StructType): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  15. def jsonRDD(json: JavaRDD[String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  16. def jsonRDD(json: RDD[String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.json(). This will be removed in Spark 2.0.

  17. def load(source: String, schema: StructType, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).schema(schema).options(options).load(). This will be removed in Spark 2.0.

  18. def load(source: String, schema: StructType, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).schema(schema).options(options).load(). This will be removed in Spark 2.0.

  19. def load(source: String, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).options(options).load(). This will be removed in Spark 2.0.

  20. def load(source: String, options: Map[String, String]): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).options(options).load(). This will be removed in Spark 2.0.

  21. def load(path: String, source: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.format(source).load(path). This will be removed in Spark 2.0.

  22. def load(path: String): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated
    Deprecated

    (Since version 1.4.0) Use read.load(path). This will be removed in Spark 2.0.

  23. def parquetFile(paths: String*): DataFrame

    Definition Classes
    SQLContext
    Annotations
    @deprecated @varargs()
    Deprecated

    (Since version 1.4.0) Use read.parquet(). This will be removed in Spark 2.0.
