object RDFGraphLoader
A singleton object that provides methods to load an RDF graph from disk.
Value Members
- final def !=(arg0: Any): Boolean
  Definition Classes: AnyRef → Any
- final def ##(): Int
  Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  Definition Classes: AnyRef → Any
- final def asInstanceOf[T0]: T0
  Definition Classes: Any
- def clone(): AnyRef
  Attributes: protected[lang]
  Definition Classes: AnyRef
  Annotations: @throws( ... ) @native() @IntrinsicCandidate()
- final def eq(arg0: AnyRef): Boolean
  Definition Classes: AnyRef
- def equals(arg0: Any): Boolean
  Definition Classes: AnyRef → Any
- final def getClass(): Class[_]
  Definition Classes: AnyRef → Any
  Annotations: @native() @IntrinsicCandidate()
- def hashCode(): Int
  Definition Classes: AnyRef → Any
  Annotations: @native() @IntrinsicCandidate()
- final def isInstanceOf[T0]: Boolean
  Definition Classes: Any
- def loadFromDisk(session: SparkSession, path: URI, minPartitions: Int): RDFGraph
  Load an RDF graph from a single file or directory.
  - session: the Spark session
  - path: the path to the file or directory
  - minPartitions: min number of partitions for Hadoop RDDs (SparkContext.defaultMinPartitions)
  - returns: an RDF graph
- def loadFromDisk(session: SparkSession, paths: Seq[URI], minPartitions: Int): RDFGraph
  Load an RDF graph from multiple files or directories.
  - session: the Spark session
  - paths: the files or directories
  - minPartitions: min number of partitions for Hadoop RDDs (SparkContext.defaultMinPartitions)
  - returns: an RDF graph
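A minimal sketch of how arguments for the URI-based overloads might be prepared. The commented import and session setup are assumptions about the surrounding SANSA/Spark environment, not something this page confirms, and all file paths are illustrative:

```scala
import java.net.URI

// Hypothetical package path; adjust to the actual module providing RDFGraphLoader.
// import net.sansa_stack.inference.spark.data.loading.RDFGraphLoader
// import org.apache.spark.sql.SparkSession

val single: URI     = URI.create("hdfs:///data/lubm/universities.nt")   // illustrative path
val paths: Seq[URI] = Seq(single, URI.create("file:///tmp/extra-triples.nt"))

// With a live session the calls would look like:
// val session = SparkSession.builder().master("local[*]").appName("rdf-load").getOrCreate()
// val g1 = RDFGraphLoader.loadFromDisk(session, single, minPartitions = 4)
// val g2 = RDFGraphLoader.loadFromDisk(session, paths, minPartitions = 4)
```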
- def loadFromDisk(session: SparkSession, path: String, minPartitions: Int = 2): RDFGraph
  Load an RDF graph from a file or directory. The path can also contain multiple comma-separated paths and even wildcards, e.g. "/my/dir1,/my/paths/part-00[0-5]*,/another/dir,/a/specific/file".
  - session: the Spark session
  - path: the absolute path of the file or directory
  - minPartitions: min number of partitions for Hadoop RDDs (SparkContext.defaultMinPartitions); defaults to 2
  - returns: an RDF graph
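Since the String overload accepts comma-separated paths and glob-style wildcards, a combined path string can be assembled like this. The loader call is commented out because it needs a live SparkSession (`session` is an assumed name):

```scala
// Build one path string covering several locations and a wildcard pattern.
val path = Seq("/my/dir1", "/my/paths/part-00[0-5]*", "/another/dir", "/a/specific/file").mkString(",")

// val graph = RDFGraphLoader.loadFromDisk(session, path)  // minPartitions defaults to 2
```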
- def loadFromDiskAsDataFrame(session: SparkSession, path: String, minPartitions: Int = 4, sqlSchema: SQLSchema = SQLSchemaDefault): RDFGraphDataFrame
  Load an RDF graph from a file or directory with a Spark DataFrame as the underlying data structure. The path can also contain multiple comma-separated paths and even wildcards, e.g. "/my/dir1,/my/paths/part-00[0-5]*,/another/dir,/a/specific/file".
  - session: the Spark session
  - path: the absolute path of the file or directory
  - minPartitions: min number of partitions for Hadoop RDDs (SparkContext.defaultMinPartitions); defaults to 4
  - returns: an RDF graph based on a org.apache.spark.sql.DataFrame
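One way to choose `minPartitions` for the DataFrame-backed loader. The cores-times-two rule is a common heuristic and an assumption here, not something this API prescribes; the commented call assumes a SparkSession named `session`:

```scala
// Heuristic (an assumption, not part of the API): use a small multiple of the
// available core count, never going below the loader's default of 4.
val cores: Int = Runtime.getRuntime.availableProcessors()
val minPartitions: Int = math.max(4, cores * 2)

// val graph = RDFGraphLoader.loadFromDiskAsDataFrame(session, "/data/triples.nt", minPartitions)
```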
- def loadFromDiskAsDataset(session: SparkSession, paths: Seq[URI]): RDFGraphDataset
  Load an RDF graph from multiple files or directories with a Spark Dataset as the underlying data structure.
  - session: the Spark session
  - paths: the files or directories
  - returns: an RDF graph based on a Dataset
- def loadFromDiskAsDataset(session: SparkSession, path: String): RDFGraphDataset
  Load an RDF graph from a file or directory with a Spark Dataset as the underlying data structure. The path can also contain multiple comma-separated paths and even wildcards, e.g. "/my/dir1,/my/paths/part-00[0-5]*,/another/dir,/a/specific/file".
  - session: the Spark session
  - path: the absolute path of the file or directory
  - returns: an RDF graph based on a Dataset
- def loadFromDiskAsRDD(session: SparkSession, path: String, minPartitions: Int): RDFGraphNative
  Load an RDF graph from a file or directory with a Spark RDD as the underlying data structure. The path can also contain multiple comma-separated paths and even wildcards, e.g. "/my/dir1,/my/paths/part-00[0-5]*,/another/dir,/a/specific/file".
  - session: the Spark session
  - path: the path to the file or directory
  - minPartitions: min number of partitions for Hadoop RDDs (SparkContext.defaultMinPartitions)
  - returns: an RDF graph
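The RDD-backed loader takes the same wildcard path string. This sketch only prepares the arguments; the actual call is commented out since it requires a live SparkSession (`session` is an assumed name):

```scala
val rddPath = "/my/dir1,/my/paths/part-00[0-5]*"  // two locations, one with a glob
val rddMinPartitions = 8                          // illustrative value

// val nativeGraph = RDFGraphLoader.loadFromDiskAsRDD(session, rddPath, rddMinPartitions)
```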
- def main(args: Array[String]): Unit
- final def ne(arg0: AnyRef): Boolean
  Definition Classes: AnyRef
- final def notify(): Unit
  Definition Classes: AnyRef
  Annotations: @native() @IntrinsicCandidate()
- final def notifyAll(): Unit
  Definition Classes: AnyRef
  Annotations: @native() @IntrinsicCandidate()
- final def synchronized[T0](arg0: ⇒ T0): T0
  Definition Classes: AnyRef
- def toString(): String
  Definition Classes: AnyRef → Any
- final def wait(arg0: Long, arg1: Int): Unit
  Definition Classes: AnyRef
  Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  Definition Classes: AnyRef
  Annotations: @throws( ... ) @native()
- final def wait(): Unit
  Definition Classes: AnyRef
  Annotations: @throws( ... )
Deprecated Value Members
- def finalize(): Unit
  Attributes: protected[lang]
  Definition Classes: AnyRef
  Annotations: @throws( classOf[java.lang.Throwable] ) @Deprecated
  Deprecated