Uses of Class
net.sansa_stack.spark.io.rdf.input.api.HadoopInputData
Packages that use HadoopInputData

Package
net.sansa_stack.spark.io.csv.input
net.sansa_stack.spark.io.json.input
net.sansa_stack.spark.io.rdf.input.api
Uses of HadoopInputData in net.sansa_stack.spark.io.csv.input
Methods in net.sansa_stack.spark.io.csv.input that return HadoopInputData

Modifier and Type:
    static HadoopInputData<org.apache.hadoop.io.LongWritable, String[], org.apache.spark.api.java.JavaRDD<org.apache.jena.sparql.engine.binding.Binding>>
Method:
    CsvDataSources.configureHadoop(org.apache.hadoop.conf.Configuration conf, String pathStr, org.aksw.commons.model.csvw.univocity.UnivocityCsvwConf baseCsvConf, List<String> columnNamingSchemes, Function<String[][], Function<String[], org.apache.jena.sparql.engine.binding.Binding>> rowMapperFactory)
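The `rowMapperFactory` parameter above receives the CSV header rows (`String[][]`) and produces a per-row mapper (`String[]` to a Jena `Binding`). The following is a minimal, self-contained sketch of that factory shape, not the SANSA API itself: `RowMapperFactoryDemo` is an illustrative name, and a `Map<String, String>` stands in for Jena's `Binding` so the example needs no Spark or Jena dependencies.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Conceptual stand-in: in the real API the inner function yields a Jena
// Binding; here a Map<String, String> plays that role.
class RowMapperFactoryDemo {
    // The factory sees the header rows once and returns a reusable
    // per-row mapper, so header lookup is not repeated per record.
    static final Function<String[][], Function<String[], Map<String, String>>> rowMapperFactory =
        headerRows -> {
            String[] header = headerRows[0]; // assumes a single header row
            return row -> {
                Map<String, String> binding = new HashMap<>();
                for (int i = 0; i < header.length && i < row.length; i++) {
                    binding.put(header[i], row[i]);
                }
                return binding;
            };
        };

    public static void main(String[] args) {
        Function<String[], Map<String, String>> mapper =
            rowMapperFactory.apply(new String[][] { { "id", "name" } });
        Map<String, String> b = mapper.apply(new String[] { "1", "Alice" });
        System.out.println(b.get("id") + " " + b.get("name")); // prints "1 Alice"
    }
}
```

Splitting the factory from the row mapper mirrors the signature above: the header-dependent setup runs once per file, while the returned function runs once per row.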
Uses of HadoopInputData in net.sansa_stack.spark.io.json.input
Methods in net.sansa_stack.spark.io.json.input that return HadoopInputData

Modifier and Type:
    static HadoopInputData<org.apache.hadoop.io.LongWritable, com.google.gson.JsonElement, org.apache.spark.api.java.JavaRDD<com.google.gson.JsonElement>>
Method:
    JsonDataSources.jsonSequence(String filename, org.apache.hadoop.conf.Configuration conf)

Modifier and Type:
    static HadoopInputData<org.apache.hadoop.io.LongWritable, com.google.gson.JsonElement, org.apache.spark.api.java.JavaRDD<com.google.gson.JsonElement>>
Method:
    JsonDataSources.probeJsonInputFormat(String filename, org.apache.hadoop.conf.Configuration conf, int probeCount)
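The `probeCount` parameter of `probeJsonInputFormat` suggests a probing strategy: inspect the first few records to decide which input format applies. As a hedged illustration of that general technique (not SANSA's actual probing logic), the sketch below tries candidate parsers on the first `probeCount` records and keeps the first one that parses them all; `FormatProber` and its parsers are invented names for this example.

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

// Conceptual stand-in for format probing: try each candidate parser on the
// first probeCount records and pick the first that handles all of them.
class FormatProber {
    static <T> Optional<Function<String, T>> probe(
            List<Function<String, T>> candidates, List<String> records, int probeCount) {
        int n = Math.min(probeCount, records.size());
        for (Function<String, T> candidate : candidates) {
            boolean ok = true;
            for (int i = 0; i < n && ok; i++) {
                try {
                    candidate.apply(records.get(i));
                } catch (RuntimeException e) {
                    ok = false; // candidate rejected this record
                }
            }
            if (ok) {
                return Optional.of(candidate);
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        Function<String, Integer> asInt = Integer::parseInt;
        Function<String, Integer> asLength = String::length;
        // parseInt fails on "abc", so probing falls through to asLength.
        Optional<Function<String, Integer>> chosen =
            probe(List.of(asInt, asLength), List.of("abc", "de", "f"), 2);
        System.out.println(chosen.get().apply("hello")); // prints 5
    }
}
```

Probing only a bounded prefix keeps format detection cheap even on large splits, which is presumably why the real method exposes `probeCount` as a parameter.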
Uses of HadoopInputData in net.sansa_stack.spark.io.rdf.input.api
Methods in net.sansa_stack.spark.io.rdf.input.api that return HadoopInputData

Modifier and Type:
    <Y> HadoopInputData<K,V,Y>
Description:
    Returns a fresh HadoopInputData instance where "nextMapper" is applied to the result of the current mapper.

Modifier and Type:
    static HadoopInputData<org.apache.hadoop.io.LongWritable, org.apache.jena.rdf.model.Resource, org.apache.spark.api.java.JavaRDD<org.apache.jena.rdf.model.Model>>
Method:
    InputFormatUtils.wrapWithAnalyzer(HadoopInputData<?, ?, ?> hid)
Description:
    Wraps an input format that is based on RecordReaderGenericBase with an analyzer that turns each split into parsing metadata rather than data.

Methods in net.sansa_stack.spark.io.rdf.input.api with parameters of type HadoopInputData

Modifier and Type:
    static <K,V,X> X
Method:
    InputFormatUtils.createRdd(org.apache.spark.api.java.JavaSparkContext sc, HadoopInputData<K, V, X> inputData)

Modifier and Type:
    static HadoopInputData<org.apache.hadoop.io.LongWritable, org.apache.jena.rdf.model.Resource, org.apache.spark.api.java.JavaRDD<org.apache.jena.rdf.model.Model>>
Method:
    InputFormatUtils.wrapWithAnalyzer(HadoopInputData<?, ?, ?> hid)
Description:
    Wraps an input format that is based on RecordReaderGenericBase with an analyzer that turns each split into parsing metadata rather than data.
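The first entry above describes mapper chaining: a fresh HadoopInputData whose mapper is the current mapper followed by "nextMapper". The sketch below illustrates that composition pattern with a simplified stand-in class; `InputData` and its `map` method are assumed names for this example and are not the actual SANSA types, which also carry the Hadoop input-format and configuration state omitted here.

```java
import java.util.function.Function;

// Simplified stand-in for HadoopInputData's mapper chaining: a holder for a
// mapper from the raw record type V to a result type X, plus a map(...) step
// that composes a nextMapper onto it, yielding a fresh holder.
class InputData<V, X> {
    final Function<V, X> mapper;

    InputData(Function<V, X> mapper) {
        this.mapper = mapper;
    }

    // Return a fresh instance where nextMapper is applied to the result of
    // the current mapper (mirrors the documented behavior).
    <Y> InputData<V, Y> map(Function<X, Y> nextMapper) {
        return new InputData<>(mapper.andThen(nextMapper));
    }

    public static void main(String[] args) {
        InputData<String, Integer> base = new InputData<>(Integer::parseInt);
        InputData<String, String> chained = base.map(i -> "value=" + (i * 2));
        System.out.println(chained.mapper.apply("21")); // prints "value=42"
    }
}
```

Because `map` returns a fresh instance rather than mutating the current one, chained pipelines can share a common base input definition safely, which matches the "fresh instance" wording in the description above.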