case class StatisticalValidation(rng: Random = Random) extends Product with Serializable
Methods that draw data from a distribution and compute predicted-vs-actual data.
Linear Supertypes
- Serializable
- Product
- Equals
- AnyRef
- Any
Instance Constructors
- new StatisticalValidation(rng: Random = Random)
Value Members
- def generativeValidation[T](source: Iterable[(Vector[Any], T)], learner: Learner, nTrain: Int, nTest: Int, nRound: Int): Iterator[(PredictionResult[T], Seq[T])]
Generate predicted-vs-actual data given a source of ground-truth data and a learner.
Each predicted-vs-actual set (i.e. item in the returned iterator) comes from:
- Drawing nTrain points from the source
- Training the learner on those nTrain points
- Drawing nTest more points to form a test set
- Applying the trained model to the test set inputs and zipping the predictions with the test set's ground-truth responses
This process is repeated nRound times, yielding one predicted-vs-actual set per round.
- T
type of the model's predictions
- source
source of the training and test data
- learner
learner to validate
- nTrain
size of each training set
- nTest
size of each test set
- nRound
number of train/test sets to draw and evaluate
- returns
predicted-vs-actual data that can be fed into a metric or visualization
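The round-based procedure above can be sketched as follows. This is a minimal, hypothetical usage example, not lolo's documented quickstart: it assumes `rng` is a `scala.util.Random`, that `myLearner` is some `Learner` implementation already in scope, and that the synthetic `data` stands in for real ground truth.

```scala
import scala.util.Random

// Hypothetical ground-truth dataset: (inputs, response) pairs drawn from sin(2*pi*x).
val data: Seq[(Vector[Any], Double)] = Seq.tabulate(256) { i =>
  val x = i.toDouble / 256
  (Vector[Any](x), math.sin(2 * math.Pi * x))
}

// Seeded generator for reproducible draws.
val validator = StatisticalValidation(rng = new Random(0L))

// 8 rounds, each training on 64 fresh points and testing on 32 more.
val pva: Iterator[(PredictionResult[Double], Seq[Double])] =
  validator.generativeValidation(data, myLearner, nTrain = 64, nTest = 32, nRound = 8)
```

Each element of `pva` pairs a `PredictionResult` with the held-out ground-truth responses for that round, ready to feed into a metric (e.g. RMSE) or a predicted-vs-actual plot.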
- def generativeValidation[T](source: Iterator[(Vector[Any], T)], learner: Learner, nTrain: Int, nTest: Int, nRound: Int): Iterator[(PredictionResult[T], Seq[T])]
Generate predicted-vs-actual data given a source of ground-truth data and a learner.
Each predicted-vs-actual set (i.e. item in the returned iterator) comes from:
- Drawing nTrain points from the source iterator
- Training the learner on those nTrain points
- Drawing nTest more points to form a test set
- Applying the trained model to the test set inputs and zipping the predictions with the test set's ground-truth responses
This process is repeated nRound times, yielding one predicted-vs-actual set per round.
- T
type of the model's predictions
- source
iterator supplying the training and test data
- learner
learner to validate
- nTrain
size of each training set
- nTest
size of each test set
- nRound
number of train/test sets to draw and evaluate
- returns
predicted-vs-actual data that can be fed into a metric or visualization
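Because this overload consumes an Iterator, the source can be an endless generator, so every round draws fresh, never-before-seen points. A hedged sketch, again assuming `scala.util.Random` and a hypothetical `myLearner` in scope:

```scala
import scala.util.Random

// Hypothetical endless stream of noisy synthetic ground truth.
val rng = new Random(7L)
val stream: Iterator[(Vector[Any], Double)] = Iterator.continually {
  val x = rng.nextDouble()
  (Vector[Any](x), x * x + 0.01 * rng.nextGaussian())
}

// 16 rounds; each consumes 128 training points and 64 test points from the stream.
val pva = StatisticalValidation().generativeValidation(
  stream, myLearner, nTrain = 128, nTest = 64, nRound = 16
)
```

Note that the stream must be able to supply at least nRound * (nTrain + nTest) points; an `Iterator.continually` generator trivially satisfies this.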
- val rng: Random