package validation
Type Members
- case class ErrorVsUncertainty(magnitude: Boolean = true) extends Visualization[Double] with Product with Serializable
Visualization of the error compared to the predicted uncertainty
- magnitude
whether to plot the error or the magnitude (abs) of the error
- trait Merit[T] extends AnyRef
Real-valued figure of merit on predictions of type T
- case class PredictedVsActual() extends Visualization[Double] with Product with Serializable
Plot the predicted value vs the actual value, with predicted uncertainty as error bars
- case class StandardError(rescale: Double = 1.0) extends Merit[Double] with Product with Serializable
Root mean square of (the error divided by the predicted uncertainty)
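A minimal standalone sketch of this figure of merit (hypothetical helper code, not the library's API; the `rescale` parameter mirrors the case-class field above):

```scala
// Hypothetical sketch of the StandardError merit: root mean square of
// (error / predicted uncertainty), optionally rescaled.
def standardError(
    predicted: Seq[Double],
    uncertainty: Seq[Double],
    actual: Seq[Double],
    rescale: Double = 1.0
): Double = {
  val standardized = predicted.zip(uncertainty).zip(actual).map {
    case ((p, u), a) => (p - a) / u
  }
  rescale * math.sqrt(standardized.map(x => x * x).sum / standardized.size)
}
```

A perfectly calibrated model (errors on the same scale as the predicted uncertainties) yields a value near 1.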
- case class StandardResidualHistogram(nBins: Int = 128, range: Double = 8.0, fitGaussian: Boolean = true, fitCauchy: Boolean = true) extends Visualization[Double] with Product with Serializable
Histogram of the error divided by the predicted uncertainty
Gaussian and Cauchy fits are performed via quantiles:
- standard deviation is taken as the 68th percentile standard error
- gamma is taken as the 50th percentile standard error
- nBins
number of bins in the histogram
- range
extent of the horizontal axis, i.e. x ∈ [-range/2, range/2]
- fitGaussian
whether to fit and plot a Gaussian distribution
- fitCauchy
whether to fit and plot a Cauchy distribution
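The quantile-based fits described above can be sketched as follows (hypothetical standalone code, not the library's API): the Gaussian scale is read off the 68th percentile of the absolute standardized residuals, and the Cauchy scale off the median.

```scala
// Hypothetical sketch of the quantile-based distribution fits.
// Input: standardized residuals (error / predicted uncertainty).
def quantile(sorted: Seq[Double], q: Double): Double =
  sorted(math.min(sorted.size - 1, (q * sorted.size).toInt))

def fitSigmaGamma(residuals: Seq[Double]): (Double, Double) = {
  val absSorted = residuals.map(math.abs).sorted
  val sigma = quantile(absSorted, 0.68) // Gaussian scale: ~68% of |N(0, s)| falls below s
  val gamma = quantile(absSorted, 0.50) // Cauchy scale: the median of |Cauchy(0, g)| is g
  (sigma, gamma)
}
```

Quantile-based fits like these are robust to heavy tails, which would dominate a moment-based (e.g. sample standard deviation) fit.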
- case class StatisticalValidation(rng: Random = Random) extends Product with Serializable
Methods that draw data from a distribution and compute predicted-vs-actual data
- trait Visualization[T] extends AnyRef
Visualization on predicted vs actual data of type T
Value Members
- case object CoefficientOfDetermination extends Merit[Double] with Product with Serializable
R^2 = 1 - MSE(y) / Var(y), where y is the variable being predicted
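The formula above can be computed directly (a hypothetical standalone sketch, not the library's implementation):

```scala
// Hypothetical sketch of R^2 = 1 - MSE / Var(actual).
def rSquared(predicted: Seq[Double], actual: Seq[Double]): Double = {
  val mse = predicted.zip(actual).map { case (p, a) => math.pow(p - a, 2) }.sum / actual.size
  val mean = actual.sum / actual.size
  val variance = actual.map(a => math.pow(a - mean, 2)).sum / actual.size
  1.0 - mse / variance
}
```

A perfect predictor scores 1; a predictor no better than the mean of the actual values scores 0, and worse predictors go negative.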
- case object CrossValidation extends Product with Serializable
Methods that use cross-validation to calculate predicted-vs-actual data and metric estimates
- object Merit
- case object RootMeanSquareError extends Merit[Double] with Product with Serializable
Square root of the mean square error. For an unbiased estimator, this is equal to the standard deviation of the difference between predicted and actual values.
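As a standalone sketch (hypothetical helper, not the library's implementation):

```scala
// Hypothetical sketch of root mean square error.
def rmse(predicted: Seq[Double], actual: Seq[Double]): Double =
  math.sqrt(predicted.zip(actual).map { case (p, a) => math.pow(p - a, 2) }.sum / actual.size)
```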
- case object StandardConfidence extends Merit[Double] with Product with Serializable
The fraction of predictions that fall within the predicted uncertainty
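A minimal sketch of this merit (hypothetical code, not the library's API), counting predictions whose absolute error is within the predicted uncertainty:

```scala
// Hypothetical sketch of StandardConfidence: fraction of predictions
// for which |predicted - actual| <= predicted uncertainty.
def standardConfidence(
    predicted: Seq[Double],
    uncertainty: Seq[Double],
    actual: Seq[Double]
): Double = {
  val within = predicted.zip(uncertainty).zip(actual).count {
    case ((p, u), a) => math.abs(p - a) <= u
  }
  within.toDouble / predicted.size
}
```

If the predicted uncertainty is a one-sigma Gaussian bound, a well-calibrated model should score near 0.68.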
- case object UncertaintyCorrelation extends Merit[Double] with Product with Serializable
Measure of the correlation between the predicted uncertainty and error magnitude
This is expressed as a ratio of correlation coefficients. The numerator is the correlation coefficient between the predicted uncertainty and the actual error magnitude. The denominator is the correlation coefficient between the predicted uncertainty and the ideal error distribution: let X be the predicted uncertainty and Y := N(0, x) be the ideal error distribution about each predicted uncertainty x; the denominator is the correlation coefficient between X and Y. In the absence of a closed form for that coefficient, it is modeled empirically by drawing from N(0, x) to produce an "ideal" error series from which the correlation coefficient can be estimated.
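The empirical procedure described above might be sketched as follows (hypothetical code, not the library's implementation; the Pearson helper and seeded RNG are assumptions for the sake of a self-contained example):

```scala
import scala.util.Random

// Pearson correlation coefficient between two equal-length series.
def pearson(x: Seq[Double], y: Seq[Double]): Double = {
  val mx = x.sum / x.size
  val my = y.sum / y.size
  val cov = x.zip(y).map { case (a, b) => (a - mx) * (b - my) }.sum
  val sx = math.sqrt(x.map(a => math.pow(a - mx, 2)).sum)
  val sy = math.sqrt(y.map(b => math.pow(b - my, 2)).sum)
  cov / (sx * sy)
}

// Hypothetical sketch of UncertaintyCorrelation: ratio of the observed
// correlation to the correlation with an "ideal" error series drawn
// from N(0, u) for each predicted uncertainty u.
def uncertaintyCorrelation(
    uncertainty: Seq[Double],
    errorMagnitude: Seq[Double],
    rng: Random = new Random(0L)
): Double = {
  val idealErrors = uncertainty.map(u => math.abs(rng.nextGaussian() * u))
  pearson(uncertainty, errorMagnitude) / pearson(uncertainty, idealErrors)
}
```

In practice the denominator would be estimated from many draws (or a larger sample) to reduce the sampling noise of the ideal series.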