io.citrine.lolo

package validation

Type Members

  1. case class ErrorVsUncertainty(magnitude: Boolean = true) extends Visualization[Double] with Product with Serializable

    Visualization of the error compared to the predicted uncertainty

    magnitude

    whether to plot the error or the magnitude (abs) of the error

  2. trait Merit[T] extends AnyRef

    Real-valued figure of merit on predictions of type T

  3. case class PredictedVsActual() extends Visualization[Double] with Product with Serializable

    Plot the predicted value vs the actual value, with predicted uncertainty as error bars

  4. case class StandardError(rescale: Double = 1.0) extends Merit[Double] with Product with Serializable

    Root mean square of (the error divided by the predicted uncertainty)
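
    As a standalone illustration (not the lolo implementation itself), the metric described above can be computed directly from parallel sequences of predictions, actuals, and uncertainties; the data below is made up:

```scala
// Standalone sketch: root mean square of (error / predicted uncertainty).
val predicted   = Seq(1.0, 2.0, 3.0)
val actual      = Seq(1.1, 1.8, 3.3)
val uncertainty = Seq(0.1, 0.2, 0.3)

// Standardized errors: (predicted - actual) / predicted uncertainty
val standardized = predicted.zip(actual).zip(uncertainty).map {
  case ((p, a), s) => (p - a) / s
}
val standardError = math.sqrt(standardized.map(x => x * x).sum / standardized.size)
```

    A well-calibrated model should yield a value near 1: much larger values suggest overconfident uncertainty estimates, much smaller values underconfident ones.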

  5. case class StandardResidualHistogram(nBins: Int = 128, range: Double = 8.0, fitGaussian: Boolean = true, fitCauchy: Boolean = true) extends Visualization[Double] with Product with Serializable

    Histogram of the error divided by the predicted uncertainty

    Gaussian and Cauchy fits are performed via quantiles:

    • standard deviation is taken as the 68th percentile standard error
    • gamma is taken as the 50th percentile standard error

    nBins

    number of bins in the histogram

    range

    of the horizontal axis, i.e. x ∈ [-range/2, range/2]

    fitGaussian

    whether to fit and plot a Gaussian distribution

    fitCauchy

    whether to fit and plot a Cauchy distribution
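
    The quantile-based fits described above can be sketched as follows; the nearest-rank percentile rule here is an assumption for illustration, not necessarily lolo's exact implementation:

```scala
// Assumed semantics, for illustration only:
//   sigma of the Gaussian fit = 68th percentile of |error / uncertainty|
//   gamma of the Cauchy fit   = 50th percentile of |error / uncertainty|
def percentile(sorted: Seq[Double], p: Double): Double =
  sorted(math.min(sorted.size - 1, (p * sorted.size).toInt)) // nearest-rank rule

val standardErrors = Seq(-2.1, -0.9, -0.4, 0.3, 0.7, 1.2, 1.8)
val sortedAbs = standardErrors.map(math.abs).sorted
val sigma = percentile(sortedAbs, 0.68) // Gaussian standard deviation
val gamma = percentile(sortedAbs, 0.50) // Cauchy scale parameter
```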

  6. case class StatisticalValidation(rng: Random = Random) extends Product with Serializable

    Methods that draw data from a distribution and compute predicted-vs-actual data

  7. trait Visualization[T] extends AnyRef

    Visualization on predicted vs actual data of type T

Value Members

  1. case object CoefficientOfDetermination extends Merit[Double] with Product with Serializable

    R² = 1 - MSE(y) / Var(y), where y is the predicted variable
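
    A standalone sketch of that formula (not the lolo implementation), taking the variance over the actual values and using made-up data:

```scala
// R^2 = 1 - MSE / Var(actual)
val actual    = Seq(1.0, 2.0, 3.0, 4.0)
val predicted = Seq(1.1, 1.9, 3.2, 3.8)

val mse  = actual.zip(predicted).map { case (a, p) => (a - p) * (a - p) }.sum / actual.size
val mean = actual.sum / actual.size
val variance = actual.map(a => (a - mean) * (a - mean)).sum / actual.size
val r2 = 1.0 - mse / variance
```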

  2. case object CrossValidation extends Product with Serializable

    Methods that use cross-validation to calculate predicted-vs-actual data and metric estimates

  3. object Merit
  4. case object RootMeanSquareError extends Merit[Double] with Product with Serializable

    Square root of the mean square error. For an unbiased estimator, this is equal to the standard deviation of the difference between predicted and actual values.
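
    As a standalone sketch over made-up predicted/actual pairs (not the lolo implementation):

```scala
// Root mean square error over paired predictions and actuals.
val actual    = Seq(1.0, 2.0, 3.0, 4.0)
val predicted = Seq(1.1, 1.9, 3.2, 3.8)

val rmse = math.sqrt(
  actual.zip(predicted).map { case (a, p) => (a - p) * (a - p) }.sum / actual.size
)
```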

  5. case object StandardConfidence extends Merit[Double] with Product with Serializable

    The fraction of predictions that fall within the predicted uncertainty
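
    A standalone sketch of that fraction (made-up data, not lolo API objects), counting predictions whose absolute error is within the predicted uncertainty:

```scala
// Fraction of predictions with |predicted - actual| <= predicted uncertainty.
val predicted   = Seq(1.0, 2.0, 3.0)
val actual      = Seq(1.1, 1.8, 3.3)
val uncertainty = Seq(0.15, 0.25, 0.25)

val within = predicted.zip(actual).zip(uncertainty).count {
  case ((p, a), s) => math.abs(p - a) <= s
}
val confidence = within.toDouble / predicted.size
```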

  6. case object UncertaintyCorrelation extends Merit[Double] with Product with Serializable

    Measure of the correlation between the predicted uncertainty and error magnitude

    This is expressed as a ratio of correlation coefficients. The numerator is the correlation coefficient between the predicted uncertainty and the actual error magnitude. The denominator is the correlation coefficient between the predicted uncertainty and the ideal error distribution. That is: let X be the predicted uncertainty and Y := N(0, x) be the ideal error distribution about each predicted uncertainty x; the denominator is the correlation coefficient between X and Y. In the absence of a closed form for that coefficient, it is modeled empirically by drawing from N(0, x) to produce an "ideal" error series from which the correlation coefficient can be estimated.
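
    The empirical procedure described above can be sketched as follows; this is a simplified illustration with synthetic data, not lolo's implementation:

```scala
import scala.util.Random

// Pearson correlation coefficient between two equal-length series.
def pearson(x: Seq[Double], y: Seq[Double]): Double = {
  val mx = x.sum / x.size
  val my = y.sum / y.size
  val cov = x.zip(y).map { case (a, b) => (a - mx) * (b - my) }.sum
  val sx = math.sqrt(x.map(a => (a - mx) * (a - mx)).sum)
  val sy = math.sqrt(y.map(b => (b - my) * (b - my)).sum)
  cov / (sx * sy)
}

val rng = new Random(0L)
// X: predicted uncertainties; here drawn uniformly from [0.5, 1.5].
val sigma = Seq.fill(1000)(0.5 + rng.nextDouble())
// Actual error magnitudes, simulated here as |draws from N(0, x)|.
val error = sigma.map(s => math.abs(s * rng.nextGaussian()))
// "Ideal" error series: fresh draws from N(0, x) for each uncertainty x.
val ideal = sigma.map(s => math.abs(s * rng.nextGaussian()))

// Ratio of correlation coefficients, as described above; a value near 1
// indicates the uncertainties track the error magnitudes about as well
// as ideally-distributed errors would.
val ratio = pearson(sigma, error) / pearson(sigma, ideal)
```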
