object BaggedResult

Linear Supertypes
AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @native()
  6. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  7. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  8. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable])
  9. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  10. def getInfinitesimalJackknifeMatrix(Nib: Vector[Vector[Int]]): DenseMatrix[Double]

    Generate a matrix that is useful for computing (co)variance via the infinitesimal jackknife (IJ). The central term of the IJ calculation (Wager et al. 2014, equation 5) is the covariance between the number of times a training point appears in a bag and the prediction made by that bag. This matrix encodes (N - \bar{N})/B, where B is the number of bags, so that multiplying it by the (# bags) x (# predictions) prediction matrix yields a matrix of the covariance terms from equation 5.

    Nib

    The (# training) x (# bags) matrix indicating how many times each training point is used in each bag
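
    The weighting described above can be sketched in plain Scala (the name `ijWeights` is hypothetical, and the real method returns a Breeze DenseMatrix[Double]; nested Vectors are used here for clarity):

    ```scala
    // Sketch of the IJ weights: entry (i, b) is (N_ib - N̄_i) / B,
    // where B is the number of bags and N̄_i is the mean count for training point i.
    def ijWeights(Nib: Vector[Vector[Int]]): Vector[Vector[Double]] = {
      val numBags = Nib.head.size.toDouble
      Nib.map { row =>
        val mean = row.sum / numBags
        row.map(n => (n - mean) / numBags)
      }
    }
    ```

    Multiplying this (# training) x (# bags) matrix by the (# bags) x (# predictions) prediction matrix gives the covariance terms of equation 5.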

  11. def getJackknifeAfterBootstrapMatrix(Nib: Vector[Vector[Int]]): DenseMatrix[Double]

    Generate a matrix that is useful for computing (co)variance via the jackknife after bootstrap (JaB). The central term of the JaB calculation (Wager et al. 2014, equation 6) is the difference between the out-of-bag prediction on a point and the mean prediction on that point. If this is written as a single sum over bags, then each point has weight -1/B when it is in-bag and weight 1/|{N_{bi}=0}| - 1/B when it is out-of-bag, where B is the number of bags. This matrix encodes those weights, so that multiplying it by the (# bags) x (# predictions) prediction matrix yields a matrix of the \Delta terms from equation 6.

    Nib

    The (# training) x (# bags) matrix indicating how many times each training point is used in each bag
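
    The in-bag/out-of-bag weighting described above can be sketched as follows (the name `jabWeights` is hypothetical; the real method returns a Breeze DenseMatrix[Double]):

    ```scala
    // Sketch of the JaB weights: -1/B when the point is in-bag,
    // 1/|{b : N_ib = 0}| - 1/B when it is out-of-bag (B = number of bags).
    def jabWeights(Nib: Vector[Vector[Int]]): Vector[Vector[Double]] = {
      val numBags = Nib.head.size.toDouble
      Nib.map { row =>
        val numOutOfBag = row.count(_ == 0).toDouble
        row.map(n => if (n == 0) 1.0 / numOutOfBag - 1.0 / numBags else -1.0 / numBags)
      }
    }
    ```

    Note that each row sums to zero, so the product with the prediction matrix measures the difference between the out-of-bag mean and the overall mean, as in equation 6.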

  12. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  13. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  14. val logger: Logger
  15. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  16. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  17. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  18. def rectifyEstimatedVariance(scores: Seq[Double]): Double

    Make sure the variance estimate is non-negative.

    The Monte Carlo bias correction is itself stochastic, so we need to make sure the result is positive.

    If the sum is positive, then great! We're done.

    If the sum is <= 0.0, then the actual variance is likely quite small. We know the variance should be at least as large as the largest importance score, since at least one training point must be important, so we take the maximum importance as a reasonable lower bound on the variance. Note that we could also sum only the non-negative scores, but that could be biased upwards.

    If all of the scores are negative (which happens infrequently, for very small ensembles), then we just need a scale. The largest scale is the largest-magnitude score, which is the absolute value of the minimum score. When this happens, a larger ensemble should really be used!

    If all of the treePredictions are zero, then this will return zero.

    scores

    the monte-carlo corrected importance scores

    returns

    A non-negative estimate of the variance
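
    The three cases above might be sketched like this (an illustrative reimplementation under the stated rules, not the library source; assumes a non-empty score sequence):

    ```scala
    // Rectify a Monte-Carlo-corrected variance estimate so it is non-negative.
    def rectifyEstimatedVariance(scores: Seq[Double]): Double = {
      val total = scores.sum
      if (total > 0.0) total                       // corrected sum is already positive: done
      else if (scores.exists(_ > 0.0)) scores.max  // lower-bound by the largest importance
      else math.abs(scores.min)                    // all non-positive: use the largest magnitude
    }
    ```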

  19. def rectifyImportanceScores(scores: Vector[Double]): Vector[Double]

    Make sure the scores are each non-negative.

    The Monte Carlo bias correction is itself stochastic, so we need to make sure each result is positive. If a score is statistically consistent with zero, then the entire bias correction may be subtracted off, producing a negative value. We can therefore use the magnitude of the minimum score as an estimate of the noise level, and simply set that as a floor.

    If all of the treePredictions are zero, then this will return a vector of zeros.

    scores

    the monte-carlo corrected importance scores

    returns

    a vector of non-negative bias corrected scores
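
    The flooring described above could look like the following (an illustrative sketch, not the library source; assumes a non-empty score vector):

    ```scala
    // Floor each score at the magnitude of the most negative score,
    // treating that magnitude as an estimate of the noise level.
    def rectifyImportanceScores(scores: Vector[Double]): Vector[Double] = {
      val noiseLevel = math.abs(scores.min)
      scores.map(s => math.max(s, noiseLevel))
    }
    ```

    When every score is zero, the floor is zero and the vector passes through unchanged, matching the behavior documented above.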

  20. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  21. def toString(): String
    Definition Classes
    AnyRef → Any
  22. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  23. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  24. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()

Inherited from AnyRef

Inherited from Any
