object BaggedResult
Value Members
- def getInfinitesimalJackknifeMatrix(Nib: Vector[Vector[Int]]): DenseMatrix[Double]
Generate a matrix that is useful for computing (co)variance via the infinitesimal jackknife (IJ). The central term of the IJ calculation (Wager et al. 2014, equation 5) is the covariance between the number of times a training point appears in a bag and the prediction made by that bag. This matrix encodes (N - \bar{N})/B, where B is the number of bags, so that multiplying it by the (# bags) x (# predictions) prediction matrix yields a matrix of the covariance terms from equation 5.
- Nib
The (# training) x (# bags) matrix indicating how many times each training point is used in each bag
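As a rough illustration of the weights described above, here is a simplified sketch that builds the (N - \bar{N})/B matrix from a bag-count matrix. It uses plain Scala `Vector`s instead of Breeze's `DenseMatrix`, and the object name `IJSketch` is hypothetical, not part of the library:

```scala
// Illustrative sketch only (assumed names; not the library implementation).
object IJSketch {
  /** Nib(i)(b) = number of times training point i appears in bag b. */
  def ijMatrix(Nib: Vector[Vector[Int]]): Vector[Vector[Double]] = {
    val numBags = Nib.head.size
    Nib.map { counts =>
      val mean = counts.sum.toDouble / numBags // \bar{N} for this training point
      counts.map(n => (n - mean) / numBags)    // (N - \bar{N}) / B
    }
  }

  def main(args: Array[String]): Unit = {
    // Two training points, three bags
    val Nib = Vector(Vector(1, 0, 2), Vector(0, 2, 1))
    val w = ijMatrix(Nib)
    // Each row sums to zero because the per-point mean count is subtracted
    w.foreach(row => assert(math.abs(row.sum) < 1e-12))
    println(w)
  }
}
```

Multiplying this (# training) x (# bags) matrix by the (# bags) x (# predictions) prediction matrix sums the products over bags, which is exactly the sample covariance between bag counts and predictions (up to the 1/B normalization already folded in).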
- def getJackknifeAfterBootstrapMatrix(Nib: Vector[Vector[Int]]): DenseMatrix[Double]
Generate a matrix that is useful for computing (co)variance via the jackknife after bootstrap (JaB). The central term of the JaB calculation (Wager et al. 2014, equation 6) is the difference between the out-of-bag prediction on a point and the mean prediction on that point. If this is written as a single sum over bags, then each point has weight -1/B when it is in-bag and weight 1/|{N_{bi}=0}| - 1/B when it is out-of-bag, where B is the number of bags. This matrix encodes those weights, so that multiplying it by the (# bags) x (# predictions) prediction matrix yields a matrix of the \Delta terms from equation 6.
- Nib
The (# training) x (# bags) matrix indicating how many times each training point is used in each bag
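The in-bag/out-of-bag weighting described above can be sketched as follows, again with plain `Vector`s rather than Breeze's `DenseMatrix` (the object name `JaBSketch` is hypothetical, not the library's API):

```scala
// Illustrative sketch only (assumed names; not the library implementation).
object JaBSketch {
  /** Nib(i)(b) = number of times training point i appears in bag b. */
  def jabMatrix(Nib: Vector[Vector[Int]]): Vector[Vector[Double]] = {
    val b = Nib.head.size.toDouble
    Nib.map { counts =>
      val oob = counts.count(_ == 0).toDouble // |{N_{bi} = 0}| for this point
      counts.map { n =>
        if (n == 0) 1.0 / oob - 1.0 / b // out-of-bag: contributes to the OOB mean
        else -1.0 / b                   // in-bag: contributes only to the overall mean
      }
    }
  }
}
```

Each row sums to zero: oob * (1/oob - 1/B) + (B - oob) * (-1/B) = 0, so the row dotted with the prediction column is "OOB mean minus overall mean", i.e. the \Delta term.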
- val logger: Logger
- def rectifyEstimatedVariance(scores: Seq[Double]): Double
Make sure the variance is non-negative.
The Monte Carlo bias correction is itself stochastic, so we make sure the result is positive:
- If the sum is positive, then great! We're done.
- If the sum is <= 0.0, then the actual variance is likely quite small. We know the variance should be at least as large as the largest importance, since at least one training point will be important. Therefore, we just take the maximum importance, which should be a reasonable lower bound on the variance. Note that we could also sum the non-negative scores, but that could be biased upwards.
- If all of the scores are negative (which happens infrequently, and only for very small ensembles), then we just need a scale. The largest scale is the largest-magnitude score, which is the absolute value of the minimum score. When this happens, a larger ensemble should really be used!
- If all of the treePredictions are zero, then this will return zero.
- scores
the monte-carlo corrected importance scores
- returns
A non-negative estimate of the variance
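The case analysis above can be sketched directly; this is an illustrative reimplementation of the documented fallback logic (the object name `RectifySketch` is hypothetical), not the library source:

```scala
// Illustrative sketch only (assumed names; not the library implementation).
object RectifySketch {
  /** Return a non-negative variance estimate from Monte-Carlo-corrected scores. */
  def rectifyEstimatedVariance(scores: Seq[Double]): Double = {
    val total = scores.sum
    if (total > 0.0) {
      total // sum is positive: done
    } else if (scores.exists(_ > 0.0)) {
      scores.max // lower-bound the variance by the largest importance
    } else {
      // all scores non-positive: fall back to the largest-magnitude score,
      // i.e. the absolute value of the minimum (0.0 if all scores are zero)
      scores.map(math.abs).maxOption.getOrElse(0.0)
    }
  }
}
```

For example, `Seq(-4.0, 3.0)` sums to -1.0 but contains a positive score, so the sketch returns 3.0; `Seq(-2.0, -5.0)` has no positive score, so it returns 5.0.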
- def rectifyImportanceScores(scores: Vector[Double]): Vector[Double]
Make sure the scores are each non-negative.
The Monte Carlo bias correction is itself stochastic, so we make sure each result is non-negative. If a score was statistically consistent with zero, then the bias correction might subtract off more than the entire score, resulting in a negative value. Therefore, we use the magnitude of the minimum score as an estimate of the noise level, and simply set it as a floor.
If all of the treePredictions are zero, then this will return a vector of zeros.
- scores
the monte-carlo corrected importance scores
- returns
A vector of non-negative, bias-corrected scores
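The flooring behaviour described above can be sketched as follows; this is an assumed illustration of the stated rule (the object name `RectifyScoresSketch` is hypothetical), not the library source:

```scala
// Illustrative sketch only (assumed names; not the library implementation).
object RectifyScoresSketch {
  /** Floor each score at the noise level estimated from the most negative score. */
  def rectifyImportanceScores(scores: Vector[Double]): Vector[Double] = {
    // If the minimum score is negative, its magnitude estimates the noise level;
    // if all scores are already non-negative, the floor is 0.0 (a no-op).
    val floor = math.abs(math.min(scores.min, 0.0))
    scores.map(s => math.max(s, floor))
  }
}
```

So `Vector(-0.1, 0.5)` becomes `Vector(0.1, 0.5)`, and an all-zero input stays all zeros, matching the note above.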