Class OVRUtil
java.lang.Object
  org.lwjgl.ovr.OVRUtil

public class OVRUtil extends java.lang.Object

Native bindings to the libOVR utility functions.
Field Summary

Fields (Modifier and Type: static int)

ovrHapticsGenMode_PointSample, ovrHapticsGenMode_Count
    Modes used to generate Touch Haptics from audio PCM buffer.

ovrProjection_None, ovrProjection_LeftHanded, ovrProjection_FarLessThanNear, ovrProjection_FarClipAtInfinity, ovrProjection_ClipRangeOpenGL
    Enumerates modifications to the projection matrix based on the application's needs.
-
Method Summary
All Methods Static Methods Concrete Methods Modifier and Type Method and Description static voidovr_CalcEyePoses(OVRPosef headPose, OVRPosef.Buffer HmdToEyePose, OVRPosef.Buffer outEyePoses)Computes offset eye poses based onheadPosereturned byOVRTrackingState.static OVRDetectResultovr_Detect(int timeoutMilliseconds, OVRDetectResult __result)Detects Oculus Runtime and Device Status.static intovr_GenHapticsFromAudioData(OVRHapticsClip outHapticsClip, OVRAudioChannelData audioChannel, int genMode)Generates playable Touch Haptics data from an audio channel.static voidovr_GetEyePoses(long session, long frameIndex, boolean latencyMarker, OVRPosef.Buffer HmdToEyePose, OVRPosef.Buffer outEyePoses, double[] outSensorSampleTime)Array version of:_GetEyePosesstatic voidovr_GetEyePoses(long session, long frameIndex, boolean latencyMarker, OVRPosef.Buffer HmdToEyePose, OVRPosef.Buffer outEyePoses, java.nio.DoubleBuffer outSensorSampleTime)Returns the predicted head pose inoutHmdTrackingStateand offset eye poses inoutEyePoses.static intovr_ReadWavFromBuffer(OVRAudioChannelData outAudioChannel, java.nio.ByteBuffer inputData, int stereoChannelToUse)Reads an audio channel from Wav (Waveform Audio File) data.static voidovr_ReleaseAudioChannelData(OVRAudioChannelData audioChannel)Releases memory allocated for ovrAudioChannelData.static voidovr_ReleaseHapticsClip(OVRHapticsClip hapticsClip)Releases memory allocated for ovrHapticsClip.static OVRMatrix4fovrMatrix4f_OrthoSubProjection(OVRMatrix4f projection, OVRVector2f orthoScale, float orthoDistance, float HmdToEyeOffsetX, OVRMatrix4f __result)Generates an orthographic sub-projection.static OVRMatrix4fovrMatrix4f_Projection(OVRFovPort fov, float znear, float zfar, int projectionModFlags, OVRMatrix4f __result)Used to generate projection fromovrEyeDesc::Fov.static voidovrPosef_FlipHandedness(OVRPosef inPose, OVRPosef outPose)Tracking poses provided by the SDK come in a right-handed coordinate system.static 
OVRTimewarpProjectionDescovrTimewarpProjectionDesc_FromProjection(OVRMatrix4f projection, int projectionModFlags, OVRTimewarpProjectionDesc __result)Extracts the required data from the result ofMatrix4f_Projection.
-
-
-
Field Detail
-
ovrProjection_None, ovrProjection_LeftHanded, ovrProjection_FarLessThanNear, ovrProjection_FarClipAtInfinity, ovrProjection_ClipRangeOpenGL

Enumerates modifications to the projection matrix based on the application's needs.

Enum values:

Projection_None - Use for generating a default projection matrix that is:
    - Right-handed.
    - Near depth values stored in the depth buffer are smaller than far depth values.
    - Both near and far are explicitly defined.
    - With a clipping range that is (0 to w).
Projection_LeftHanded - Enable if using left-handed transformations in your application.
Projection_FarLessThanNear - After the projection transform is applied, far values stored in the depth buffer will be less than closer depth values. NOTE: Enable only if the application is using a floating-point depth buffer for proper precision.
Projection_FarClipAtInfinity - When this flag is used, the zfar value pushed into Matrix4f_Projection will be ignored. NOTE: Enable only if Projection_FarLessThanNear is also enabled, where the far clipping plane will be pushed to infinity.
Projection_ClipRangeOpenGL - Enable if the application is rendering with OpenGL and expects a projection matrix with a clipping range of (-w to w). Ignore this flag if your application already handles the conversion from D3D range (0 to w) to OpenGL.
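The D3D-to-OpenGL clip-range conversion that Projection_ClipRangeOpenGL requests can be sketched in plain Java. This is illustrative math under an assumed row-major float[16] matrix layout, not the library's implementation:

```java
// Sketch: rewrite a projection matrix's clip range from D3D-style
// (0 to w) to OpenGL-style (-w to w). Since z_gl = 2*z_d3d - w,
// the third row becomes 2*row2 - row3. Row-major float[16] assumed.
static float[] d3dToGlClipRange(float[] m) {
    float[] out = m.clone();
    for (int c = 0; c < 4; c++) {
        out[8 + c] = 2f * m[8 + c] - m[12 + c]; // row 2 := 2*row2 - row3
    }
    return out;
}
```

For example, a matrix whose z output spans 0..w after division maps to -1..1 after this rewrite, which is what OpenGL's clip volume expects.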
-
ovrHapticsGenMode_PointSample, ovrHapticsGenMode_Count

Modes used to generate Touch Haptics from audio PCM buffer.

Enum values:

HapticsGenMode_PointSample - Point sample the original signal at the Haptics frequency
HapticsGenMode_Count
-
-
Method Detail
-
ovr_Detect
public static OVRDetectResult ovr_Detect(int timeoutMilliseconds, OVRDetectResult __result)
Detects Oculus Runtime and Device Status. Checks for Oculus Runtime and Oculus HMD device status without loading the LibOVRRT shared library. This may be called before Initialize to help decide whether or not to initialize LibOVR.

Parameters:
    timeoutMilliseconds - a timeout to wait for the HMD to be attached, or 0 to poll
-
ovrMatrix4f_Projection
public static OVRMatrix4f ovrMatrix4f_Projection(OVRFovPort fov, float znear, float zfar, int projectionModFlags, OVRMatrix4f __result)
Used to generate projection from ovrEyeDesc::Fov.

Parameters:
    fov - the OVRFovPort to use
    znear - distance to near Z limit
    zfar - distance to far Z limit
    projectionModFlags - a combination of the ovrProjectionModifier flags. One or more of: Projection_None, Projection_FarLessThanNear, Projection_FarClipAtInfinity, Projection_ClipRangeOpenGL
    __result - the calculated projection matrix
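As a sketch of what a projection built from an ovrFovPort's four half-FOV tangents (UpTan, DownTan, LeftTan, RightTan) looks like under the default conventions (right-handed, near less than far, 0-to-w clip range), in plain Java with an assumed row-major float[16] layout; the real routine lives in native LibOVR:

```java
// Illustrative perspective projection from per-edge FOV tangents.
// Right-handed (camera looks down -z), depth mapped to 0..w.
static float[] projectionFromFov(float upTan, float downTan,
                                 float leftTan, float rightTan,
                                 float znear, float zfar) {
    float[] m = new float[16];
    float xScale  = 2f / (leftTan + rightTan);
    float xOffset = (leftTan - rightTan) * xScale * 0.5f;
    float yScale  = 2f / (upTan + downTan);
    float yOffset = (upTan - downTan) * yScale * 0.5f;
    m[0]  = xScale;  m[2]  = -xOffset;   // x row; offset shifts asymmetric FOV
    m[5]  = yScale;  m[6]  = -yOffset;   // y row
    m[10] = zfar / (znear - zfar);       // z row: -znear maps to 0, -zfar to w
    m[11] = zfar * znear / (znear - zfar);
    m[14] = -1f;                         // w = -z
    return m;
}
```

With a symmetric 90-degree FOV (all tangents 1.0f) the x/y scales are 1 and the offsets vanish, which is a quick sanity check on the formula.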
-
ovrTimewarpProjectionDesc_FromProjection
public static OVRTimewarpProjectionDesc ovrTimewarpProjectionDesc_FromProjection(OVRMatrix4f projection, int projectionModFlags, OVRTimewarpProjectionDesc __result)
Extracts the required data from the result of Matrix4f_Projection.

Parameters:
    projection - the projection matrix from which to extract OVRTimewarpProjectionDesc
    projectionModFlags - a combination of the ovrProjectionModifier flags. One or more of: Projection_None, Projection_LeftHanded, Projection_FarLessThanNear, Projection_FarClipAtInfinity, Projection_ClipRangeOpenGL
    __result - the extracted ovrTimewarpProjectionDesc
-
ovrMatrix4f_OrthoSubProjection
public static OVRMatrix4f ovrMatrix4f_OrthoSubProjection(OVRMatrix4f projection, OVRVector2f orthoScale, float orthoDistance, float HmdToEyeOffsetX, OVRMatrix4f __result)
Generates an orthographic sub-projection. Used for 2D rendering, Y is down.

Parameters:
    projection - the perspective matrix that the orthographic matrix is derived from
    orthoScale - equal to 1.0f / pixelsPerTanAngleAtCenter
    orthoDistance - equal to the distance from the camera in meters, such as 0.8m
    HmdToEyeOffsetX - the offset of the eye from the center
    __result - the calculated projection matrix
-
ovr_CalcEyePoses
public static void ovr_CalcEyePoses(OVRPosef headPose, OVRPosef.Buffer HmdToEyePose, OVRPosef.Buffer outEyePoses)
Computes offset eye poses based on headPose returned by OVRTrackingState.

Parameters:
    headPose - indicates the HMD position and orientation to use for the calculation
    HmdToEyePose - can be OVREyeRenderDesc.HmdToEyePose returned from GetRenderDesc. For monoscopic rendering, use a position vector that is the average of the two position vectors for each eye.
    outEyePoses - if outEyePoses are used for rendering, they should be passed to SubmitFrame in OVRLayerEyeFov::RenderPose or OVRLayerEyeFovDepth::RenderPose
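The per-eye transform this performs amounts to composing the head pose with each HmdToEyePose offset: eye orientation = head orientation * offset orientation, and eye position = head position + head orientation applied to the offset position. A plain-Java sketch of that composition, with quaternions as {x, y, z, w} arrays rather than LWJGL structs:

```java
// Illustrative pose composition, mirroring what ovr_CalcEyePoses does per eye.
static float[] cross(float[] a, float[] b) {
    return new float[] { a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0] };
}

// Hamilton product of two {x,y,z,w} quaternions.
static float[] quatMul(float[] a, float[] b) {
    return new float[] {
        a[3]*b[0] + a[0]*b[3] + a[1]*b[2] - a[2]*b[1],
        a[3]*b[1] - a[0]*b[2] + a[1]*b[3] + a[2]*b[0],
        a[3]*b[2] + a[0]*b[1] - a[1]*b[0] + a[2]*b[3],
        a[3]*b[3] - a[0]*b[0] - a[1]*b[1] - a[2]*b[2]
    };
}

// Rotate vector v by unit quaternion q: v' = v + qw*t + q.xyz x t, t = 2*(q.xyz x v).
static float[] rotate(float[] q, float[] v) {
    float[] t = cross(q, v);
    for (int i = 0; i < 3; i++) t[i] *= 2f;
    float[] u = cross(q, t);
    return new float[] { v[0] + q[3]*t[0] + u[0],
                         v[1] + q[3]*t[1] + u[1],
                         v[2] + q[3]*t[2] + u[2] };
}

// Returns { eyeOrientation, eyePosition } for one eye.
static float[][] calcEyePose(float[] headQ, float[] headP,
                             float[] hmdToEyeQ, float[] hmdToEyeP) {
    float[] p = rotate(headQ, hmdToEyeP); // offset expressed in world space
    return new float[][] {
        quatMul(headQ, hmdToEyeQ),
        { headP[0] + p[0], headP[1] + p[1], headP[2] + p[2] }
    };
}
```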
-
ovr_GetEyePoses
public static void ovr_GetEyePoses(long session, long frameIndex, boolean latencyMarker, OVRPosef.Buffer HmdToEyePose, OVRPosef.Buffer outEyePoses, java.nio.DoubleBuffer outSensorSampleTime)

Returns the predicted head pose in outHmdTrackingState and offset eye poses in outEyePoses. This is a thread-safe function where the caller should increment frameIndex with every frame and pass that index where applicable to functions called on the rendering thread. Assuming outEyePoses are used for rendering, they should be passed as part of OVRLayerEyeFov. The caller does not need to worry about applying HmdToEyePose to the returned outEyePoses variables.

Parameters:
    session - an ovrSession previously returned by Create
    frameIndex - the targeted frame index, or 0 to refer to one frame after the last time SubmitFrame was called
    latencyMarker - specifies that this call is the point in time where the "App-to-Mid-Photon" latency timer starts from. If a given ovrLayer provides "SensorSampleTimestamp", that will override the value stored here.
    HmdToEyePose - can be OVREyeRenderDesc.HmdToEyePose returned from GetRenderDesc. For monoscopic rendering, use a position vector that is the average of the two position vectors for each eye.
    outEyePoses - the predicted eye poses
    outSensorSampleTime - the time when this function was called. May be NULL, in which case it is ignored.
-
ovrPosef_FlipHandedness
public static void ovrPosef_FlipHandedness(OVRPosef inPose, OVRPosef outPose)
Tracking poses provided by the SDK come in a right-handed coordinate system. If an application is passing Projection_LeftHanded into Matrix4f_Projection, then it should also use this function to flip the HMD tracking poses to be left-handed.

While this utility function is intended to convert a left-handed OVRPosef into a right-handed coordinate system, it will also work for converting right-handed to left-handed, since the flip operation is the same for both cases.

Parameters:
    inPose - a pose that is right-handed
    outPose - the pose that is requested to be left-handed (can be the same pointer as inPose)
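As an illustration only: one common way to mirror a pose between handedness conventions is to negate the Z component of the position and the X and Y components of the orientation quaternion (a Z-axis mirror). The axis LibOVR actually mirrors is not stated here, so treat this plain-Java sketch as a demonstration of why the flip is its own inverse (applying it twice restores the pose), not as the library's formula:

```java
// Hypothetical handedness mirror across the Z axis (see caveat above).
// q = {x, y, z, w} orientation quaternion, p = {x, y, z} position; in place.
static void flipHandedness(float[] q, float[] p) {
    q[0] = -q[0]; // mirror rotation: negate quaternion x...
    q[1] = -q[1]; // ...and y, keeping z and w
    p[2] = -p[2]; // mirror position along Z
}
```

Because every step is a sign flip, running the function twice is the identity, which is exactly why the same routine converts in either direction.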
-
ovr_ReadWavFromBuffer
public static int ovr_ReadWavFromBuffer(OVRAudioChannelData outAudioChannel, java.nio.ByteBuffer inputData, int stereoChannelToUse)
Reads an audio channel from Wav (Waveform Audio File) data. Input must be a byte buffer representing a valid Wav file. Audio samples from the specified channel are read, converted to float [-1.0f, 1.0f] and returned through OVRAudioChannelData.

Supported formats: PCM 8b, 16b, 32b and IEEE float (little-endian only).

Parameters:
    outAudioChannel - output audio channel data
    inputData - a binary buffer representing valid Wav file data
    stereoChannelToUse - audio channel index to extract (0 for mono)
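The sample conversion described above (16-bit PCM to float in [-1.0f, 1.0f], picking one interleaved channel) can be sketched in plain Java over a little-endian buffer. Wav header parsing and the other supported sample formats are omitted, and the names are illustrative:

```java
// Illustrative: extract one channel of interleaved 16-bit PCM as floats.
// pcm holds raw little-endian sample data (no Wav header).
static float[] channelToFloat(java.nio.ByteBuffer pcm,
                              int channels, int channelIndex) {
    java.nio.ShortBuffer s =
        pcm.order(java.nio.ByteOrder.LITTLE_ENDIAN).asShortBuffer();
    int frames = s.remaining() / channels;
    float[] out = new float[frames];
    for (int f = 0; f < frames; f++) {
        // Divide by 32768 so the most negative sample maps exactly to -1.0f.
        out[f] = s.get(f * channels + channelIndex) / 32768.0f;
    }
    return out;
}
```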
-
ovr_GenHapticsFromAudioData
public static int ovr_GenHapticsFromAudioData(OVRHapticsClip outHapticsClip, OVRAudioChannelData audioChannel, int genMode)
Generates playable Touch Haptics data from an audio channel.

Parameters:
    outHapticsClip - the generated Haptics clip
    audioChannel - input audio channel data
    genMode - mode used to convert audio channel data to Haptics data. Must be: HapticsGenMode_PointSample
-
ovr_ReleaseAudioChannelData
public static void ovr_ReleaseAudioChannelData(OVRAudioChannelData audioChannel)
Releases memory allocated for ovrAudioChannelData. Must be called to avoid a memory leak.

Parameters:
    audioChannel - pointer to an audio channel
-
ovr_ReleaseHapticsClip
public static void ovr_ReleaseHapticsClip(OVRHapticsClip hapticsClip)
Releases memory allocated for ovrHapticsClip. Must be called to avoid a memory leak.

Parameters:
    hapticsClip - pointer to a haptics clip
-
ovr_GetEyePoses
public static void ovr_GetEyePoses(long session, long frameIndex, boolean latencyMarker, OVRPosef.Buffer HmdToEyePose, OVRPosef.Buffer outEyePoses, double[] outSensorSampleTime)

Array version of: GetEyePoses
-
-