
public static enum JNIMemoryManager.MemoryModel extends Enum<JNIMemoryManager.MemoryModel>
The JNIMemoryManager.MemoryModel you use in Ferry libraries can have a big effect. Some models emphasize
code that will work "as you expect" (Robustness), but sacrifice some
execution speed to make that happen. Other models value speed first, and
assume you know what you're doing and can manage your own memory.
In our experience, the set of people who need robust software is larger than the set of people who need raw speed (and the speed price paid is small), so we default to the most robust model.
Also in our experience, the set of people who really should just use the
robust model, but instead think they need speed, is much larger than the set
of people who actually know what they're doing with Java memory management.
So please: we strongly recommend you start with a robust model and
only change the JNIMemoryManager.MemoryModel if your performance testing shows you
need speed. Don't say we didn't warn you.
| Model | Robustness | Speed |
|---|---|---|
| JAVA_STANDARD_HEAP (default) | +++++ | + |
| JAVA_DIRECT_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION | +++ | ++ |
| NATIVE_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION | +++ | +++ |
| JAVA_DIRECT_BUFFERS (not recommended) | + | ++++ |
| NATIVE_BUFFERS | + | +++++ |
Ferry objects have to allocate native memory to do their job -- it's the reason for Ferry's existence. And native memory management is very different than Java memory management (for example, native C++ code doesn't have a garbage collector). To make things easier for our Java friends, Ferry tries to make Ferry objects look like Java objects.
Which leads us to robustness. The more of these criteria we can hit with a
JNIMemoryManager.MemoryModel, the more robust it is: make() must
correctly allocate memory that can be accessed from native or Java code, and
calls to delete() must release that memory immediately.

Speed is how fast code executes under normal operating conditions. This is more subjective than it sounds (how do you define normal operating conditions?), but in general we define it as "generally plenty of heap space available".
Every object that is exposed from native code inherits from
RefCounted.
Ferry works by implementing a reference-counted memory management scheme in native code that is then manipulated from Java so you don't have to (usually) think about when to release native memory. Every time an object is created in native memory it has its reference count incremented by one; and everywhere inside the code we take care to release a reference when we're done.
This maps nicely to the Java model of memory management, but with the benefit that Java does all the releasing behind the scenes. When you pass an object from native code to Java, Ferry makes sure its reference count is incremented, and then when the Java Virtual Machine collects the instance, Ferry automatically decrements the reference count in native code.
In fact, in theory all you need to do is make a finalize() method on the Java object that decrements the reference count in the native code and everyone goes home happy.
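The finalize() idea above can be sketched with java.lang.ref.Cleaner, the modern, safer replacement for finalizers. This is a toy stand-in for a native allocation, not Ferry's implementation; all names here are illustrative:

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicInteger;

public class CleanerSketch {
    static final Cleaner CLEANER = Cleaner.create();
    // Stand-in for "bytes of native memory still allocated".
    static final AtomicInteger LIVE = new AtomicInteger();

    static class NativeHandle implements AutoCloseable {
        private final Cleaner.Cleanable cleanable;
        NativeHandle() {
            LIVE.incrementAndGet();
            // The cleanup action must not capture `this`, or it would never run.
            cleanable = CLEANER.register(this, LIVE::decrementAndGet);
        }
        // Explicit release, analogous to RefCounted.delete(); if you never
        // call it, the Cleaner runs the action after garbage collection.
        @Override public void close() { cleanable.clean(); }
    }

    public static void main(String[] args) {
        NativeHandle h = new NativeHandle();
        System.out.println("live=" + LIVE.get()); // live=1
        h.close();
        System.out.println("live=" + LIVE.get()); // live=0
    }
}
```

Note the catch this illustrates: the automatic path only fires when the collector happens to run, which is exactly the timeliness problem discussed next.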
So far so good, but it brings up a big problem: the Java garbage collector only sees the small proxy objects on the Java heap, not the large native allocations behind them, and finalization is not guaranteed to run promptly, so native memory can be held long after you are done with it. Ferry's RefCounted implementation solves these problems for you.
How, you ask?
In the event you need to manage memory more explicitly, every Ferry object has a copyReference() method that will create a new Java object that points to the same underlying native object.
And in the unlikely event you want to control EXACTLY when a native object
is released, each Ferry object has a RefCounted.delete() method
that you can use. Once you call delete(), you must ENSURE your object is
never referenced again from that Java object. Ferry tries to help you
avoid crashes if you accidentally use an object after deletion, but
we cannot offer 100% protection (specifically, if another thread is
accessing that object EXACTLY when you RefCounted.delete() it). If
you don't call RefCounted.delete(), we will call it at some point
in the future, but you can't depend on when (and depending on the
JNIMemoryManager.MemoryModel you are using, we may not be able to do it promptly).
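The copyReference()/delete() life cycle described above can be illustrated with a toy reference counter. This is a sketch of the pattern, not Ferry's actual classes:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of Ferry-style reference counting; the class and
// method names mirror the ideas in the text but are NOT Ferry's real API.
public class RefCountSketch {
    static class FakeNative {
        final AtomicInteger refs = new AtomicInteger(1); // born with one reference
        boolean freed = false;
        void release() { if (refs.decrementAndGet() == 0) freed = true; }
    }

    static class Proxy {
        private FakeNative target;
        Proxy(FakeNative t) { target = t; }
        // Like copyReference(): a new Java object sharing the same native object.
        Proxy copyReference() { target.refs.incrementAndGet(); return new Proxy(target); }
        // Like delete(): release our reference and forbid further use via this proxy.
        void delete() { target.release(); target = null; }
    }

    public static void main(String[] args) {
        FakeNative n = new FakeNative();
        Proxy a = new Proxy(n);
        Proxy b = a.copyReference(); // safe to hand b to another thread
        a.delete();                  // native object still alive: b holds a reference
        System.out.println(n.freed); // false
        b.delete();                  // last reference gone; native memory released
        System.out.println(n.freed); // true
    }
}
```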
What does this mean for you? If you're first writing code, don't worry about this. If you're instead trying to optimize for performance, first measure where your problems are, and if the fingers point at allocation in Ferry, then start trying different models.
But before you switch models, be sure to read the caveats and restrictions
on each of the non JAVA_STANDARD_HEAP models, and make sure you
have a good understanding of how Java
Garbage Collection works.
| Enum Constant | Description |
|---|---|
| JAVA_DIRECT_BUFFERS | Large memory blocks are allocated as direct ByteBuffer objects (as returned from ByteBuffer.allocateDirect(int)). |
| JAVA_DIRECT_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION | Large memory blocks are allocated as direct ByteBuffer objects (as returned from ByteBuffer.allocateDirect(int)), but the Java standard heap is informed of the allocation by also attempting to quickly allocate (and release) a buffer of the same size on the standard heap. |
| JAVA_STANDARD_HEAP | Large memory blocks are allocated in Java byte[] arrays, and passed back into native code. |
| NATIVE_BUFFERS | Large memory blocks are allocated in native memory, completely bypassing the Java heap. |
| NATIVE_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION | Large memory blocks are allocated in native memory, completely bypassing the Java heap, but Java is informed of the allocation by briefly creating (and immediately releasing) a Java standard heap byte[] array of the same size. |
| Modifier and Type | Method and Description |
|---|---|
| int | getNativeValue(): Get the native value to pass to native code. |
| static JNIMemoryManager.MemoryModel | valueOf(String name): Returns the enum constant of this type with the specified name. |
| static JNIMemoryManager.MemoryModel[] | values(): Returns an array containing the constants of this enum type, in the order they are declared. |
public static final JNIMemoryManager.MemoryModel JAVA_STANDARD_HEAP
Large memory blocks are allocated in Java byte[] arrays, and passed back into native code. Releasing of underlying native resources happens behind the scenes with no management required on the programmer's part.
This is the slowest model available.
The main decrease in speed occurs for medium-life-span objects. Short life-span objects (objects that die during the life-span of an incremental collection) are relatively efficient. Once an object makes it into the Tenured generation in Java, then unnecessary copying stops until the next full collection.
However while in the Eden generation but surviving between incremental collections, large native buffers may get copied many times unnecessarily. This copying can have a significant performance impact.
Native memory is released when delete() is called, when the item is marked for collection, or when we're in low-memory conditions and the item is unused.

When using this model, these tips may increase performance, although in some situations they may instead decrease your performance. Always measure.

- Use the parallel garbage collector by passing this option to java on startup: -XX:+UseParallelGC. The concurrent garbage collector works well too; to use that, pass these options instead: -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
- Call delete() on every RefCounted object when done with your objects to let Java know it doesn't need to copy the item across a collection. You can also use copyReference() to get a new Java version of the same Ferry object that you can pass to another thread if you don't know when delete() can be safely called.

See also: JNIMemoryManager.MemoryModel.

public static final JNIMemoryManager.MemoryModel JAVA_DIRECT_BUFFERS

Large memory blocks are allocated as direct
ByteBuffer objects
(as returned from ByteBuffer.allocateDirect(int)).
This model is not recommended. It is faster than
JAVA_STANDARD_HEAP, but because of how Sun implements direct
buffers, it works poorly in low memory conditions. This model has all the
caveats of the NATIVE_BUFFERS model, but allocation is slightly
slower.
This is the 2nd fastest model available. In tests it is generally 20-30%
faster than the JAVA_STANDARD_HEAP model.
It uses Java to allocate direct memory, which is slightly slower than
using NATIVE_BUFFERS, but much faster than using the
JAVA_STANDARD_HEAP model.
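The direct buffers this model relies on are plain JDK API, which you can look at without Ferry. A quick demonstration of the difference between heap and direct buffers:

```java
import java.nio.ByteBuffer;

// Heap vs direct buffers using only standard JDK calls (no Ferry required).
public class DirectBufferDemo {
    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.allocate(1 << 20);         // lives on the Java heap
        ByteBuffer direct = ByteBuffer.allocateDirect(1 << 20); // lives outside the heap
        System.out.println(heap.isDirect());   // false
        System.out.println(direct.isDirect()); // true
        // The direct buffer's memory is only reclaimed when the ByteBuffer
        // object itself is collected, which is why this model behaves
        // poorly in low-memory conditions.
    }
}
```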
The downside is that for high-performance applications, you may need to
explicitly manage RefCounted object life-cycles with
RefCounted.delete() to ensure direct memory is released in a
timely manner.
Native memory is released when delete() is called, or when the item is marked for collection.

When using this model, these tips may increase performance, although in some situations they may instead decrease performance. Always measure.

- Increase the maximum amount of direct memory available to Java if you see OutOfMemoryError exceptions, by passing this option to java on startup: -XX:MaxDirectMemorySize=<size>
- Objects that are allocated in native memory have a small proxy object representing them in the Java heap. By decreasing your heap size, those proxy objects will exert more collection pressure, and hopefully cause Java to do incremental collections more often (and notice your unused objects). To set the maximum size of your Java heap, pass this option to java on startup: -Xmx<size>. To change the minimum size of your Java heap, pass this option on startup: -Xms<size>
- Use the parallel garbage collector by passing this option to java on startup: -XX:+UseParallelGC. The concurrent garbage collector works well too; to use that, pass these options instead: -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
- Call delete() on every RefCounted object when done with your objects to let Java know it doesn't need to copy the item across a collection. You can also use copyReference() to get a new Java version of the same Ferry object that you can pass to another thread if you don't know when delete() can be safely called.

See also: the JAVA_DIRECT_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION model.

public static final JNIMemoryManager.MemoryModel JAVA_DIRECT_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION

Large memory blocks are allocated as direct
ByteBuffer objects
(as returned from ByteBuffer.allocateDirect(int)), but the Java
standard heap is informed of the allocation by also attempting to
quickly allocate (and release) a buffer of the same size on the standard
heap.
This model can work well if your application is mostly single-threaded, and your Ferry application is doing most of the memory allocation in your program. The trick of informing Java will put pressure on the JVM to collect appropriately, but by not keeping the references we avoid unnecessary copying for objects that survive collections.
This heuristic is not failsafe though, and can still lead to collections not occurring at the right time for some applications.
It is similar to the
NATIVE_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION model and in
general we recommend that model over this one.
This model trades off some robustness for some speed. In tests it is
generally 10-20% faster than the JAVA_STANDARD_HEAP model.
It is worth testing as a way of avoiding the explicit memory management
needed to effectively use the JAVA_DIRECT_BUFFERS model.
However, the heuristic used is not fool-proof, and therefore may
sometimes lead to unnecessary collection or OutOfMemoryError
because Java didn't collect unused references in the standard heap in
time (and hence did not release underlying native references).
Native memory is released when delete() is called, or when the item is marked for collection. Collections happen more frequently than under the JAVA_DIRECT_BUFFERS model due to informing the standard heap at allocation time. Watch for OutOfMemoryError errors on the direct heap.

When using this model, these tips may increase performance, although in some situations they may instead decrease performance. Always measure.

- Increase the maximum amount of direct memory available to Java if you see OutOfMemoryError exceptions, by passing this option to java on startup: -XX:MaxDirectMemorySize=<size>
- Objects that are allocated in native memory have a small proxy object representing them in the Java heap. By decreasing your heap size, those proxy objects will exert more collection pressure, and hopefully cause Java to do incremental collections more often (and notice your unused objects). To set the maximum size of your Java heap, pass this option to java on startup: -Xmx<size>. To change the minimum size of your Java heap, pass this option on startup: -Xms<size>
- Use the parallel garbage collector by passing this option to java on startup: -XX:+UseParallelGC. The concurrent garbage collector works well too; to use that, pass these options instead: -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
- Call delete() on every RefCounted object when done with your objects to let Java know it doesn't need to copy the item across a collection. You can also use copyReference() to get a new Java version of the same Ferry object that you can pass to another thread if you don't know when delete() can be safely called.

See also: the JAVA_STANDARD_HEAP model.

public static final JNIMemoryManager.MemoryModel NATIVE_BUFFERS

Large memory blocks are allocated in native memory, completely bypassing the Java heap.
It is much faster than the JAVA_STANDARD_HEAP,
but much less robust.
This is the fastest model available. In tests it is generally 30-40%
faster than the JAVA_STANDARD_HEAP model.
It uses the native operating system to allocate direct memory, which
is slightly faster than using JAVA_DIRECT_BUFFERS, and much
faster than using the JAVA_STANDARD_HEAP model.
The downside is that for high-performance applications, you may need to
explicitly manage RefCounted object life-cycles with
RefCounted.delete() to ensure native memory is released in a
timely manner.
Allocating objects with make() and releasing objects with RefCounted.delete() works like normal, but because Java has no idea how much space is actually allocated in native memory, it may not collect RefCounted objects as quickly as you need it to (it will eventually collect and free all references, though).

Native memory is released when delete() is called, or when the item is marked for collection. Watch for RefCounted objects you created surviving longer than you want them to, and therefore not releasing native memory in a timely fashion.

When using this model, these tips may increase performance, although in some situations they may instead decrease performance. Always measure.

- Consider decreasing your Java heap size; this can also help you avoid OutOfMemoryError exceptions. Objects that are allocated in native memory have a small proxy object representing them in the Java heap, and by decreasing your heap size, those proxy objects will exert more collection pressure, hopefully causing Java to do incremental collections more often (and notice your unused objects). To set the maximum size of your Java heap, pass this option to java on startup: -Xmx<size>. To change the minimum size of your Java heap, pass this option on startup: -Xms<size>
- Use the parallel garbage collector by passing this option to java on startup: -XX:+UseParallelGC. The concurrent garbage collector works well too; to use that, pass these options instead: -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
- Use the JNIMemoryManager.startCollectionThread() method to start up a thread dedicated to releasing objects as soon as they are enqueued in a ReferenceQueue, rather than (the default) waiting for the next Ferry allocation or an explicit JNIMemoryManager.collect() call. Or periodically call JNIMemoryManager.collect() yourself.
- Call delete() on every RefCounted object when done with your objects to let Java know it doesn't need to copy the item across a collection. You can also use copyReference() to get a new Java version of the same Ferry object that you can pass to another thread if you don't know when delete() can be safely called.

See also: the NATIVE_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION model.
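The dedicated-collection-thread tip above boils down to draining a ReferenceQueue on its own thread. Below is a minimal, self-contained sketch of that pattern; it is not JNIMemoryManager's actual code, and the enqueue here is triggered manually to keep the example deterministic (normally the garbage collector enqueues the reference):

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.util.concurrent.CountDownLatch;

public class CollectionThreadSketch {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        CountDownLatch released = new CountDownLatch(1);

        Object proxy = new Object(); // stand-in for a Ferry proxy object
        PhantomReference<Object> ref = new PhantomReference<>(proxy, queue);

        Thread collector = new Thread(() -> {
            try {
                Reference<?> r = queue.remove(); // blocks until something is enqueued
                // Real code would free the native memory tracked for `r` here.
                released.countDown();
            } catch (InterruptedException ignored) { }
        });
        collector.setDaemon(true);
        collector.start();

        ref.enqueue(); // stand-in for the GC noticing `proxy` is unreachable
        released.await();
        System.out.println("released");
    }
}
```

The point of the dedicated thread is latency: native memory is freed as soon as the reference is enqueued, instead of waiting for the next allocation or an explicit collect() call.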
public static final JNIMemoryManager.MemoryModel NATIVE_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION

Large memory blocks are allocated in native memory, completely bypassing the Java heap, but Java is informed of the allocation by briefly creating (and immediately releasing) a Java standard heap byte[] array of the same size.

It is faster than the JAVA_STANDARD_HEAP model, but less robust.
This model can work well if your application is mostly single-threaded, and your Ferry application is doing most of the memory allocation in your program. The trick of informing Java will put pressure on the JVM to collect appropriately, but by not keeping the references to the byte[] array we temporarily allocate, we avoid unnecessary copying for objects that survive collections.
This heuristic is not failsafe though, and can still lead to collections not occurring at the right time for some applications.
It is similar to the
JAVA_DIRECT_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION model.
In tests this model is generally 25-30% faster than the
JAVA_STANDARD_HEAP model.
It uses the native operating system to allocate direct memory, which
is slightly faster than using
JAVA_DIRECT_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION, and much
faster than using the JAVA_STANDARD_HEAP model.
It is worth testing as a way of avoiding the explicit memory management
needed to effectively use the NATIVE_BUFFERS model. However, the
heuristic used is not fool-proof, and therefore may sometimes lead to
unnecessary collection or OutOfMemoryError because Java didn't
collect unused references in the standard heap in time (and hence did not
release underlying native references).
Native memory is released when delete() is called, or when the item is marked for collection. Collections happen more frequently than under the NATIVE_BUFFERS model due to informing the standard heap at allocation time. Watch for OutOfMemoryError errors on the native heap.

When using this model, these tips may increase performance, although in some situations they may instead decrease performance. Always measure.

- Consider decreasing your Java heap size; this can also help you avoid OutOfMemoryError exceptions. Objects that are allocated in native memory have a small proxy object representing them in the Java heap, and by decreasing your heap size, those proxy objects will exert more collection pressure, hopefully causing Java to do incremental collections more often (and notice your unused objects). To set the maximum size of your Java heap, pass this option to java on startup: -Xmx<size>. To change the minimum size of your Java heap, pass this option on startup: -Xms<size>
- Use the parallel garbage collector by passing this option to java on startup: -XX:+UseParallelGC. The concurrent garbage collector works well too; to use that, pass these options instead: -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
- Use the JNIMemoryManager.startCollectionThread() method to start up a thread dedicated to releasing objects as soon as they are enqueued in a ReferenceQueue, rather than (the default) waiting for the next Ferry allocation or an explicit JNIMemoryManager.collect() call. Or periodically call JNIMemoryManager.collect() yourself.
- Call delete() on every RefCounted object when done with your objects to let Java know it doesn't need to copy the item across a collection. You can also use copyReference() to get a new Java version of the same Ferry object that you can pass to another thread if you don't know when delete() can be safely called.

See also: the NATIVE_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION model.
public static JNIMemoryManager.MemoryModel[] values()

Returns an array containing the constants of this enum type, in the order they are declared. This method may be used to iterate over the constants as follows:

```java
for (JNIMemoryManager.MemoryModel c : JNIMemoryManager.MemoryModel.values())
    System.out.println(c);
```
public static JNIMemoryManager.MemoryModel valueOf(String name)

Returns the enum constant of this type with the specified name.

Parameters:
name - the name of the enum constant to be returned.

Throws:
IllegalArgumentException - if this enum type has no constant with the specified name
NullPointerException - if the argument is null

public int getNativeValue()

Get the native value to pass to native code.
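The values()/valueOf(String)/getNativeValue() trio can be tried on a toy enum that mirrors this class's shape. The constants and numeric values below are made up for illustration; they are not Ferry's actual native values:

```java
public class EnumDemo {
    enum Model {
        STANDARD(0), DIRECT(1), NATIVE(2);
        private final int nativeValue;
        Model(int v) { nativeValue = v; }
        int getNativeValue() { return nativeValue; }
    }

    public static void main(String[] args) {
        for (Model m : Model.values())          // iterates in declaration order
            System.out.println(m + "=" + m.getNativeValue());
        Model m = Model.valueOf("DIRECT");      // lookup by constant name
        System.out.println(m.getNativeValue()); // 1
    }
}
```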
Copyright © 2018 Humble Software. All rights reserved.