Andy Malakov software blog

Wednesday, November 11, 2009

Stack allocation in Java is still a myth

There were rumors that Mustang (Java 6) would get on-stack allocation as part of HotSpot's optimizations.

Four-plus years later, consider this method:


public long encode(String input) {
    final byte[] buffer = new byte[8];

    // ... encode input into buffer

    // ... convert buffer into a long result

    return result;
}


I was hoping the JVM would allocate the buffer on the stack, since it does not escape this method. Running this test 10M times with -verbosegc shows extensive GC work (1.6.0_16-b01 64-bit server JVM with the -XX:+DoEscapeAnalysis option).
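For reference, the test driver looks roughly like this (a minimal sketch; the class name EscapeTest, the toy encoding, and the "12345678" input are illustrative, not the exact benchmark code):

public class EscapeTest {
    public static void main(String[] args) {
        long checksum = 0;
        for (int i = 0; i < 10000000; i++)          // 10M iterations
            checksum += encode("12345678");
        System.out.println(checksum);               // keep the result live so the loop is not eliminated
    }

    // Stand-in for encode() above: the buffer never escapes this method.
    static long encode(String input) {
        final byte[] buffer = new byte[8];
        for (int i = 0; i < buffer.length && i < input.length(); i++)
            buffer[i] = (byte) input.charAt(i);     // toy "encoding"
        long result = 0;
        for (int i = 0; i < buffer.length; i++)
            result = (result << 8) | (buffer[i] & 0xFF);   // pack the 8 bytes into a long
        return result;
    }
}

Run with: java -server -verbosegc -XX:+DoEscapeAnalysis EscapeTest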



On the positive side, GC is very fast. Consider three functions:

  1. encodeNewBuffer() allocates a new byte array to encode the input string.

  2. encodeSynchronizedField() uses a private instance field, guarded by a synchronized block.

  3. encodeThreadLocalField() uses a ThreadLocal cache to encode the input string.



Here is the code:

long encodeNewBuffer(String input) {
    final byte[] buffer = new byte[8];
    return f(buffer);
}

/////////////

private final byte[] buffer = new byte[8];

synchronized long encodeSynchronizedField(String input) {
    return f(buffer);
}

/////////////

private final ThreadLocal<byte[]> threadLocal = new ThreadLocal<byte[]>() {
    @Override
    protected byte[] initialValue() {
        return new byte[8];   // one buffer per thread
    }
};

long encodeThreadLocalField(String input) {
    final byte[] buffer = threadLocal.get();
    return f(buffer);
}


The GC-based method is the winner:


encodeNewBuffer():          4.108 sec
encodeSynchronizedField():  5.322 sec
encodeThreadLocalField():   5.411 sec