automatic heap sizing

Thus spake Joel Uckelman:

I’ve been thinking about this again, in light of the contentious debate
happening on ConsimWorld about heap sizing.

It’s easy to check how much memory would be used if all of the images in a
module were loaded at once. It took me about 10 minutes to write a
program which does this. So, I can find out that, e.g., the images in the
Clash of Monarchs module amount to 152MB in memory, and that Combat
Commander: Europe with all of the extensions loaded weighs in at 1896MB.
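The measurement described above can be sketched roughly as follows. This is a hypothetical reconstruction, not Joel's actual program: it assumes a module is a ZIP archive with an images/ directory (as VASSAL modules are) and that every image decodes to 32-bit ARGB, i.e. 4 bytes per pixel; real heap use will vary with image type and VM overhead.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Collections;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import javax.imageio.ImageIO;

// Rough estimate of the heap needed to hold every image in a module at
// once, assuming 4 bytes per pixel (32-bit ARGB).
public class ModuleImageFootprint {
  public static long estimate(File moduleFile) throws IOException {
    long bytes = 0;
    try (ZipFile zip = new ZipFile(moduleFile)) {
      for (ZipEntry e : Collections.list(zip.entries())) {
        if (e.isDirectory() || !e.getName().startsWith("images/")) continue;
        BufferedImage img = ImageIO.read(zip.getInputStream(e));
        if (img != null) {
          bytes += 4L * img.getWidth() * img.getHeight();
        }
      }
    }
    return bytes;
  }

  public static void main(String[] args) throws IOException {
    final long bytes = estimate(new File(args[0]));
    System.out.printf("%.1f MB if fully loaded%n", bytes / (1024.0 * 1024.0));
  }
}
```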

For CoM, that number might be pretty close to the minimum memory you need
to use the module, but the number for CC:E is nonsense—it includes every
map, and every card in every deck, even though you only play with one map
and two decks at a time.

This problem is even worse for games where the same maps are not set up for
each scenario, or where the user picks which ones he wants via the board
chooser. For CC:E, our knowledge of the game tells us that we never need
to load more than one map at a time; for these cases, I have no idea what
we could do to analyze the module automatically to determine a minimum
feasible max heap size.

I wonder if we could instead do something adaptive. Two things we can
track easily are OutOfMemoryErrors (OOMEs) and cache misses. It’s clear that
the heap is too small for what’s being put into it when an OOME happens,
so in that case we could automatically turn on memory-mapped images or
increase the max heap size somewhat. (Maybe increase by 10% on each OOME,
so that we’ll find something which works without overshooting too much?)
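The grow-by-10%-per-OOME idea might look something like this. The class and method names here are hypothetical (nothing like this exists in VASSAL yet), and it assumes the launcher can store a max-heap preference and relaunch the module with a new -Xmx:

```java
// Sketch of the proposed bump-on-OOME policy: grow the max heap 10% after
// each OutOfMemoryError and relaunch, capped by physical RAM so we don't
// overshoot. HeapPolicy and its names are illustrative, not VASSAL API.
public class HeapPolicy {
  private static final double GROWTH = 1.10;  // +10% per OOME

  // Max heap (in MB) to pass as -Xmx on the next launch.
  public static int nextMaxHeapMB(int currentMB, int physicalRamMB) {
    int grown = (int) Math.ceil(currentMB * GROWTH);
    return Math.min(grown, physicalRamMB);  // never exceed physical RAM
  }

  public static void main(String[] args) {
    int heap = 256;
    // Simulate three successive OOMEs on a 2 GB machine.
    for (int i = 0; i < 3; i++) {
      heap = nextMaxHeapMB(heap, 2048);
      System.out.println("relaunch with -Xmx" + heap + "M");
    }
  }
}
```

Geometric growth finds a workable size in a few restarts without the large overshoot a doubling policy would produce.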

Frequent cache misses are the other symptom of having a heap which is too
small. Some image cache misses are necessary—every image requested will
be a miss the first time—and most machines will have sufficient spare
CPU cycles that occasional cache misses won’t be noticeable for the user.
If cache misses become frequent, however, the cache might start to thrash.
E.g., I’ve found that it’s possible to drive the Case Blue module into
a load-clear-load cycle if you give it just enough heap to load one
or two maps, but not enough to load more, and then you scroll to a place
where several maps come together. Cache misses are something we can
measure. It wouldn’t be hard to modify the image cache to track how many
non-initial misses are happening per second. The only trick here is
figuring out if there’s a number of misses per second which indicates
insufficient heap space and whether that number is (more or less) constant
across machines. (My gut feeling is that it should be.)
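Tracking non-initial misses per second could be as simple as the sketch below. The one-second window and the idea of a fixed threshold are assumptions for illustration; as noted above, the right threshold (if a machine-independent one exists) would have to be found empirically.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Sketch of per-second tracking of non-initial cache misses. A miss on a
// key we have loaded before means the entry was evicted; a high rate of
// such misses is the thrashing signal described above.
public class MissRateMonitor {
  private final Set<String> seen = new HashSet<>();       // keys ever loaded
  private final Deque<Long> recent = new ArrayDeque<>();  // miss timestamps (ms)

  // Record a miss; returns true only if it was a non-initial miss.
  public boolean recordMiss(String key, long nowMillis) {
    if (seen.add(key)) {
      return false;  // first request for this image: an expected miss
    }
    recent.addLast(nowMillis);
    // Drop misses older than the trailing one-second window.
    while (!recent.isEmpty() && nowMillis - recent.peekFirst() > 1000) {
      recent.removeFirst();
    }
    return true;
  }

  // Non-initial misses in the last second; compare against a threshold.
  public int missesPerSecond() {
    return recent.size();
  }
}
```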



Messages mailing list …

Post generated using Mail2Forum (

On May 16, 2009, at 8:47 AM, Joel Uckelman wrote:

The real problem, and what makes this devilishly difficult, is that
the module will generally be for all practical purposes unusable well
before you get the OOME. Once memory gets really tight, Java will
end up spending so much time in GC, scavenging each little bit of
available memory, that the program effectively grinds to a halt. I
think this effect will kick in well before, and with much worse
consequences than, swapping to virtual memory.

AFAIK, using VM shouldn’t be too bad a solution for Mac OS X and
Linux. The Unix-based operating systems have efficient VM
implementations. So I wouldn’t really stress too much about using
VM. I think you will get better performance with reasonable memory
sizes on those systems even if it means you have to use virtual
memory. On Windows the situation may be different, since I’ve heard
that the VM implementation is not quite as nice. I don’t know if
this changed with Vista.

So, for a general solution, I would think that picking some
reasonable minimum values (such as a min RAM of 128M or 256M and a
max RAM of 1024M) would be the best and also simplest solution. If
one wants, the initial guess as to the upper limit could be dialed
down a bit on RAM-starved machines. But it should be possible to
increase the memory beyond what our heuristic suggests. So I
certainly wouldn’t make it a hard limit. If one wishes, a note that
the module requires a lot of RAM and that it may run a bit sluggishly
on a given user’s system could be issued. (They won’t read it, but
maybe it will make US feel better).
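The simple heuristic described above could be sketched like this. The class name and the exact numbers (the 1024 MB ceiling, halving on RAM-starved machines, the 256 MB floor) are illustrative choices, and the result is only an initial guess, never a hard limit the user cannot raise:

```java
// Sketch of a fixed-defaults heuristic: a 1024 MB default ceiling, dialed
// down on RAM-starved machines, with a usable floor. All values here are
// illustrative; the user should always be able to override the result.
public class HeapHeuristic {
  public static int initialMaxHeapMB(int physicalRamMB) {
    int max = 1024;                      // default upper limit
    if (physicalRamMB < 1024) {
      max = physicalRamMB / 2;           // dial down when RAM is scarce
    }
    return Math.max(max, 256);           // keep at least a usable floor
  }
}
```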

I think we’re spending too much effort trying to fix something that
doesn’t have an elegant solution. So I would go with something
simple that doesn’t get in the way of users increasing memory if needed.


Thus spake Thomas Russ:

That’s not been my experience with how OOMEs happen in VASSAL. From
what I’ve seen, there tends to be a lot of free heap when an OOME
occurs, because an OOME can only happen after all of the SoftReferences
in the image cache have been cleared. So you’ll have many, many MB
of empty heap, just less than you need to load a particular image.
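A minimal sketch of the kind of SoftReference-backed cache being described, to make the mechanism concrete. The JVM guarantees that all softly reachable objects are cleared before an OutOfMemoryError is thrown, which is why the heap is largely empty by the time an OOME escapes:

```java
import java.awt.image.BufferedImage;
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a SoftReference-backed image cache. The GC clears all
// soft references before throwing OutOfMemoryError, so by the time an OOME
// reaches the user, this cache has already been emptied.
public class SoftImageCache {
  private final Map<String, SoftReference<BufferedImage>> cache =
      new HashMap<>();

  public void put(String key, BufferedImage img) {
    cache.put(key, new SoftReference<>(img));
  }

  // Returns null both for images never cached and for entries the GC
  // cleared under memory pressure; either way the caller must reload.
  public BufferedImage get(String key) {
    SoftReference<BufferedImage> ref = cache.get(key);
    return ref == null ? null : ref.get();
  }
}
```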

This is what worries me. The OS most prone to problems is also the one
where our least savvy users are concentrated. I suspect that without
some memory check, we’ll end up with a lot of Windows users who will
say “VASSAL crashed my computer!”

Can you think of a way that we could detect that a module requires a
lot of RAM?

There has to be a solution which, from the outside, would appear to do
the right thing almost all the time. If there weren’t, then I don’t know
how we’d be able to recommend heap sizes to users ourselves.

