I’ve been using a lot of very large maps in VASSAL, as ADC2 boards tend to be on the large side, and I’ve been looking at ways to speed up image loading. There seem to be a couple of bottlenecks that I don’t know how to solve, so I’m wondering if any of you (particularly Joel, of course) knew the answers to these.
ImageUtils.isLargeImage returns true if the image takes up more than 1 MB, so a map is considered large if it’s bigger than 512x512 pixels. That strikes me as very small for a map. Map rendering is much, much faster when this threshold is increased substantially. Right now, in my local copy, I just have it return false, which seems to be the only way to change its behaviour.
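To make the cutoff concrete, here’s a sketch of the arithmetic (class and constant names are mine, not VASSAL’s): a 32-bit ARGB pixel is 4 bytes, so 1 MB holds exactly 512x512 pixels.

```java
public class LargeImageCheck {
    // Hypothetical reconstruction of the 1 MB cutoff described above.
    static final long LARGE_IMAGE_BYTES = 1L << 20; // 1 MB

    static boolean isLargeImage(int w, int h) {
        // 4 bytes per ARGB pixel; 512 * 512 * 4 is exactly 1 MB.
        return 4L * w * h > LARGE_IMAGE_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(isLargeImage(512, 512));   // false: exactly 1 MB
        System.out.println(isLargeImage(513, 512));   // true: just over
        System.out.println(isLargeImage(2000, 2000)); // true: ~15 MB
    }
}
```

At this cutoff even a modest 2000x2000 ADC2 map weighs in around 15 MB, which is why nearly every imported map counts as “large.”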
VASSAL really starts to crawl when it gets to toIntARGBLarge. What I would like to do for the ADC2 importer is save the map in a format that is already ARGB, but it appears that PNG is typically interpreted as format CUSTOM. Is there a format I can save to that would load automatically as ARGB? It also occurs to me that all maps should be saved in a format that is already ARGB, since the files are zipped anyway. Any solutions here?
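For reference, the only workaround I know of is the conversion itself: redraw whatever ImageIO hands back into a fresh TYPE_INT_ARGB image. A minimal sketch (class and method names are mine):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class ToARGB {
    // ImageIO usually returns a TYPE_CUSTOM image for PNGs, so we redraw it
    // into a new TYPE_INT_ARGB image. This is exactly the copy the thread is
    // trying to avoid; I don't know of a file format that skips it.
    static BufferedImage toIntARGB(BufferedImage src) {
        if (src.getType() == BufferedImage.TYPE_INT_ARGB) return src;
        BufferedImage dst = new BufferedImage(
            src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = dst.createGraphics();
        g.drawImage(src, 0, 0, null);
        g.dispose();
        return dst;
    }

    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(8, 8, BufferedImage.TYPE_3BYTE_BGR);
        System.out.println(toIntARGB(src).getType()
            == BufferedImage.TYPE_INT_ARGB); // true
    }
}
```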
By the way, I’m trying to do this without a profiler. Brent, can you help me out here? I sent you a private message on the forum, but you may not have gotten it.
isLargeImage() is used to determine whether an image should be put out to
a memory-mapped file when memory-mapped files are turned on. There’s little
benefit in doing this with small images, so those go into RAM anyway. I
chose 1 MB arbitrarily; possibly this should be significantly higher.
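The idea, as I understand it, can be sketched like this (a toy illustration, not VASSAL’s actual code): the pixel data lives in a file mapped outside the Java heap, and an IntBuffer view reads and writes 4-byte ARGB values in place.

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.IntBuffer;
import java.nio.channels.FileChannel;

public class MappedPixels {
    // Toy sketch of memory-mapped pixel storage: one 4-byte ARGB int per
    // pixel, backed by a temp file instead of the Java heap.
    static int roundTrip(int argb) throws Exception {
        final int w = 512, h = 512;
        File f = File.createTempFile("pixels", ".tmp");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw");
             FileChannel ch = raf.getChannel()) {
            IntBuffer pix = ch.map(FileChannel.MapMode.READ_WRITE,
                                   0, 4L * w * h).asIntBuffer();
            pix.put(0, argb);  // write pixel (0,0) through the mapping
            return pix.get(0); // read it back
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(Integer.toHexString(roundTrip(0xFF0000FF)));
    }
}
```

Wiring such a mapping into a BufferedImage takes a custom DataBuffer over the mapped buffer, which is the expensive-to-get-right part.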
How much did you increase the cutoff when you were experimenting with it?
I messed with this for days to try to get ImageIO to read directly to an
ARGB image instead of a CUSTOM, with no luck. If you can find a way to
do this, I’d appreciate it.
I might be a bad example. I don’t know if ADC2 maps are always
significantly larger than those of VASSAL, but I don’t think I’ve
imported a single one that was less than 2000x2000 pixels–and that
would be a small one. Many of those maps are made up of discrete
elements, however, so memory is not an issue for a lot of these. The
newer modules do use scanned images, and ADC2 is quite responsive.
I’m just returning false so I don’t have to think about it. Good idea
for a preference. Something along the lines of “My computer is a
I’ve had no luck, either. All the ImageReaders I looked at return a
CUSTOM format, which apparently basically means indexed (are PNG files
really indexed?), and JPG is not a sensible option. The other option
is to subclass an ImageReader for some really basic ARGB format. How
much would we lose by saving a raw image to a ZIP file?
It already is. You can turn off memory-mapped files.
PNG supports several different color models. If your image contains few
enough colors (<= 256) that it can be indexed in one byte, then it will
be dramatically smaller saved indexed than as ARGB tuples, so most
reasonable programs which create PNGs will do that. So, yes, those PNGs
which can be indexed often are.
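You can probe what you got by checking the BufferedImage type after reading. A quick experiment (not VASSAL code; in my experience palette PNGs tend to come back as TYPE_BYTE_INDEXED while RGBA ones are typically TYPE_CUSTOM, but the exact result depends on both the encoder and the reader):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class PngTypeProbe {
    // Write a small indexed image as PNG and see what type it reads back as.
    static int roundTripType() throws Exception {
        BufferedImage img =
            new BufferedImage(16, 16, BufferedImage.TYPE_BYTE_INDEXED);
        File f = File.createTempFile("probe", ".png");
        try {
            ImageIO.write(img, "png", f);
            return ImageIO.read(f).getType(); // TYPE_CUSTOM is 0
        }
        finally {
            f.delete();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("read-back type = " + roundTripType());
    }
}
```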
I did an experiment a while back to see how well reading raw images from
disk worked. The result was that disk I/O is ridiculously slow. Reading
compressed raw images would doubtless be much better, but either way we
end up creating an image in the archive which isn’t readable by any other
program, which I think disqualifies that as a solution.
You really, really don’t want to try subclassing ImageReader. There are
way too many different kinds of PNGs you’d need to handle. Then again,
you did reverse-engineer the ADC2 format, so I guess that means you’re
a masochist, and you might enjoy it. Heh. But even if you did succeed at
getting an ImageReader to read directly to a TYPE_INT_ARGB BufferedImage,
you’d have the following problem: Your ImageReader would not be able to
do that (efficiently) without having a reference to the WritableRaster
in the BufferedImage. Once you get a reference to an image’s WritableRaster,
it is impossible for that image to have automatic hardware acceleration.
So, then you’re stuck copying the image you read in to a fresh image
for which you have no reference to the WritableRaster in order to get
decent drawing performance. This sucks, but there is no way to untaint
an image once you have its WritableRaster.
As it is, you need to do a copy of any image you get from ImageIO anyway,
because they’ll often be of TYPE_CUSTOM. (Generally, you want to copy to
a type which is either the one you get from the default GraphicsContext,
or one which is rapidly convertible to it, as that will give you the best
speed when painting.)
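That copy can be sketched like this (class and method names are mine): redraw whatever ImageIO returned into an image matching the default screen’s pixel layout, which keeps it eligible for managed, hardware-accelerated blits.

```java
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Transparency;
import java.awt.image.BufferedImage;

public class CompatibleCopy {
    // Copy src into an image compatible with the default screen device.
    // TRANSLUCENT preserves the alpha channel.
    static BufferedImage toCompatible(BufferedImage src) {
        BufferedImage dst;
        if (GraphicsEnvironment.isHeadless()) {
            // No screen to match; fall back to plain INT_ARGB.
            dst = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_ARGB);
        }
        else {
            GraphicsConfiguration gc = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice()
                .getDefaultConfiguration();
            dst = gc.createCompatibleImage(
                src.getWidth(), src.getHeight(), Transparency.TRANSLUCENT);
        }
        Graphics2D g = dst.createGraphics();
        g.drawImage(src, 0, 0, null);
        g.dispose();
        return dst;
    }

    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(4, 4, BufferedImage.TYPE_3BYTE_BGR);
        BufferedImage dst = toCompatible(src);
        System.out.println(dst.getWidth() + "x" + dst.getHeight()); // 4x4
    }
}
```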
I took apart all sorts of things. Reverse-engineering a binary file
format is still not my idea of a good time.
Ok. I’m looking forward to what you find. I’m not really satisfied with
performance right now, but I also don’t see where we can be more efficient
anymore (without using JOGL). The last time I profiled image loading, the
process time was split between a method buried deep within ImageIO that was
being called millions of times (once for each pixel, as it turned out) and
the image type conversion.
Cool. I work on clearing algorithms for combinatorial auctions, which I
find to be inscrutable enough that I don’t need something that complex
in my spare time. Heh.
Yes. We can’t use it because the goofy license it’s under (JRL) is
LGPL-incompatible. It might be worth testing it to see how fast it is,
and if it’s significantly faster at loading images, take a look at what
it is they’re doing.
Yeah. It’s as though Sun deliberately set out to make JAI useless for
Hold on, though. I see that while JAI itself is JRL, jai-imageio
is BSD, so if jai-imageio can be used independently of jai, then
we could use it.
Turning off memory-mapping due to an OOME is going to be the opposite of
what you want: that would result in trying to put more into RAM after
an OOME, not less, which is just going to cause you another OOME.
If you mean turning on memory-mapping due to an OOME, that’s also going
to be expensive, because having an OOME will cause a portion of the
image cache to clear. Suppose that you load 200MB of images, and then
you get an OOME as a result of trying to allocate another large image.
That will probably have caused the cache to dump your previously-loaded
images, so now you have to endure loading them again later after switching
to memory-mapped files.