I’ve converted the Python demo I made a few weeks ago to C++. I’m posting a link to an archive containing the source code here for anyone who wants to try it or have a look. The dependencies are only OpenGL, GLUT, and libjpeg. I haven’t had time to do a Windows build yet; I intend to try that this evening. I’m assuming that anyone on Linux will be able to compile this with no problems; hopefully this is also the case on Macs.
The good:
Fast. The parts which run in software (as opposed to on the graphics hardware) are much, much faster than in Python. This is unsurprising, but nice to see. The only significant thing which happens in software in the demos is image loading. The map image I’ve been using for testing is 8149x3514. On my laptop, it takes about 5 seconds to load this image in Python; in C++, libjpeg loads it in under one second.
Very low RAM usage. Image data takes up close to 100% of the memory the demo uses. (This is similar to VASSAL.) The test map is about 81MB in memory. However, once we’ve carved it into texture tiles and handed those off to OpenGL, we can free all the memory we allocated to store image data—OpenGL stores textures in video RAM. The demo was using about 7MB RAM for me after image loading was finished; the max was around 150MB (for a few seconds) during tile carving.
Little code. The C++ demo is about 1200 lines, approximately twice the length of the Python demo. Half of the extra “code” comes from lines which are blank or contain only a curly brace (332 lines); the rest comes from the code which gets data from libjpeg and from managing OpenGL handles properly rather than leaking them.
The bad:
I’m not seeing any bad, yet, actually.
The ugly:
Building for multiple platforms. I can (probably) make Windows builds on Linux using the MinGW cross-compiler. I’ve done that once before, for mkhexgrid. I have no idea whether I can cross-compile on Linux for Mac OS. I’d very much like to be able to build for all platforms in one place.
Oops. Yes, it works. However, strangely it shifts to all blue whenever I
pan the map. But when I move a counter it goes back to normal. For some
reason the R and G channels are getting zeroed out. Initially, I thought it
was because my GPU didn’t support the encoding scheme you have, but it’s
switching back and forth. I’ve never seen that before.
–
Michael Kiefte, Ph.D.
Associate Professor
School of Human Communication Disorders
Dalhousie University
Halifax, Nova Scotia, Canada
tel: +1 902 494 5150
fax: +1 902 494 5151
That’s strange. The encoding scheme I’m using is GL_RGB (no alpha, because
the source images are JPEGs). I wonder if converting to GL_RGBA would make
any difference?
Actually, I think it does! I seem to recall that some graphics cards have
problems with 24bit alignment. That brings back some recollections.
Aha. I will adjust that this evening, then. RGB is how it comes from
libjpeg, at least by default. I’ll see whether it’s possible to get
libjpeg to give us 32-bit aligned output instead. If not, I can just
twiddle the scanlines as they come.
Michael, here’s a diff to apply to load_jpeg.cpp which will modify it (in an ugly way) to load JPEGs as RGBA instead of RGB. If you recompile with this, does your “blue” problem go away?
If you’re compiling on and for Linux, you’ll most likely already
have both OpenGL and GLUT. If you’re compiling on Windows, I’m not
exactly sure where you get the headers you need; I haven’t had
time to look yet. I don’t have access to a Mac, but I suspect that
the Mac situation is similar to Linux.
I tried to use Visual Studio 2010 C++ Express to see how far I can go. I was able to compile all the way but whenever I ran it, I would run into a crash at the following line inside map.cpp:
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
I got the GLee SDK from opengl.org/sdk/libs/GLee/ and recompiled it with VS2010. I got the Win32 version of freeglut. I even recompiled its source but the outcome was the same. I got the jpeglib versions 8c and 6b but both got me the same error. I am curious to see if you also run into the same error with MinGW.
Any idea why that would cause a crash? The vertex VBO has 4 vertices in
it, so I don’t think the problem is that we’re overrunning the end of
the VBO.
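For reference, the pattern under discussion looks roughly like this (a sketch, not the demo’s actual map.cpp). With only 4 vertices in the buffer, a crash on the draw call itself often means the extension function pointers were never loaded, rather than a buffer overrun:

```cpp
// Sketch of the VBO + glDrawArrays pattern (assumed, not the demo's code).
// If glGenBuffersARB and friends are extension pointers that were never
// initialized (GLee/GLEW not set up), the first call through them crashes.
GLfloat quad[8] = { 0,0,  1,0,  1,1,  0,1 };   // 4 vertices, 2 floats each

GLuint vbo;
glGenBuffersARB(1, &vbo);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(quad), quad, GL_STATIC_DRAW_ARB);

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, 0);            // read from the bound VBO
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);           // draw the 4-vertex quad
```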
I haven’t been able to successfully compile with MinGW yet, due to my
MinGW headers not having all the extension definitions that my Linux
ones do. I’m just about to try GLEW (glew.sourceforge.net/) for
handling that, so I’ll let you know how it turns out.
Huh. On switching to GLEW, I’m now getting a segfault on the first
glGenBuffersARB() call. No idea why.
Aha, got it. I was neglecting to initialize GLEW before the call to
glGenBuffers().
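For anyone hitting the same thing: glewInit() has to run after a GL context exists (GLUT creates one with the window) but before any extension entry point is called. The order is roughly:

```cpp
#include <GL/glew.h>   // must be included before other GL headers
#include <GL/glut.h>

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("demo");      // creates the GL context GLEW needs

    if (glewInit() != GLEW_OK)     // fills in the extension function pointers
        return 1;

    // Only now are calls like glGenBuffersARB() safe to make.
    GLuint vbo;
    glGenBuffersARB(1, &vbo);

    glutMainLoop();
}
```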