Sunday, January 3, 2010
Getting glGetString To Return Something Useful
In OpenGL, glGetString() is the API for querying the configuration of the system your code is running on, such as the OpenGL version or which OpenGL extensions are available.
However, if you call glGetString() before you have a current GL context, it will just return a NULL pointer, no matter which configuration string you're querying.
If you're working in GLX, the solution is to call glXMakeCurrent() before calling glGetString(). That makes a context current, and you'll start getting strings back.
Unfortunately, most GLX tutorials and sample code either assume you know this, or use a utility library like GLUT that solves the problem for you without telling you how. After reading the man pages, this solution seems pretty obvious in retrospect. But as far as I can tell, it's only clearly spelled out in one place on the Net - until now. (That page also tells what to do on Windows.)
Saturday, September 19, 2009
Linux Builds Part II: The Acceleration Incantation
Watch a system monitor while a big build runs and two things stand out. First, the CPU usage will jump up and down, and so will the disk activity - but you'll rarely see both high at the same time. That's because the compiler typically operates in three phases on a source file:
- It reads the source file and all the headers. This is disk-intensive but not CPU-intensive.
- Then it does all the usual compiling stuff like lexical analysis and parsing and code generation and optimizing. This makes heavy use of the CPU and RAM, but doesn't hit the hard disk much.
- Then it writes the object file out to disk. Again, the disk is very busy, and the CPU just waits around.
The answer to this is parallel builds. Common build tools like make and jam offer command line options to compile multiple files in parallel, using separate compiler instances in separate processes. That way, if one compiler process is waiting for the disk, the Linux kernel will give the CPU to another compiler process that's waiting for the CPU. Even on a single-CPU, single-core computer, a parallel build will make better use of the system and speed things up.
Second, if you're running on a multi-CPU or multi-core system and not doing much else, even at its peak, CPU usage won't peg out at the top of the panel. That's because builds are typically sequential, so they only use one core in one CPU, and any other compute power you have is sitting idle. If you could make use of those other CPUs/cores, things would go faster. And again, the answer is parallel builds.
Fortunately, the major C/C++ build systems support parallel builds, including GNU make, jam, and SCons. In particular, GNU make and jam both offer the "-j X" parameter, where X is the number of parallel jobs to run at the same time (for example, make -j 4 runs up to four compiles at once).
Here's what happens to build time as you vary the number of parallel compiles:
- When running with one compile at a time, sequentially, system resources are poorly utilized, so a build takes a long time.
- As the number of compiles running in parallel increases, the wall time for the build drops, until you hit a minimum. This level of parallelization provides the balanced utilization of CPU, disk, and memory we're looking for. We'll call this number of parallel compiles N.
- As the number of compiles passes N, the compile processes will increasingly contend for system resources and become blocked, so the build time will rise a bit.
- Then as the number of parallel compiles continues to rise, more and more of the compile processes will be blocked at any time, but roughly N of them will still be operating efficiently. So the build time will flatten out, and asymptotically approach some limit.
A Brief Aside On Significance
In any physical system, there's always some variation in measurements, and the same is true of computer benchmarks. So an important question in this kind of experimentation is: when you see a difference, is it meaningful or just noise?
To answer that, I ran parallelized benchmarks on Valentine (a two-core Sony laptop) and Godzilla (an eight-core Mac Pro). In each case, the Linux kernel was built twenty times with the same settings. Here are the results:
- Valentine, cached build, j=3. Average 335.91 seconds, standard deviation (sigma) 2.15, or 0.64% of the average.
- Valentine, non-cached build, j=3. Average 340.09 seconds, standard deviation 4.22, or 1.24% of the average.
- Godzilla, non-cached build, j=12. Average 67.82 seconds, standard deviation 0.54, or 0.79% of the average.
Linux Build Optimization I: The Need for Speed
If your compilation process takes more than a few seconds, getting the latest and greatest computer is going to save you time. If compiling takes even 15 seconds, programmers will get bored while the compiler runs and switch over to reading The Onion, which will suck them in and kill hours of productivity.

What can we do? To find out where the time goes, I put together a benchmark:
- Building the Linux 2.6.30.3 kernel using the default x86 configuration. (If you look at the various components used for the distcc benchmarks, the Linux kernel seems like a pretty representative large set of code.)
- Running under Ubuntu 9.04 using gcc version 4.3.3.
- The benchmark is the first thing done after rebooting the computer, with nothing but a couple of terminal windows running.
- The benchmark script, by default, avoids unrealistically fast builds due to disk caching from previous passes. (It does this by unpackaging the kernel tarball on every build pass.) It also has options to enable disk caching and to adjust build settings such as parallelization.
Major Factors That Determine Build Speed
- The CPU calculates things and makes decisions.
- The hard drive reads and writes files.
- RAM holds the data being read and written.
Sunday, May 24, 2009
Installing FogBugz on Ubuntu 9.04
- apache2
- php5
- php5-cli
- php5-imap
- php5-dev
- mysql-server
- mysql-client
- curl
- php5-mysql
- php-pear
- mono-gmcs
- mono-devel
- php5-curl
- mysql-query-browser
- mysql-admin
- Closed the web browser page that was telling me to call Fog Creek.
- Shut down mysql.
- Did a cp -pr from the backup into my new installation.
- Did a chown/chgrp of the copied files to mysql.
- Restarted mysql.
Wednesday, December 31, 2008
Curing Ubuntu's Black Screen of Death on a Mac Pro
- Boot from the Leopard install DVD
- Early in the OS X install process, use Disk Utility to create a small HFS Plus partition and a big partition I would later snuff for use by Ubuntu, then install Leopard.
- Install rEFIt.
- Use a GParted LiveCD to turn the partition from step 2 into free space.
- Install Ubuntu Intrepid from a LiveCD.
- From the Mac System Profiler, determine the manufacturer and details of your video card. (Mine was an ATI Radeon HD 2600 with 256 MB RAM.)
- Start the installation from the Ubuntu Alternate CD, which uses an old DOS-style text mode for its user interface. (If you use the LiveCD or LiveDVD, it will try to go into graphics mode and you're toast.)
- Zap that big partition you made as part of the installation process. You may also have to manually specify the size of your swap space. You can look up details on how to do this using the usual search engines.
- When you reboot at the end of your Ubuntu install, you'll get the black screen. Press command-control-F1 to switch to a command prompt. (The meta-keys may be different if you picked a non-Macintosh keyboard during install.)
- Use sudo apt-get to update everything, and reboot.
- When the screen goes black, use the key combo again to get a command prompt. Use sudo apt-get to install envyng-gtk. Then do sudo envyng -t.
- Pick the manufacturer of your video card from the envyng list, then reboot.
Wednesday, December 3, 2008
Triple-Boot Sony Vaio VGN-AR Notebook - Part II: How I Did It
I finally got my Sony Vaio VGN-AR290G laptop to triple-boot XP, Vista, and Ubuntu. Here's the general gist of how I did it.
This is not a detailed how-to - it's pretty general, and only includes the major steps but not the details. So if you decide to do this, you should read it fully first, and then look up any specific steps you're not sure about. You also need to be sure to read this article first.
- Make sure the hard drives are not coupled as RAID. The instructions for this are in my previous post.
- Put a recent GParted boot CD in the CD-ROM drive and reboot. (I used version 0.3.9-4.) Use GParted to delete all partitions from both drives. Don't create any new ones. Exit GParted.
- On an existing XP installation, go to the AR290G support page and download the one file under the RAID heading - the file description is "Original - Intel® RAID Driver". You don't need anything else from there right now.
- Make an empty directory somewhere on the XP installation machine. Run the program you downloaded in step 4. It will want to install files onto a floppy, but change it to install the files into the directory you made. Those files are the driver files you'll slipstream.
- On the same existing XP installation, go to the nLite home page and download the nLite slipstreaming utility. Get an original Windows XP SP2 installation disk (I made mine from the MSDN Professional DVD). Use nLite, per the instructions from the site I told you to read above, to create a slipstreamed XP installer ISO image.
- Burn the ISO image onto a blank CD-ROM. This is your slipstreamed XP installer CD. I used a MacBook to do this; the Mac OS Disk Utility works great for this and comes for free with Mac OS. There are other utilities to do this on Windows (e.g. Nero) and Linux (e.g. Brasero) that will work fine as long as you have a compatible CD burner.
- Put the slipstreamed XP installer CD into the Vaio and reboot. It will eventually tell you there are no installable disk partitions, and give you the choice of formatting one. Format a partition on the first drive that only uses part of the drive, and install there. (My two drives are each about 93 GB, so I made this partition 40 GB). Finish the XP install, and reboot. Verify you can boot from your XP installation.
- Put a Vista installer CD or DVD into the Vaio and reboot it. (I used a Vista Ultimate DVD that came with an MSDN Professional subscription.) Tell it to install into the unformatted space on the first drive. (This is the leftover space on the first drive from the last step.) Finish the installation, and verify the Vista bootloader appears and lets you boot either from Vista or an "earlier" version of Windows (your XP installation). Verify you can boot into each Windows version.
- Put an Ubuntu 8.10 Alternate Install CD into the Vaio and reboot it. Tell it to install into the largest unformatted space, which will be the entire second drive. Finish the installation. In one of the last screens, confirm that GRUB should let you boot into either Ubuntu or the Vista bootloader.
- After removing the Ubuntu CD, verify you can boot into Ubuntu, Windows XP via the Vista bootloader, and Windows Vista via the Vista bootloader.
- Now you're triple-booting, but there's one more important step. On a computer other than the Vaio (the XP system from step 3 will work fine), go back to the AR290G support page and download installers for video and networking drivers, and any other drivers or programs you think you'll need. Copy them onto a USB flash drive, then boot into XP on your Vaio, attach the USB flash drive, and install the drivers and programs you downloaded.
You're done!
Questions and Answers
Q: Why did you decouple the RAID?
A: I had tried this with RAID enabled, and was able to get either XP or Ubuntu working, but not both at the same time. Also, my main interest in this machine is doing compiles. I ran some benchmarks, and saw absolutely zero benefit to using RAID.
Q: What? Using RAID provided no benefit?
A: Yes. I set the system up as RAID 0, installed Ubuntu, and compiled gcc 4.3.2. Then I decoupled the drives, re-installed Ubuntu, and compiled gcc 4.3.2. And I got the following results:
- Extract (tar xf) the gcc tarball: RAID0 2.5s, No RAID 10s
- ./configure: RAID0 5s, No RAID 6s
- make (sequential compile): RAID0 1h 26m, No RAID 1h 26m
- make -j 8 (parallel compile): RAID0 13m 22s, No RAID 12m 50s
Q: That's weird. Why do you think RAID doesn't accelerate compiles?
A: Because gcc is not disk-bound, it's CPU-bound. On top of that, I believe the RAID in the Vaio is "fake RAID", which uses the CPU to implement some of the RAID capabilities. So when you do parallel compiles and really beat up the CPU, the extra CPU overhead to implement the RAID actually slows down the compile. (It's possible other compilers, such as Visual Studio, might benefit from a RAID 0 configuration, but that's just a guess.)
Q: Why did you use GParted and Ubuntu Alternate Install?
A: I had them handy. I suspect that I could have used the Ubuntu LiveCD. Then in step 3 I would have booted from the LiveCD and used the Partition Editor, and in step 12 I would have installed from the LiveCD. But I don't know for sure this will work.
Q: Why did you install all those drivers into XP at step 13? Why not slipstream them in step 8?
A: I couldn't figure out an easy way to slipstream them, and this way works fine.
Tuesday, November 18, 2008
Triple-Boot Sony Vaio VGN-AR Notebook - Part I: RAID Uncoupling
So far, not a problem - but here's the kicker: it also needs a PCMCIA Type II (CardBus) slot. I occasionally code for a dated interface which, for laptops, is only available in PCMCIA. Unfortunately, PCMCIA has almost entirely been replaced by ExpressCard. As far as I can tell, in late 2008 you can get it on a few expensive Dells, a few of Acer's low-end Extensas - and Sony's Vaio VGN-AR series.
These Vaios had a lot of other things I liked, including a nice big 17" screen, and dual SATA drives. Woohoo - different drives for different OS's!
So I got a lightly used VGN-AR290G from eBay, and then found something out...
Those dual hard drives? They're hooked up to an Intel hardware RAID controller, and by default are configured as RAID 1. The upshot?
- You can install Vista.
- You can also install the custom, pre-slipstreamed version of XP Media Center Edition that comes with it - but that's not what I need for development.
- You can't install plain old XP Pro, because it doesn't see any disk drives.
- Ubuntu sees both disk drives, and appears to install, but doesn't boot. That's because Ubuntu attempts to install as non-RAID, and then when you boot, the GRUB bootloader is all confused.
The fix is to switch the drives out of RAID mode. And that, as usual, is easier said than done. But to save you the time I spent figuring it out, here's the recipe:
- Reboot the laptop.
- When the Sony logo comes on the screen, hold down the F2 key. You can let up when the laptop starts beeping at you. After a few seconds, this will bring you into the BIOS utility.
- Go to BIOS utility's Advanced tab and switch RAID Configuration to Show (if it's not set to that already). Then save the BIOS configuration and exit. The laptop should now reboot.
- You should now see more boot messages, and eventually get to a screen that shows you the disk configuration. During the few seconds this is up, hold down the Ctrl and I keys at the same time. This should switch the computer into the RAID editor utility.
- In the RAID utility, from main menu, choose the option to Reset Disks to Non-RAID. Save the configuration and exit.
My guess is that I'll have to slipstream the RAID driver onto the XP Pro install CD, but that's a job (and a post) for another day.