Tuesday, August 16, 2011

Making Mac Installers

Apple's PackageMaker is buggy, buggy, buggy. Also poorly documented. And buggy.

PackageMaker is Cupertino's preferred tool for making installers for Mac software, and comes in two flavors: the PackageMaker GUI tool, and the packagemaker command-line tool.

Having used many awkward installer tools on Mac and Windows, I wasn't expecting a bed of roses. But what I got, others have already described eloquently here, here, and here, as well as in many other places on Stack Overflow and Apple's installer-dev list.

In short, the PackageMaker GUI is the worst software I've seen come out of Apple since I began programming for the Mac in 1984. It crashes, it creates installers that crash, and it may take values you set and silently reset them to their defaults.

The command-line tool is much better, and I even used it for a while. But if you need more than a simple one-item install, the documentation is scant and the learning curve steep, requiring time-consuming experimentation.

What To Do?

Here are the approaches I'm currently finding effective for making Mac installers:
  • One-off installers. These are installers which you will build once and never have to build again. In this case, the PackageMaker GUI will probably be fine if you're willing to tolerate the occasional crash. However, if you move your PackageMaker project to a different directory or a different Mac, it will silently reset options in the project and you'll get all kinds of unpleasant surprises come time to run installers you build from that project.
  • Installing one executable, and building the installer from the command line. This includes building the installer via a shell command inside Xcode. I've done this, and the packagemaker command-line tool works fine. I pretty much just got all the info from Christopher Sexton's Codeography post, particularly the command at the end preceded by "Run the package maker build on the command line". (A sketch of that kind of invocation follows this list.)
  • Everything else. Use one of the two excellent tools Stéphane Sudre has written. Iceberg is aimed at old-style "bundle" installers, and is what I'm currently using. Packages is aimed at newer "flat" installers - I haven't tried it as of this writing, but I have high hopes for it.
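
For reference, a one-item command-line build follows the pattern below. This is only a sketch, not Sexton's exact command: the paths, bundle name, and package identifier are placeholders, and the packagemaker location assumes an Xcode 3-era /Developer install.

# Stage the payload exactly as it should land on the target disk...
mkdir -p pkgroot/Applications
cp -R build/Release/MyApp.app pkgroot/Applications/

# ...then build a package from that root.
/Developer/usr/bin/packagemaker \
    --root pkgroot \
    --id com.example.myapp.pkg \
    --title "MyApp" \
    --version 1.0 \
    --out MyApp.pkg
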
I hope that by the time you read this, Apple will have turned PackageMaker into a well-documented, stable system. But if you have troubles with PackageMaker at all, you can save yourself a lot of stress by immediately looking into alternatives.

Sunday, October 17, 2010

Redmine: Creating a new custom query

As I mentioned in the last post, Redmine's documentation isn't very thorough. Here's how to create a new custom query in version 1.0.
  1. Log into your project.
  2. Select the Issues tab.
  3. Open the disclosure triangle for Filters, and set the status and any other filters you want.
  4. Open the disclosure triangle for Options, and set the columns and grouping you want.
  5. Click on the Apply button above the results table.
  6. If you aren't satisfied with what you see, change the filters/columns/grouping, and press the Apply button again.
  7. When you're satisfied with what you have, press the Save button above the results table. This will take you to the New query page.
  8. Enter a name for the query in the Name field. If you like, you can make the query public, and make it available either for the current project or all projects.
  9. Make any changes you want to the sort order. (You can also change the grouping, filters, and columns, but presumably you set those up the way you wanted before.)
  10. Press the Save button at the bottom of the page.

Subtasks: Dumping FogBugz for Redmine

As you'll note in an earlier post, my wife and I had been using FogBugz for issue tracking – but I was dissatisfied with it. As I remarked in that post, the installation instructions and general support for any server platform other than Windows are mediocre. And in version 6 (which we were running), the wiki didn't support Safari.

But what was really annoying was that FogBugz 6 didn't support subtasks.

I personally find subtasks incredibly useful for organizing work! I like to take a big task – such as a major feature that may take a week or two – and break it down into sub-tasks, and sub-sub-tasks, and so on, to a level of granularity where each bottom-level task is clear and easy to complete in a few hours or less. Then I can just do them, and tick them off, and when all the lower-level tasks are done, the top-level task is done.

So when I upgraded our version-control and issue tracking server to Ubuntu Lucid, I thought, "I can get subtasks by upgrading FogBugz to version 7. Or I can look around and see if there are any alternatives."

But when I started looking around, I was amazed to see that, despite how many requests you'll see for subtask support, few issue tracking systems do it well! Trac doesn't do it at all. And as I found out when working at Kno, Jira only offers one level of subtasks – which isn't enough for me as a lone developer, let alone for what we were trying to do at Kno.

It was particularly astonishing to me because subtasks are a tree, and for a competent computer scientist, trees should be trivial. All I can figure is that the people who originally designed these systems didn't even consider the need for hierarchy, and then found that extending it was difficult. And even that I find odd, because most of them are based on relational databases, and adding a "parent task" column to the "task" table shouldn't be hard. So they must have put some unusual roadblocks in their own way.

And then, like the sun rising after a storm, I found... Redmine. It's open source (and free as in beer). It has unlimited levels of sub-tasks. It has custom fields. It has a wiki that works with every browser I've tried it with. It has forums. It's easier to install on Ubuntu than FogBugz – even from source (which I did). I just love it.

Well, except that (as of Oct. 10, 2010) its documentation is very incomplete, and some operations can only be done in a clunky way. But at least you can do them! And I'll explain one thing I figured out how to do in the next post.

Sunday, January 3, 2010

Getting glGetString To Return Something Useful

Here's a small-but-useful factoid.

In OpenGL, glGetString() is the API to query the configuration of the system your code is running on, like the OpenGL version, or which OpenGL extensions are available.

However, if you call glGetString() before you have a current GL context, it will just return a NULL (nil) pointer, no matter which configuration string you're querying.

If you're working in GLX, the solution is to call glXMakeCurrent() before calling glGetString(). That makes a GL context current, and you'll start getting strings back.
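
Here's a minimal sketch of the idea (not taken from any particular tutorial): open the default X display, create a tiny window and GLX context, make the context current, then query the strings. Build with something like cc glinfo.c -o glinfo -lGL -lX11; the file name and the choice of visual attributes are just illustrative.

/* Make a GLX context current so glGetString() has something to report on. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) { fprintf(stderr, "no suitable visual\n"); return 1; }

    /* At this point glGetString(GL_VERSION) would still return NULL. */

    Colormap cmap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                    vi->visual, AllocNone);
    XSetWindowAttributes swa;
    swa.colormap = cmap;
    swa.border_pixel = 0;
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 1, 1,
                               0, vi->depth, InputOutput, vi->visual,
                               CWBorderPixel | CWColormap, &swa);
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    glXMakeCurrent(dpy, win, ctx);

    /* Now there is a current context, so the queries return real strings. */
    printf("GL_VERSION:    %s\n", (const char *)glGetString(GL_VERSION));
    printf("GL_RENDERER:   %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_EXTENSIONS: %s\n", (const char *)glGetString(GL_EXTENSIONS));

    glXMakeCurrent(dpy, None, NULL);
    glXDestroyContext(dpy, ctx);
    XDestroyWindow(dpy, win);
    XCloseDisplay(dpy);
    return 0;
}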

Unfortunately, most GLX tutorials and sample code either assume you know this, or use a utility library like GLUT that solves the problem for you without telling you how. After reading the man pages, this solution seems pretty obvious in retrospect. But as far as I can tell, it's only clearly spelled out in one place on the Net - until now. (That page also tells what to do on Windows.)

Saturday, September 19, 2009

Linux Builds Part II: The Acceleration Incantation

Ubuntu offers a system monitor that can show graphs of how system resources are being used. It's instructive to turn all the graphs on and build a large project without fiddling with anything. You'll see some interesting things.

First, the CPU usage will jump up and down, and so will the disk activity - but you'll rarely see them both high at the same time. That's because the compiler typically operates in three phases on a source file:
  1. It reads the source file and all the headers. This is disk-intensive but not CPU-intensive.
  2. Then it does all the usual compiling stuff like lexical analysis and parsing and code generation and optimizing. This makes heavy use of the CPU and RAM, but doesn't hit the hard disk much.
  3. Then it writes the object file out to disk. Again, the disk is very busy, and the CPU just waits around.
So at any one time, the compiler is making good use of the CPU or the disk, but not both. If you could keep them both busy, things would go faster.

The answer to this is parallel builds. Common build tools like make and jam offer command line options to compile multiple files in parallel, using separate compiler instances in separate processes. That way, if one compiler process is waiting for the disk, the Linux kernel will give the CPU to another compiler process that's waiting for the CPU. Even on a single-CPU, single-core computer, a parallel build will make better use of the system and speed things up.

Second, if you're running on a multi-CPU or multi-core system and not doing much else, even at its peak, CPU usage won't peg out at the top of the panel. That's because builds are typically sequential, so they only use one core in one CPU, and any other compute power you have is sitting idle. If you could make use of those other CPUs/cores, things would go faster. And again, the answer is parallel builds.

Fortunately, the major C/C++ build systems support parallel builds, including GNU make, jam, and SCons. In particular, GNU make and jam both offer the "-j X" parameter, where X is the number of parallel jobs to compile at the same time.
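
For example (the job count of 3 is only an illustration; picking a good number is what the experiments below are about):

# Sequential build: one compile at a time, so the CPU or the disk is often idle.
make

# Parallel build: up to three compile jobs at once, so one job's CPU work
# can overlap another job's disk I/O.
make -j 3
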
The graph above shows what I would generally expect from parallel builds on a particular hardware configuration, reading from left to right.
  • When running with one compile at a time, sequentially, system resources are poorly utilized, so a build takes a long time.
  • As the number of compiles running in parallel increases, the wall time for the build drops, until you hit a minimum. This level of parallelization provides the balanced utilization of CPU, disk, and memory we're looking for. We'll call this number of parallel compiles N.
  • As the number of compiles passes N, the compile processes will increasingly contend for system resources and become blocked, so the build time will rise a bit.
  • Then as the number of parallel compiles continues to rise, more and more of the compile processes will be blocked at any time, but roughly N of them will still be operating efficiently. So the build time will flatten out, and asymptotically approach some limit.
To anticipate later posts: that is roughly what you actually see, except that the rise after the minimum is tiny, often to the point where the times in the flat tail are only slightly higher than the minimum time.

A Brief Aside On Significance

In any physical system, there's always some variation in measurements, and the same is true of computer benchmarks. So an important question in this kind of experimentation is: when you see a difference, is it meaningful or just noise?

To answer that, I ran parallelized benchmarks on Valentine (a two-core Sony laptop) and Godzilla (an eight-core Mac Pro). In each case, the Linux kernel was built twenty times with the same settings. Here are the results:
  • Valentine, cached build, j=3. Average 335.91 seconds, standard deviation (sigma) 2.15, or 0.64% of the average.
  • Valentine, non-cached build, j=3. Average 340.09 seconds, standard deviation 4.22, or 1.24% of the average.
  • Godzilla, non-cached build, j=12. Average 67.82 seconds, standard deviation 0.54, or 0.79% of the average.
Generally speaking, a difference of one sigma or less is probably not significant, while a difference of two sigma or more probably is. Since sigma in these runs is roughly 1% of the average, two sigma is roughly 2%, so I'll use the rule of thumb that differences between individual values of 2% or less are probably not significant and may easily be due to experimental error (noise).

Linux Build Optimization I: The Need for Speed

gcc may be many things, but it most certainly is slow. Anybody who's worked with a really fast C/C++ compiler - like the late, lamented Metrowerks CodeWarrior for Mac OS and Windows - will be happy to tell you that.

But build speed matters a lot! As Joel Spolsky points out:
If your compilation process takes more than a few seconds, getting the latest and greatest computer is going to save you time. If compiling takes even 15 seconds, programmers will get bored while the compiler runs and switch over to reading The Onion, which will suck them in and kill hours of productivity.
I currently work with a team on a Linux-based system involving about twenty million lines of C/C++. Building everything using the default settings on our normal development hardware takes hours. So if someone changes a file down in the bowels of some low-level component half the system relies on, here's how we spend our time:



What can we do?

Here are some options for speeding up the build cycle. I think they would all be great if everybody could do them, but only one of them is universally applicable:

Change to a faster compiler that will do the job
That would be lovely if there were one - but unfortunately, the only easily available option I know of for replacing gcc is the combination of LLVM and clang. I have high hopes for that compiler system someday, but at present, real-world benchmarks don't indicate it's hugely faster at compiling than gcc, and clang doesn't support critical C++ features many programmers need.

Throw money at hardware
This would also be lovely if everyone were in a position to do that. But if you just don't have much spare money, this is a non-starter. And even if you do have a fair bit of change to spend on hardware, what I plan to discuss will still help you improve your use of it.

Be clever
Now we get to the meat of these posts: taking your existing gcc compiler, and your existing hardware, and tweaking things so that the build cycle is quicker. The best case would be without spending a dime, and the worst case would involve spending very little money.


The Ground Rules

These posts will largely consist of a series of experiments. Each experiment will involve applying a technique that might accelerate builds, benchmarking it, and analyzing the results.

Unless otherwise specified, the tests will involve:
  • Building the Linux 2.6.30.3 kernel using the default x86 configuration. (If you look at the various components used for the distcc benchmarks, the Linux kernel seems like a pretty representative large set of code.)
  • Running under Ubuntu 9.04 using gcc version 4.3.3.
  • The benchmark is the first thing done after rebooting the computer, with nothing but a couple of terminal windows running.
  • The benchmark script, by default, avoids unrealistically fast builds due to disk caching from previous passes. (It does this by unpackaging the kernel tarball on every build pass; a sketch appears below.) It also has options to allow disk caching and to adjust build settings such as parallelization.
The results are all based on "wall time" - what you'd see on a clock on the wall - because I mainly care about not wasting my time waiting for builds.
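
The script itself isn't reproduced here, but a single non-cached pass amounts to roughly the following commands; the tarball name matches the kernel version above, while the job count and the output redirections are illustrative.

# Unpack a fresh kernel tree so earlier passes can't warm the disk cache.
rm -rf linux-2.6.30.3
tar xjf linux-2.6.30.3.tar.bz2
cd linux-2.6.30.3
# Use the default configuration, then time a parallel build.
make defconfig > /dev/null
time make -j 3 > /dev/null    # the "real" figure is the wall time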


Major Factors That Determine Build Speed

When you're doing a build, the main things that happen are:
  • The CPU calculates things and makes decisions
  • The hard drive reads and writes files
  • Memory-based data is read from and written to RAM.
The fastest build you could do on a particular set of hardware would strike a balance among the CPU, hard drive, and RAM, so that all of them are constantly busy, with none of them ever waiting on any of the others.

Realistically, you will never achieve that 100% utilization on all three components. Even if you adjusted the system so that file foo.c would compile at 100% utilization on all three, if file bar.c used more or larger header files, compiling it might not achieve 100% CPU utilization because the system would spend more time reading header files from the disk, and the CPU would have to wait for that. Also, settings that are optimal for the compiler might not be optimal for the linker. So all we can aim at is an overall good build time.

There are other resources one can apply to compiles which I also plan to address - in particular, underutilized CPU horsepower out on the network via distcc.

So on to the next post... and I would love to get feedback and suggestions!

Sunday, May 24, 2009

Installing FogBugz on Ubuntu 9.04

Joel Spolsky is a notable netizen in the technology industry for a variety of reasons, including his blog, Joel on Software, his articles for Inc. magazine, his speeches at many industry conferences, and co-founding the Stack Overflow programmers' website.

Joel's day job is as CEO of Fog Creek Software, and Fog Creek's flagship product is FogBugz, an inexpensive web-based issue tracking system with some other nice features like an integrated wiki.

When my wife and I switched from Mac OS 9 to Mac OS X for day-to-day productivity work, we'd been using Seapine's TestTrack for bug tracking, but it became less and less viable for us. So she did a search for alternatives, and liked FogBugz the best. The commercial bug tracking systems were pretty expensive, and there weren't any open-source equivalents with good documentation at the time.

FogBugz is still quite reasonable for a small team that doesn't have IT staff: it's pretty cheap, and pretty good, and pretty functional, and pretty bug-free, and doesn't take a lot of administration or maintenance. Some of the open-source alternatives like Trac have probably caught up with it on features and ease of installation and administration. But FogBugz costs only $36.50 per programmer per year for a maintenance contract, so it's not worth it to us to switch.

However, in my opinion, FogBugz has one big flaw. While it runs on Windows, Mac OS X, Linux, and Unix servers, the FogBugz documentation and support are heavily Windows-oriented. If you want to install on a non-Windows platform, Fog Creek's instructions are decidedly not turnkey and not updated even yearly. Their tech support people are nice, and happy to transfer licenses or point you to hard-to-find documentation URLs, but if you're having an unusual problem on a non-Windows platform that their documentation doesn't cover, they are of limited help.

Anyway, I decided to upgrade our FogBugz server from an older version of Ubuntu to 9.04 (Jaunty Jackalope), and did so by wiping the hard drive and re-installing everything, including FogBugz. So here are the general steps, after you've installed Ubuntu. This also is not exactly turnkey - you should be a reasonably knowledgeable Ubuntu user and know when to sudo things, for instance - but it should help keep you from running into roadblocks.


Review a few URLs

You should at least skim these, and may want to print them out.

Here are some Fog Creek pages, but don't take them for gospel truth. I'm writing this post to correct and expand upon them: Getting Your Unix Server Ready For FogBugz and Unix System Requirements.

You should also look at ApacheMySQLPHP on the Ubuntu site, which has some slightly dated background on other components you'll need to install and configure.


Install a LAMP stack

The FogBugz documentation's list of packages to install is very dated and incomplete. You don't need to install mono, because it's included with the Ubuntu 9.04 desktop. But you do need to install a lot of other packages so that the Apache, MySQL, and PHP parts of a LAMP stack will work correctly with FogBugz. Here's what I eventually wound up installing via Synaptic (an equivalent apt-get command follows these lists):
  • apache2
  • php5
  • php5-cli
  • php5-imap
  • php5-dev
  • mysql-server
  • mysql-client
  • curl
  • php5-mysql
  • php-pear
  • mono-gmcs
  • mono-devel
  • php5-curl
When you install mysql, you'll have to give it an administrator account name and password. Remember these! You'll need them later.

And then these two are just handy:
  • mysql-query-browser
  • mysql-admin
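
If you prefer the command line to Synaptic, the same packages (including the two handy extras) can be installed in one go:

sudo apt-get install apache2 php5 php5-cli php5-imap php5-dev \
    mysql-server mysql-client curl php5-mysql php-pear \
    mono-gmcs mono-devel php5-curl mysql-query-browser mysql-admin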

Configure Networking, Apache, and PHP

If your server has a static IP address, edit /etc/hosts, and make sure your local and fully-qualified machine names (both foo and foo.example.com) are associated with that IP address.
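
For example, with a made-up address and hostname, the relevant /etc/hosts line would look like this:

192.168.1.50    foo.example.com    foo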

Edit /etc/apache2/httpd.conf and add a server name line:
ServerName foo.example.com

Make sure all the PHP modules are enabled, and then restart the Apache web server:
sudo a2enmod php5
sudo /etc/init.d/apache2 restart

Set up a test page for your PHP extensions: edit /var/www/test.php and fill it with the PHP test info from here. Then open http://localhost/test.php in Firefox and make sure the XML, imap, mysql, and iconv lines all have a 1 at the end.
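
You can also do a quick check from a terminal; the command-line PHP's module list usually mirrors what Apache's PHP has loaded, so all four extensions should show up:

php -m | egrep -i '^(xml|imap|mysql|iconv)$'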


Install eAccelerator

eAccelerator is a caching system for PHP that FogBugz highly recommends. I do, too - when you're working with the FogBugz database from a client machine over a network, if you don't have eAccelerator installed, you'll be going on a lot of coffee breaks.

Unfortunately, Ubuntu doesn't supply an eAccelerator package, so you have to build it from sources. The official page on how to do this is here, but I didn't find it very helpful on Ubuntu. This page is a lot more accurate and detailed for Ubuntu installation.
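
The build follows the usual PHP-extension pattern, roughly as below; the exact configure options, php.ini changes, and cache-directory setup are what the linked guides cover, so treat this only as the general shape.

cd eaccelerator-x.y.z     # unpacked source directory; version varies
phpize                    # provided by the php5-dev package installed above
./configure               # the linked guide lists the exact options to pass
make
sudo make install
# Then add the extension settings to php.ini and create the cache
# directory as the guide describes, and restart Apache.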


Configure MySQL

First, you'll need to get a MySQL prompt. You did write down your administrator name and password above, right?
mysql -u your_administrator_name -p
and enter the password when prompted.

Then, you'll need to follow the instructions here.

I populated my FogBugz database by a simple directory copy of an old version, so I didn't run into this today. But from an earlier installation, I knew that FogBugz is not compatible with the most recent MySQL password scheme. That means if you're doing an initial installation or you're populating the database via export/import, you'll have to follow some further instructions to tell MySQL to use an older password scheme.
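
I won't reproduce those instructions here, but the usual fix at the time was along these lines; the fogbugz account name, the localhost host, and the password are placeholders, so check the linked instructions for your setup.

# Give the FogBugz database user an old-format password hash.
mysql -u your_administrator_name -p \
  -e "SET PASSWORD FOR 'fogbugz'@'localhost' = OLD_PASSWORD('your_password');"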


Install FogBugz

Download and unpackage the FogBugz tarball, and follow the Unix Setup Steps instructions. Contrary to those instructions, I ran install.sh as superuser. When you run the install, it will ask if you want to install various Pear files; just type "y" for all of them.

Eventually, you'll get to a web-based FogBugz configuration screen. You still do remember that MySQL administrator account information, right? Here's what I used:
Server: localhost
...
Database name: fogbugz
FogBugz user account: fogbugz

Then the web screen will ask you for your Fog Creek order number and email address, and try to validate it with the Fog Creek license server. If you're doing a new installation, after this, you should be up and running.

I was transferring the database and license from a previous installation. That confused the Fog Creek license server and it wanted me to call in to get my installation count incremented. However, I had done a straight backup of /var/lib/mysql/fogbugz on the older installation, so I:
  1. Closed the web browser page that was telling me to call Fog Creek.
  2. Shut down mysql
  3. Did a cp -pr from the backup into my new installation
  4. Did a chown/chgrp of the copied files to mysql, and
  5. Restarted mysql
And at that point, FogBugz was back in operation.
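
In shell terms, that restore amounted to something like the following; the backup path is hypothetical.

sudo /etc/init.d/mysql stop
sudo cp -pr /path/to/backup/fogbugz /var/lib/mysql/fogbugz
sudo chown -R mysql:mysql /var/lib/mysql/fogbugz
sudo /etc/init.d/mysql start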

It all took a few hours, and would certainly have been a lot quicker if I'd had this post!

Update (16 July 2009)

David Llopis remarked in the comments: I think if you install "apache2", Ubuntu defaults to installing "apache2-mpm-worker" rather than the "apache2-mpm-prefork" that you should use.

You do need prefork for PHP to run correctly, and David is correct if you simply go into Synaptic, select the "apache2" package, and hit the "Apply" button. However, if you don't hit the "Apply" button right away and also select the "php5" package, that will deselect the worker package and select the prefork package. In any case, you should definitely double-check that you're installing prefork, particularly if you're installing on a different Ubuntu version than 9.04.

Also, Chris Lamb posted instructions on installing FogBugz into Debian Lenny; they contain a few configuration tweaks that might be worth a look.