FlightGear build server

Intro

A *prototype* of a Hudson-based build server for building FG (including OSG and SimGear) can be found at http://zakalawe.ath.cx:8080/

This is currently running on a core developer's home box:

"I've found some spare hardware to run the build server (Hudson) on, and it seems bearable for me,
but I suspect less pleasant for people on the other end of my cable connection."[1]

So, the server will need a proper home if it moves beyond the prototype stage.

For people who don't know, a build server talks to some slaves, and grabs/builds/tests/packages code. The current server is talking to one slave, an Ubuntu VM (virtual machine) which is building the 'next' branch on Gitorious. Any slave could be a VM, of course - they use CPU resources while building, but unlike other projects our commit rate isn't that high, so the slaves will be idle most of the time. (A Mac slave is also possible, but requires some more work.)
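
To make the mechanics concrete, a slave's build job boils down to a shell script along these lines. This is a rough sketch only: the repository URLs, install prefix and configure invocation are illustrative, not the actual job configuration, and the options needed to point FlightGear at the freshly built SimGear and OSG are omitted.

 #!/bin/sh
 # Rough sketch of what the Ubuntu slave's build step might run.
 # Hudson provides $WORKSPACE; everything else here is illustrative.
 set -e
 PREFIX=$WORKSPACE/install
 for repo in simgear flightgear; do
     if [ -d "$repo" ]; then
         (cd "$repo" && git fetch origin && git checkout origin/next)
     else
         git clone git://gitorious.org/fg/$repo.git "$repo"
         (cd "$repo" && git checkout origin/next)
     fi
     (cd "$repo" && ./autogen.sh && ./configure --prefix="$PREFIX" && make && make install)
 done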

Note: If anyone wishes to volunteer a proper server (with a reasonably symmetric connection) to run Hudson, please get in touch using the mailing list or the FlightGear forums - any Unix will do, and for Ubuntu/Debian there's an easy apt-get source available. All the setup can be done remotely, given SSH access. The disk, memory, CPU and bandwidth requirements are pretty moderate, due to our low commit rate.
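
For the record, the apt-get route really is just a couple of commands. The repository URL and key location below are the ones the Hudson project published around this time and may have changed since, so treat them as illustrative:

 # Install Hudson from its Debian/Ubuntu repository (run as root).
 # The repository URL and key location are illustrative and may have moved.
 wget -O - http://hudson-ci.org/debian/hudson-ci.org.key | apt-key add -
 echo "deb http://hudson-ci.org/debian binary/" > /etc/apt/sources.list.d/hudson.list
 apt-get update
 apt-get install hudson   # the service listens on port 8080 by default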

Hosting Options

If you know of any others, please do feel free to add new hosting options here. Some of these are not necessarily useful for directly hosting Hudson, but instead for building FlightGear on different platforms using SSH. This applies in particular to the various build farms.

Status 07/2010

The Mac build is pretty close to producing a nightly; a genuine (and long-standing) configuration issue on Mac still needs to be fixed first.

The good news - thanks to some great documentation from Fred, the Windows build is stable, using MSVC no less. You can grab an fgfs.exe (and .pdb) from any successful build, and it should 'just work' if you drop it into a current 2.0 install. Future work on improved packaging of these builds will follow.

The better news - all three platforms are now green (i.e., building correctly), and should update reliably (which will be monitored). At present, the server admin gets emailed when a build breaks, and when it goes green again. These build from the Gitorious 'next' branch, and there will probably be experiments to make Hudson complain to IRC, or even to the mailing list, when the build breaks.

If desired, it is possible to add a mailing list or other individual addresses to the email notifications. Given our commit rate, it is not clear if it warrants a new mailing list or not - it depends how often we break the build :)

For any build, Hudson uses the Git changelogs to report what (and by whom!) is new in the build. Currently the master is being used to do the Linux builds, because it was easy - no particular reason it has to be done that way, though. It does chew a bit of disk space, since the master stores the artifacts for the last N builds, where N is configurable. The artifacts are a hundred megabytes or so - all the header files, libs and binaries - though compressed of course.
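
For reference, what Hudson shows per build is essentially the Git log between the previous build's commit and the current one. The equivalent by hand, with hypothetical placeholder commit ids, would be something like:

 # Changelog between two builds; the two commit ids are placeholders.
 git log --pretty=format:'%h %an: %s' PREVIOUS_BUILD_COMMIT..CURRENT_BUILD_COMMIT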

Goals

The objective of such systems is that there should be *zero* human steps to create a release - not just out of laziness, but for repeatability. I.e., don't write a checklist or 'howto' for creating a release; write a shell script (or several) that does the steps, and check those scripts into source control too.
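
As a minimal sketch of the 'script, not checklist' idea - the individual step scripts named here are hypothetical placeholders, not existing FlightGear scripts:

 #!/bin/sh
 # make-release.sh VERSION - the release 'checklist' as a script.
 # Every step script called below is a hypothetical placeholder.
 set -e
 VERSION=$1
 ./update-version-numbers.sh "$VERSION"
 ./build-all-platforms.sh
 ./package-artifacts.sh "$VERSION"
 ./upload-release.sh "$VERSION"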

In general, such systems are good for capturing how repeatable a build process is - and the experiences on each of the Linux/Mac/Windows slaves seem to confirm this.

Also, when people report that the 'current code doesn't compile', we can direct them to the Hudson page from now on.

Benefits

  • lets developers know 'instantly' (within a few minutes) if their change broke 'some other platform', for example 64-bit or Mac (or Windows) (this is the big one, but only matters for developers)
  • it can run tests automatically (although right now our test suite is pretty much zero)
  • builds can be archived and uploaded somewhere. This doesn't help Linux much, but on Mac (and Windows, when it works), this means anyone can download a latest build and test it, with no need to install compilers, libraries or anything - just download a .zip and run bleeding-edge-FG.

The catch is that making all this convenient requires some scripting; see the Issues section below.

Issues

  • The current Mac slave produces a zip, but you need to know some Terminal magic to actually run the code (set DYLD_LIBRARY_PATH, basically) - a wrapper script along the lines of the sketch below could hide this.
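
A small wrapper shipped inside the zip could hide that magic. This is a sketch only, since the exact directory layout inside the archive (bin/, lib/, a bundled fgdata) is an assumption:

 #!/bin/sh
 # run-fgfs.sh - start the nightly without any Terminal magic.
 # The bin/, lib/ and fgdata layout inside the zip is an assumption.
 HERE=$(cd "$(dirname "$0")" && pwd)
 export DYLD_LIBRARY_PATH="$HERE/lib:$DYLD_LIBRARY_PATH"
 exec "$HERE/bin/fgfs" --fg-root="$HERE/fgdata" "$@"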

Plans

The configuration is exportable as XML files, and the server is currently using the official Hudson apt-get package for Ubuntu, so it's a fairly repeatable setup. Configuring the Windows slave VM with MinGW is proving the biggest hassle; OSG is working.
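
Since everything the server knows lives in plain XML under its home directory (typically /var/lib/hudson for the apt-get package), keeping the setup repeatable can be as simple as archiving those files - a sketch, with the paths as assumptions:

 # Back up the system and per-job configuration (all plain XML).
 # /var/lib/hudson is the usual home for the apt-get package; adjust if
 # HUDSON_HOME points elsewhere.
 tar czf hudson-config-$(date +%Y%m%d).tar.gz \
     /var/lib/hudson/config.xml \
     /var/lib/hudson/jobs/*/config.xml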

'Soon' there will be a WinXP slave, with a MinGW build. Hopefully this will even extend to an NSIS installer script, if Fred has one lying around. At that point we should have nightly installers available for Windows, and a happier Fred. (A Visual Studio build is also possible, but requires more interaction with someone else who has an externally-addressable/tunnel-able box with VS installed.)


At that point, doing a release means clicking a button on the Hudson web page and letting the slaves grind away for an hour or so. Magic!

Options

  • Another thing the server can do, is email/IRC people when the build breaks on Linux / FreeBSD / Mac / Win due to a commit - obviously very handy for the devs.
  • So, another step is to email committers directly when they break the build. If you're a committer to FG or SG, you may see some odd emails.
  • Set up a buildser...@flightgear.org address.
  • Yet another thing it can do is run test suites - unfortunately we don't have many such tests at the moment.
  • If anyone wants to get into providing nightly .debs or .rpms, that could also be done, but it requires people who know those systems, and again someone who can provide a suitable externally addressable slave to run the builds.
  • If there's other configurations people wish to test (64-bit Linux, in particular), get in touch and they can be added.
  • If it's just for running the monitor, then we probably should talk about putting it onto The MapServer as well.
  • Build jobs can run arbitrary shell scripts - they can tag things in CVS or Git, they can create tarballs, upload files to SFTP/FTP servers, the works. So, if Durk/Curt/Fred could codify, somewhere, the steps (in terms of 'things doable in a shell/.bat script') to create an FG pre-release and final-release, the process can be automated (see the sketch after this list).
  • Set up a cross compiler version of gcc at flightgear.org to automatically create binary packages (releases) of FlightGear for platforms such as Win32
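
To illustrate the 'things doable in a shell script' point from the list above - the tag scheme, version and upload host below are hypothetical placeholders, not the project's actual release procedure:

 #!/bin/sh
 # Sketch of release steps a build job could run: tag, tarball, upload.
 # Tag scheme, version and upload host are hypothetical placeholders.
 set -e
 VERSION=$1
 git tag -a "release/$VERSION" -m "Release $VERSION"
 git archive --format=tar --prefix="flightgear-$VERSION/" "release/$VERSION" \
     | gzip > "flightgear-$VERSION.tar.gz"
 scp "flightgear-$VERSION.tar.gz" builds@example.org:/var/www/downloads/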

Related Discussions