FlightGear build server

From FlightGear wiki
= Intro =


A *prototype* of a [http://hudson-ci.org/ Hudson] based [http://en.wikipedia.org/wiki/Continuous_integration build server] for building FG (including OSG and SimGear) can be found at http://zakalawe.ath.cx:8080/


This is currently running on a core developer's home box. The server will need a proper home if it moves beyond the prototype stage.


For people who don't know, a build server talks to some slaves, and grabs/builds/tests/packages code. The current server is talking to one slave, which is an Ubuntu VM (virtual machine) building the 'next' branch on Gitorious. Any slave could be a VM, of course - they use CPU resources while building, but unlike other projects, our commit rate isn't that high, so the slaves will be idle most of the time. (A Mac slave is also possible, but requires some more work.)


'''Note:''' If anyone wishes to volunteer a proper server (with a reasonably symmetric connection) to run Hudson, please get in touch via the [http://www.flightgear.org/mail.html mailing list] or the [http://flightgear.org/forums/ FlightGear forums] - any Unix will do, and for Ubuntu/Debian there's an easy apt-get source available. All the setup can be done remotely, given SSH access. The disk, memory, CPU and bandwidth requirements are pretty moderate, due to our low commit rate.
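For reference, the Hudson project's advertised Debian/Ubuntu setup at the time looked roughly like the sketch below; the repository URL and package name are taken from the Hudson site and should be verified before use. The script only prints the commands (a dry run), since the real ones need root:

```shell
#!/bin/sh
# Sketch only: the Hudson project's Debian/Ubuntu apt source, printed as a
# dry run rather than executed (installing needs root). Verify the URLs
# against the Hudson site before use.
REPO_LINE="deb http://hudson-ci.org/debian binary/"
KEY_URL="http://hudson-ci.org/debian/hudson-ci.org.key"
cat <<EOF
wget -q -O - $KEY_URL | sudo apt-key add -
echo "$REPO_LINE" | sudo tee -a /etc/apt/sources.list
sudo apt-get update && sudo apt-get install hudson
EOF
```

After that, the Hudson web interface is served on port 8080 by default, matching the prototype URL above.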


= Hosting Options =
If you know any others, please do feel free to add new hosting options here. Some of these are not necessarily useful for directly hosting Hudson, but instead for building FlightGear on different platforms using SSH. This applies in particular to the various build farms.
* http://gcc.gnu.org/wiki/CompileFarm#How_to_Get_Involved.3F
* http://en.opensuse.org/Build_Service
* http://hub.opensolaris.org/bin/view/Community+Group+testing/testfarm
* http://www.metamodul.com/10.html
* http://www.gnu.org/software/hurd/public_hurd_boxen.html
* http://www-03.ibm.com/systems/z/os/linux/support/lcds/



Revision as of 11:12, 29 June 2010



= Status 06/2010 =

The Mac build is pretty close to producing a nightly, though we still need to fix a genuine (and long-standing) configuration issue on Mac. These build from Gitorious 'next', and there will probably be experiments to make the server complain to IRC, or even to the mailing list, when the build breaks. The Hudson build system is still ticking away (the Windows-slave VM is only booted up occasionally, as it's a resource hog).

For any build, Hudson uses the Git changelogs to report what (and by whom!) is new in the build.
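As a rough illustration of the underlying git query (using a throwaway repository created just for the demonstration, since the real job configuration isn't shown here):

```shell
#!/bin/sh
# Demonstrates the kind of query Hudson makes per build: which commits
# (and by whom) are new since the previous build. The repository and the
# commits are throwaway ones created here for the demonstration.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=alice -c user.email=alice@example.com \
    commit -q --allow-empty -m "first change"
previous_build=$(git rev-parse HEAD)
git -c user.name=bob -c user.email=bob@example.com \
    commit -q --allow-empty -m "second change"
# Everything new since the last build, with author names:
git log --pretty='%an: %s' "$previous_build"..HEAD
```

Hudson records the commit the previous build used, so its changelog is essentially this range query formatted for the web page.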

Currently the master is being used to do the Linux builds, because it was easy - no particular reason it has to be done that way, though.

It does chew up a bit of disk space, since the master stores the artifacts for the last N builds, where N is configurable. The artifacts are a hundred megabytes or so, since it's all the header files, libs and binaries, though compressed of course.

= Goals =

The objective of such systems is that there should be *zero* human steps to create a release - not just out of laziness, but for repeatability. I.e. don't write a checklist or 'howto' for creating a release; write a shell script (or several) that does the steps, and check those scripts into a source control system, too.
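As a sketch of what 'the checklist as a script' means - every step name below is hypothetical, and a real script would actually tag, build and package rather than just print:

```shell
#!/bin/sh
# Hypothetical release script: each manual checklist step becomes a
# function, so the whole sequence is repeatable and can live in source
# control. The step bodies just print here; real ones would do the work.
set -e

checkout_sources() { echo "step: check out the tagged sources"; }
build_binaries()   { echo "step: build the binaries"; }
package_release()  { echo "step: package the release artifacts"; }

checkout_sources
build_binaries
package_release
echo "release complete"
```

The payoff is that a broken step fails loudly and reproducibly, instead of silently depending on one person remembering the checklist.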

In general, such systems are good for capturing how repeatable a build process is - and the experiences on each of the Linux/Mac/Windows slaves seem to confirm this.

From now on, when people report that the 'current code doesn't compile', we can direct them to the Hudson page.

= Benefits =

* lets developers know 'instantly' (within a few minutes) if their change broke 'some other platform', for example 64-bit or Mac (or Windows) (this is the big one, but only matters for developers)
* it can run tests automatically (although right now our test suite is pretty much zero)
* builds can be archived and uploaded somewhere. This doesn't help Linux much, but on Mac (and Windows, when it works), this means anyone can download a latest build and test it, with no need to install compilers, libraries or anything - just download a .zip and run bleeding-edge FG.

The catch is that, for this to be nice, some scripting is required. The current Mac slave produces a zip, but you need to know some Terminal magic to actually run the code (set DYLD_LIBRARY_PATH, basically).

= Issues =

* The current Mac slave produces a zip, but you need to know some Terminal magic to actually run the code (set DYLD_LIBRARY_PATH, basically).
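One possible fix is to ship a small launcher script inside the zip, so the user never types the Terminal magic themselves. The layout below (a lib/ directory next to an fgfs binary) is an assumption about the zip's contents, not its actual structure:

```shell
#!/bin/sh
# Hypothetical launcher for the Mac zip: point DYLD_LIBRARY_PATH at the
# bundled libraries, then start the program. The lib/ and fgfs paths are
# assumptions about the zip layout, made for this sketch.
here=$(cd "$(dirname "$0")" && pwd)
DYLD_LIBRARY_PATH="$here/lib${DYLD_LIBRARY_PATH:+:$DYLD_LIBRARY_PATH}"
export DYLD_LIBRARY_PATH
echo "launching with DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH"
# exec "$here/fgfs" "$@"   # commented out: this sketch only sets the path
```

With something like this in the archive, 'download a .zip and run it' would hold on Mac too.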

= Plans =

The configuration is exportable as XML files, and the server is currently using the official Hudson apt-get package for Ubuntu, so it's a fairly repeatable setup. Configuring the Windows slave VM with MinGW is proving the biggest hassle - OSG is working.

'Soon' there will be a WinXP slave, with a MinGW build. Hopefully this will even extend to an NSIS installer script, if Fred has one lying around. At that point we should have nightly installers available for Windows, and a happier Fred. (A Visual Studio build is also possible, but requires more interaction with someone else who has an externally-addressable/tunnel-able box with VS installed.)


At which point, doing a release means clicking a button on a webpage (on Hudson), and letting the slaves grind away for an hour or so. Magic!

= Options =

* Another thing the server can do is email/IRC people when the build breaks on Linux / FreeBSD / Mac / Win due to a commit - obviously very handy for the devs. Yet another thing it can do is run test suites - unfortunately we don't have many such tests.
* If anyone wants to get into providing nightly .debs or .rpms, that could also be done, but it requires people who know those systems, and again can provide a suitable externally-addressable slave to run the builds.
* If there are other configurations people wish to test (64-bit Linux, in particular), get in touch and they can be added.
* If it's just for running the monitor, then we probably should talk about putting it onto The MapServer as well.
* Build jobs can run arbitrary shell scripts - they can tag things in CVS or Git, they can create tarballs, upload files to SFTP/FTP servers, the works. So, if Durk/Curt/Fred could codify, somewhere, the steps (in terms of 'things doable in a shell/.bat script') to create an FG pre-release and final-release, the process can be automated.
* Set up a cross-compiler version of gcc at flightgear.org to automatically create binary packages (releases) of FlightGear for platforms such as Win32.
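To make the 'arbitrary shell scripts' point concrete, a post-build step might look like the sketch below; the file and directory names are invented, and the upload is only printed rather than performed:

```shell
#!/bin/sh
# Hypothetical Hudson post-build step: collect the build output into a
# tarball, then hand it off for upload. All paths are invented for this
# sketch, and the upload command is only printed, not run.
set -e
work=$(mktemp -d)
mkdir -p "$work/dist"
echo "placeholder for the built binaries" > "$work/dist/fgfs"
tar -czf "$work/fg-nightly.tar.gz" -C "$work" dist
echo "would run: sftp upload of $work/fg-nightly.tar.gz"
```

Anything expressible this way - tagging, packaging, uploading - can be hung off a build job without touching the server itself.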

= Related Discussions =