Linux software audio mixing with FlightGear


This article describes how, with a soundcard that does not do hardware mixing, you can run FlightGear alongside other audio applications, such as Festival for text-to-speech and TeamSpeak, by using ALSA software mixing (the dmix and dsnoop plugins).


Audio mixing what?

One of the most common sound problems on Linux is that, on some systems, only one application can play sound at a time. This happens when the soundcard itself does not do audio mixing (i.e. hardware mixing).

Audio mixing allows multiple audio (PCM) streams to be mixed together and played at the same time. If the soundcard supports it, it is generally better and more efficient to do the mixing in hardware.

Software audio mixing is useful when your soundcard is not capable of hardware mixing. Under Linux, software mixing can be done by ALSA's dmix / dsnoop plugins, or by userspace daemons such as ESD and ARTS. Software mixing uses your CPU to mix multiple audio streams into one before passing the audio data to your soundcard for playback.

On Windows, software mixing is enabled by default and works transparently, so most Windows users never find out whether their soundcard can mix in hardware. Whether that is good or bad is debatable, and out of scope for this article.

With ALSA, an easy way to test whether your soundcard supports hardware mixing is to run the following command twice, say in two different terminals:

$ aplay -D hw:0,0 test.wav

This tells aplay, the ALSA audio player, to play the file test.wav using the first sound hardware directly, without using any plugin or any software mixing. For a soundcard without hardware mixing support, the second aplay would fail, and might look like this:

$ aplay -D hw:0,0 test.wav
aplay: main:547: audio open error: Device or resource busy

You could also find out from ALSA's driver info, by doing:

$ cat /proc/asound/pcm 
00-00: Intel ICH : Intel 82801DB-ICH4 : playback 1 : capture 1
00-01: Intel ICH - MIC ADC : Intel 82801DB-ICH4 - MIC ADC : capture 1
00-02: Intel ICH - MIC2 ADC : Intel 82801DB-ICH4 - MIC2 ADC : capture 1
00-03: Intel ICH - ADC2 : Intel 82801DB-ICH4 - ADC2 : capture 1
00-04: Intel ICH - IEC958 : Intel 82801DB-ICH4 - IEC958 : playback 1
01-00: Intel ICH - Modem : Intel 82801DB-ICH4 Modem - Modem : playback 1 : capture 1

Notice the "playback 1" on the first line, indicating that the device supports only one PCM playback stream at a time.

With a soundcard that supports hardware audio mixing, the same check looks like this:

$ cat /proc/asound/pcm 
00-00: emu10k1 : EMU10K1 : playback 32 : capture 1
00-01: emu10k1 mic : EMU10K1 MIC : capture 1
00-02: emu10k1 efx : EMU10K1 EFX : playback 8 : capture 1

As you can see, the first line shows that this device supports 32 simultaneous PCM playback streams.


Prerequisites

This article assumes you have a basic understanding and knowledge of Linux. For instance:

  • ~/ refers to your HOME directory. If your user name is blah, then ~/ normally would be /home/blah/. And so ~/.asoundrc would mean /home/blah/.asoundrc
  • Files with names starting with a dot (".") are normally hidden. If you're using a file manager application, remember to enable showing hidden files.
  • If a "dot rc" file (e.g. .asoundrc, .openalrc) does not exist, simply create it.

This article assumes you have ALSA set up and working with your soundcard(s).


The concept

It is simple. You want to get all the applications to use the ALSA dmix plugin for PCM audio playback, and sometimes also the dsnoop plugin for audio recording.

According to the ALSA project, the dmix plugin is enabled and used by default since ALSA 1.0.9rc2, though it does not seem to work out of the box for everyone.

There are a lot of pages on the net with different customized ALSA configurations (e.g. .asoundrc) to use dmix by default. This article's approach is NOT to touch any ALSA configuration file, but to configure each application individually.

In general, for applications written with ALSA support, all you usually need is to get them to use the device "plug:dmix". For example, with the ALSA command line player aplay, you run:

$ aplay -D "plug:dmix" test.wav

It's a good idea to run this to test if your dmix works.


For applications which do not have ALSA support (i.e. which use OSS), you have a few choices:

aoss
A wrapper script which runs an application with the ALSA OSS emulation. It works simply by using LD_PRELOAD to preload libaoss.so, and it *seems* to use dmix automatically if available.
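For example (a hedged sketch; mpg123 here is just a stand-in for any OSS-only player you might have installed):

$ aoss mpg123 test.mp3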


esd
A software audio mixing daemon, typically shipped with GNOME. You can use its wrapper esddsp with other applications. You could run:
$ esd -d plug:dmix


artsd
Another software audio mixing daemon, typically shipped with KDE. You can use its wrapper artsdsp with other applications. You could run:
$ artsd -D plug:dmix


However, not all applications work with all of these wrappers. For example, festival does not seem to work properly with aoss or artsd. If one approach fails, it's a good idea to try all three to get an OSS application to play through ALSA.

esd does not seem to handle recording properly. So if you have an OSS application which does audio recording (say TeamSpeak), esd / esddsp will probably not work.

OSS? ALSA?

So how do you tell whether an application uses OSS or ALSA? There are a few simple ways.

Usually if an application uses the audio device via /dev/dsp (or /dev/dsp0, /dev/dsp1, etc), then it is probably using OSS. With ALSA, you normally specify an audio device using a string, which might look something like "hw:0,0" or "plug:dmix", or sometimes "ALSA:default", depending on the application itself.
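As a quick check of what ALSA hardware is present on your system, you can list the sound cards and devices ALSA knows about (note this lists hardware devices, not plugin names such as plug:dmix):

$ aplay -l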

ldd

ldd prints what shared libraries an application depends on.

Taking aplay again as an example:

$ ldd /usr/bin/aplay
linux-gate.so.1 =>  (0xffffe000)
libasound.so.2 => /usr/lib/libasound.so.2 (0x40037000)
libm.so.6 => /lib/tls/libm.so.6 (0x400f9000)
libdl.so.2 => /lib/tls/libdl.so.2 (0x4011f000)
libpthread.so.0 => /lib/tls/libpthread.so.0 (0x40123000)
libc.so.6 => /lib/tls/libc.so.6 (0x40135000)
/lib/ld-linux.so.2 (0x40000000)

Note libasound.so.2, which is the ALSA library; this means aplay *can* use ALSA, but it does not have to. It is up to an application to use whatever audio support it wants. Applications like mplayer and xine, for example, can do audio playback through many different audio libraries (OSS, ALSA, ESD, ARTS, and more).
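For players of that kind there is often a way to list the compiled-in audio output drivers; with mplayer, for instance, the following should list them (assuming a reasonably standard build):

$ mplayer -ao help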

strace

strace is a rather powerful tool, which traces and prints system calls of a running application.

Taking mpg123 as an example:

$ strace -f -e open /usr/bin/mpg123 -q test.mp3
open("/etc/ld.so.cache", O_RDONLY)      = 3
open("/lib/tls/libm.so.6", O_RDONLY)    = 3
open("/lib/tls/libc.so.6", O_RDONLY)    = 3
open("/dev/dsp", O_WRONLY)              = 3
open("test.mp3", O_RDONLY)           = 3
open("/dev/dsp", O_WRONLY)              = 4
open("/dev/dsp", O_WRONLY)              = 4

This shows all calls to the open() system call made by this running instance of mpg123. As you can see, it is opening /dev/dsp, which hints that it is using OSS.

For an ALSA application, you will see a lot of open() calls to devices like /dev/snd/controlC0, /dev/aloadC2, /dev/snd/pcmC0D0p, etc. (Try strace with aplay!)
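For example, tracing the aplay command from earlier (a hedged sketch; the grep is only there to cut the output down to the interesting lines):

$ strace -f -e open aplay -D plug:dmix test.wav 2>&1 | grep /dev/snd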


FlightGear

FlightGear uses OpenAL (via SimGear) for audio playback, and OpenAL supports ALSA, ESD and ARTS. However you have to make sure your OpenAL is built with the audio support you need. Run configure --help if you are building your own OpenAL.
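One way to check an already-installed OpenAL is ldd, as described above (the library path below is only a guess and may differ on your system; the pattern matches the ALSA, ESD and ARTS client libraries):

$ ldd /usr/lib/libopenal.so | grep -E 'asound|esd|artsc'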


To get OpenAL to use a particular ALSA device, put these lines in your ~/.openalrc:

(define devices '(alsa))
(define alsa-out-device "plug:dmix")


To get OpenAL to use ESD:

(define devices '(esd))


And to get OpenAL to use ARTS:

(define devices '(arts))


Festival

There are two ways to get Festival to use ALSA.

ESD

Festival supports a couple of audio methods for playback, one of which is ESD. To have it use ESD by default, put this in your ~/.festivalrc:

(Parameter.set 'Audio_Method 'esdaudio)

And then run festival as usual. So:

$ festival --server

Of course, you'll have to make sure your ESD is running, for example:

$ esd -d plug:dmix


If for some reason this does not work, you could always try using esddsp, which means you would be running:

$ esddsp festival --server

Test your festival server by:

$ echo '(SayText "Hello")' | festival_client

Using the Audio_Command parameter

You can configure festival to run another application for playing audio. In this case, we could use aplay to help us. In your ~/.festivalrc put:

(Parameter.set 'Audio_Command "aplay -D plug:dmix -q -c 1 -t raw -f s16 -r $SR $FILE")
(Parameter.set 'Audio_Method 'Audio_Command)

$SR is the sampling rate, and $FILE is the audio data file generated by festival for playback.
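For example, if festival synthesizes 16 kHz audio into a temporary file (the file name below is purely hypothetical), the expanded command would look roughly like:

$ aplay -D plug:dmix -q -c 1 -t raw -f s16 -r 16000 /tmp/festival_audio.raw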


TeamSpeak

TeamSpeak works with aoss and artsd / artsdsp. It does not work with esd / esddsp, because esd does not seem to handle recording properly.

aoss

To run TeamSpeak, do:

$ aoss TeamSpeak

arts

You have to run artsd with -d, enabling full-duplex. So:

$ artsd -d -D plug:duplex

You'll also need the ~/.asoundrc described below.

Then you have to run TeamSpeak.bin instead of TeamSpeak, because artsdsp can only handle the binary itself. You will also need LD_LIBRARY_PATH set to where TeamSpeak is installed. For example, if you have TeamSpeak installed at /opt/TeamSpeak/, you should run:

$ LD_LIBRARY_PATH=/opt/TeamSpeak artsdsp /opt/TeamSpeak/TeamSpeak.bin

~/.asoundrc

Usually you shouldn't need a custom ALSA configuration. But just in case, here is a simple ~/.asoundrc to get you started:

pcm.duplex {
    type asym
    playback.pcm "dmix"
    capture.pcm "dsnoop"
}

pcm.!default {
    type plug
    slave.pcm "duplex"
}

This is the same as specifying "plug:duplex" as the audio device for all applications, for instance:

$ aplay -D plug:duplex test.wav

$ arecord -D plug:duplex test.wav

$ esd -d plug:duplex

$ artsd -d -D plug:duplex

So with this configuration, you do not have to specify which ALSA device applications should use. As long as the application uses ALSA, it will be using the dmix and dsnoop plugins.

With FlightGear/OpenAL, however (I'm not sure why), you may still need to specify alsa-out-device in your ~/.openalrc:

(define devices '(alsa))
(define alsa-out-device "plug:duplex")


GNOME and KDE

If you're using GNOME or KDE, you will probably have esd or artsd running already. In this case, you may also want to try the ~/.asoundrc above, which sets dmix and dsnoop as the defaults for ALSA. You need to restart esd or artsd after making changes to ~/.asoundrc. Make sure your esd or artsd is using ALSA, too.
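One rough way to check that last point is ldd again (the path to the esd binary may differ on your system, and artsd can be checked the same way):

$ ldd $(which esd) | grep asound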

(TODO: Anyone who uses GNOME or KDE could add more descriptions here, thanks)


When things don't work

  • aoss honours your ALSA configuration, and hence your ~/.asoundrc, so make sure it is correct. Use aplay and arecord to test your setup (see the example below this list).
  • Another thing to try is to temporarily rename or move away your ~/.asoundrc, and then test everything again.
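For example, a quick playback-plus-capture test of the pcm.duplex setup above (a hedged sketch; any test.wav will do, and the 5-second capture length is arbitrary):

$ arecord -D plug:duplex -d 5 capture.wav &
$ aplay -D plug:duplex test.wav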