# Asterisk MusicOnHold Notes

## History

This is the first major re-write of this document. Previous versions are available in the git repository. Some of this work is the result of paid work; however, the information I am sharing here does not undercut that contracted work in any way that would stop me from sharing it. Most of it is simply a better understanding of how musiconhold handles external input, since my previous motivation was "just get it working" rather than "get it working to fit the specifications of a project".

## Streaming MusicOnHold

When it comes to supplying musiconhold, Asterisk offers numerous options. The two major ones are:

- file-based playback
- custom application-supplied playback

File-based playback is just that: Asterisk itself directly plays files into a channel when called. This seems easy, but it comes with a couple of drawbacks. The first is that it provides an experience nothing like classic music-on-hold systems, starting every single instance at the beginning of a track. It also spawns a playback instance for every person placed on hold, which can eat into system resources if you've got a few hundred threads decoding files for a few hundred callers on hold. You really don't want your PBX machine doing a whole lot else, because real-time communication protocols aimed at low latency aren't waiting around for your data. In fact, this could be an argument against musiconhold in general; however, if you pretend that not having it is not an option, then it's a good idea to reduce the load other processes have to carry.

Using a streaming source for musiconhold reduces a couple of these issues while introducing its own. If configured properly, you can feed Asterisk the native format for your PBX, meaning it literally just has to pull this stream and pipe it into channels. No conversion required... at least none that Asterisk has to do. The audio is still converted somewhere; it just may be outside of Asterisk itself. (Asterisk may still process the audio, as in the case with VOLUME, resulting in a conversion.) However, when pulling from a streaming source, you aren't spawning an extra thread for each caller placed on hold: you have a single source as opposed to many. It does, however, restore that feeling of a traditional hold system, where you may get dumped into it mid-song.

It does not, however, fix the "bad quality of music on hold music". That is a much more complicated matter.

### Just Tell Asterisk What Codec It Is And Pipe It In

All configuration for musiconhold is done in musiconhold.conf:

```
[default]
mode = custom
application = /path/to/exe -and -additional arguments if necessary
format = slin16
digit = 1
```

While the options can get quite complex, for the sake of an Icecast stream these are the only lines we have to worry about. They specify a class name, the mode, the application we call, the format, and the optional digit. When the res_musiconhold module is started (or reloaded), Asterisk runs the listed application and reads raw samples matching the declared format from its stdout; this means your application has to be able to write to stdout. Let's take a look at two entries from my server as an example:

```
[prine]
mode = custom
application = /usr/bin/ogg123 -q -d raw -f - http://127.0.0.1:8000/prine.ogg
format = slin16
digit = 4

[pacr]
mode = custom
application = /usr/bin/ogg123 -q -d raw -f - http://127.0.0.1:8000/xmas.ogg
format = slin
digit = 5
```

In both of these cases, we are directing ogg123 to output raw samples to stdout (`-f -`) from a locally hosted Ogg stream. ogg123 outputs PCM by default at the stream's native sample rate, so we declare the `slin` and `slin16` formats: `slin` is 8khz PCM audio, while `slin16` invokes an internal sample rate converter set to 16khz. For the sake of optimization, I should use 8khz across the entire selection of streams. However, figuring out exactly how the format declarations worked in musiconhold.conf is a rather recent thing I bothered doing. I don't remember why I originally went with 16khz on the initial ones anyway. I think I encountered it as a limitation in the very *very* early attempts at getting it to work, likely abandoned whatever had that limitation, and just never changed it.

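A quick way to sanity-check a format declaration is to compare the stream's raw byte rate against what each slin variant expects. The arithmetic is simple enough to sketch in shell (signed linear is mono, 16-bit, so bytes per second is just the sample rate times two):

```shell
# Raw byte rates Asterisk expects on stdout for each format declaration.
# slin   =  8000 samples/sec * 2 bytes  (8khz signed linear)
# slin16 = 16000 samples/sec * 2 bytes  (16khz signed linear)
echo "slin expects $((8000 * 2)) bytes/sec"
echo "slin16 expects $((16000 * 2)) bytes/sec"
```

If you run your application line by hand and time its output (piping through `wc -c` for a few seconds, for instance) and the rate is off by a factor of two, the format line is the likely culprit.
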
### Caller-Selectable MusicOnHold

If you have multiple musiconhold streams, you can configure them so the caller can switch between them from the keypad.

```
digit=
```

Add this option to your musiconhold class configuration and select a single DTMF digit (I've not tested `*` or `#`, but they should work). When the caller is on hold, pressing the appropriate digit will switch them to that class's stream.

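For testing the digits without waiting to be placed on hold, a minimal dialplan sketch works; the extension number here is hypothetical, and `prine` is the class from earlier:

```
; extensions.conf (hypothetical test extension)
exten => 600,1,Answer()
 same => n,MusicOnHold(prine)
```

Dial 600 and you're listening to the class directly; the configured digits should switch classes just as they would on hold.
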
### CPU Usage

There is a very slight difference in the amount of CPU usage for an 8khz stream versus a 16khz stream. After an uptime of 23 hours, here is the amount of CPU time consumed by each:

- ogg123 16khz: 2:35.07
- ogg123 8khz: 1:40.30

It's only a difference of a minute in CPU time; however, that's a fair indicator that feeding it 8khz ogg does result in some CPU savings.

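To put those numbers in perspective, here's the same data expressed as a share of the 23-hour window (a quick awk sketch; the times are the ones listed above):

```shell
# Convert the CPU times above to a percentage of a 23-hour uptime.
awk 'BEGIN {
  up = 23 * 3600                                     # uptime in seconds
  printf "16khz: %.2f%%\n", (2*60 + 35.07) / up * 100
  printf "8khz:  %.2f%%\n", (1*60 + 40.30) / up * 100
}'
# 16khz: 0.19%
# 8khz:  0.12%
```

Both are tiny, but the 8khz stream really is doing roughly a third less work.
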
## Using A Script For Application - Original Method

Originally I came up with this roundabout method of getting mplayer involved, as it could literally just output 8khz mulaw directly into Asterisk. The downside is that mplayer can't write its audio to stdout. The easy way around this was to call mplayer from a script, have it write to a named pipe, and then cat the pipe to stdout.

```
PIPE="/tmp/asterisk-cmoh-pipe.$$"
mknod $PIPE p

mplayer http://server:port/mountpoint.ogg -softvol -really-quiet -quiet -ao pcm:file=$PIPE -af resample=8000,channels=1,format=mulaw,volume=-6:0 2>/dev/null | cat $PIPE 2>/dev/null
```

I ran this for over a year. It worked, hence I never bothered to improve upon it. In fact, the script method was the easiest way to make sure each ices2 instance had started and was streaming:

```
#!/bin/bash

if [ `ps -aux | grep -c "/path/to/ices/conf.xml"` -lt 2 ]; then
    /usr/bin/ices2 /path/to/ices/conf.xml
fi
PIPE="/tmp/asterisk-cmoh-pipe.$$"
mknod $PIPE p

mplayer http://server:port/mountpoint.ogg -softvol -really-quiet -quiet -ao pcm:file=$PIPE -af resample=8000,channels=1,format=mulaw,volume=-6:0 2>/dev/null | cat $PIPE 2>/dev/null
```

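A side note on that `ps | grep` check: the `-lt 2` comparison exists because grep matches its own command line in the process list. A hedged alternative sketch using `pgrep -f` (same hypothetical paths as above) avoids the self-match entirely:

```shell
#!/bin/bash
# pgrep -f matches against the full command line but, unlike ps | grep,
# never counts itself, so no "-lt 2" fudge factor is needed.
if ! pgrep -f "/path/to/ices/conf.xml" > /dev/null; then
    /usr/bin/ices2 /path/to/ices/conf.xml
fi
```
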
I changed distributions a while back (and I may switch again) and was greeted with an mplayer that tried to use 100% CPU for each stream. As a quick fix, I replaced the mplayer line with one involving ogg123 and sox. This same combination would later be used in an alternate, "icecast-free" setup that was abandoned.

```
#!/bin/bash

if [ `ps -aux | grep -c "/path/to/ices/conf.xml"` -lt 2 ]; then
    /usr/bin/ices2 /path/to/ices/conf.xml
fi
PIPE="/tmp/asterisk-cmoh-pipe.$$"
mknod $PIPE p
ogg123 -q -d au -f - http://127.0.0.1:8000/mountpoint.ogg | sox -r 16000 -t au - -r 8000 -c 1 -t raw - 2>/dev/null
```

However... figuring out how to just tell Asterisk to take in 16khz from ogg123 directly eliminated the need for sox in the chain. The need for the script itself went away once I got the replacement distro starting the ices streams reliably; it was also easy to tell it that asterisk depends on ices... which depends on icecast.

However, the script remains perfectly valid if you have to use named pipes, or really want to ensure you've got a streaming source before Asterisk attempts to run the application.

## Troubleshooting

In just about every case, the following commands are your friends:

`module reload res_musiconhold` in the Asterisk console causes it to reload musiconhold and activate any changes. Running streams aren't affected.

`sudo asterisk -rx 'module reload res_musiconhold'` on the command line accomplishes the same thing without being in the Asterisk console.

As briefly mentioned, any streams that are running and unchanged are not affected; applications no longer used are exited, and applications not yet running are launched.

### HELP! `res_musiconhold.c:701 monmp3thread: poll() failed: Interrupted system call` IS FILLING MY ASTERISK CONSOLE!

Blindly type `exit` and hit enter; this should put you back at the shell. Something went fatally wrong with your playback application: it either didn't load, isn't sending data, or is sending really bad data. Go back to `musiconhold.conf`, undo whatever you just changed, and save the file. Run `sudo asterisk -rx 'module reload res_musiconhold'`. You should get confirmation that the module was reloaded. Return to the Asterisk console and the spam should have stopped. You will need to figure out what failed from there.

In some cases, doing this may cause Asterisk to crash. Sorry. I don't know why that happens.

### No Audio!

"No Audio" is difficult because it can be due either to the lack of an audio stream causing the playback application to fail, or to an error in the command you call for the playback application. You'll need to determine which of the two it is, make corrections, and reload the module.

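A quick way to narrow it down is to run the exact `application` line by hand and count the bytes it writes per second; for `slin16` that should be roughly 32000 B/s (16000 samples × 2 bytes). As a reference point for the arithmetic (using `/dev/zero` as a stand-in for your playback application's stdout):

```shell
# One second of slin16 (16khz, mono, 16-bit) is 32000 bytes.
# /dev/zero stands in for the playback application here.
dd if=/dev/zero bs=32000 count=1 2>/dev/null | wc -c
# prints 32000
```

If your application prints nothing at all, the problem is the stream or the command; if it prints bytes at a healthy rate, the problem is on the Asterisk side.
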
### Everything sounds like demons/chipmunks.

Playback speed wrong? This is a sample rate error. Check that you're using the right slin format option. It is entirely possible to feed Asterisk 16khz PCM and have it play back at 8khz, or vice versa. Fix the format and reload.

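The arithmetic behind the symptom is worth spelling out (a quick sketch; the rates are the two formats used above). Asterisk consumes samples at the declared rate, so a mismatch changes the apparent speed by the ratio of the two:

```shell
# Apparent playback speed = declared rate / actual rate of the incoming PCM.
awk 'BEGIN {
  printf "16khz source declared as slin (8khz):   %.1fx speed (slow, demons)\n", 8000 / 16000
  printf "8khz source declared as slin16 (16khz): %.1fx speed (fast, chipmunks)\n", 16000 / 8000
}'
```
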
### It's just static!

This is a non-fatal codec mismatch. I've seen it when mplayer was outputting mulaw but the Asterisk server was expecting PCM, or vice versa.

### The Audio On The Phone Is Way Behind What I Get Out Of Icecast

I haven't figured this one out. I don't even know how it would buffer 15 minutes of audio, unless it's the result of a dropped sample here or there.

## Experimental Non-Icecast Method I Don't Recommend

### AKA "Accidental Instructions For System Wide Pulseaudio"

***This entire thing winds up using entirely too much CPU. It's not just more than the above method; it's entirely too much for the task (in my opinion). The above method uses less than 1% CPU per stream; the below method uses about 15% per stream. Its only real value is as instructions for doing system-wide PulseAudio.***

Let me start by saying this: getting this to work started out medium-hard before going full-fledged pull-my-hair-out. That's because there are a *number* of additional steps, much more involved than setting up and configuring icecast and ices2. Most of these steps weren't documented, and I had to piece them together. On top of that, debugging sound problems on a headless machine posed its own limitations; I had to test things on a local VM before attempting to implement them on the remote machine.

The concept sounds simple: just play some files from a playlist to a pipe, pipe it around, and pipe it into Asterisk. Okay, hold up. It sounds easy, but is it? There's a little issue in that if you tell most players to play back to a file and not a sound device, they decode as fast as they can rather than play in real time. We don't want that, can't handle it, and instead need to simulate real-time playback. The only way to get this done effectively... was PulseAudio. I tried ALSA, but this broke on Ubuntu; they don't actually include the stuff needed to configure it. So you'll have a working ALSA system, install alsa-utils, and lose it all.

But while we can create null sinks in PulseAudio stupidly easily, there's a *major drawback*: it usually runs at the user level. Why does this matter? The asterisk user **never actually logs in** to the system... and therefore the PulseAudio server never gets executed. So log in and activate it? I tried that. What you have to do... is run PulseAudio system-wide and then configure it so the non-authenticated asterisk account can actually use it.

Install pulseaudio: `sudo apt install pulseaudio`

Create `/etc/systemd/system/pulseaudio.service` and fill it with this:

```
[Unit]
Description=PulseAudio Daemon

[Service]
Type=simple
PrivateTmp=true
ExecStart=/usr/bin/pulseaudio --system --realtime --disallow-exit --no-cpu-limit

[Install]
WantedBy=multi-user.target
```

Open `/etc/pulse/system.pa` in an editor and modify the `load-module module-native-protocol-unix` line:

```
load-module module-native-protocol-unix auth-anonymous=1
```


*(Yes, you are just adding `auth-anonymous=1` to it.)*

Now you'll want the null sinks to persist, so add this line to the bottom of system.pa:

`load-module module-null-sink sink_name=moh1`

You can duplicate this line (with a different `sink_name` each time) for as many null sinks as you want. *You don't need a sink for each stream.*

Now refresh services and boot everything up.

```
sudo systemctl daemon-reload
sudo systemctl enable pulseaudio
sudo systemctl start pulseaudio
```

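If all went well, the daemon and the sink can be verified from the shell. These only prove anything against a live system-wide daemon, so they're offered as a sketch; `pactl` ships in the pulseaudio-utils package on Debian/Ubuntu:

```shell
# Confirm the system-wide daemon answers and the null sink exists.
pactl info               # should report the server, not fail to connect
pactl list short sinks   # should include a line for "moh1"
```
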
Got it? Good. That's it. That took me ***forever*** to figure out. You're welcome.

For the rest of the stuff we're using ogg123 and sox, so install `vorbis-tools` and `sox`.

Create a shell script somewhere asterisk can read it:

```
#!/bin/bash
ogg123 -qZ -d pulse -o sink:moh1 -d au -f - -@ /etc/asterisk/list.m3u 2>/dev/null | sox -r 16000 -t au - -r 8000 -c 1 -t raw - 2>/dev/null
```

That's it. It's a one-liner, but since it's piped we can't just stick it in musiconhold.conf. We don't need named pipes because ogg123 and sox both support stdin/stdout.

*I'm probably wrong. Being a single line, it probably would work directly as an application call in musiconhold.conf. If it weren't such a lousy method I'd verify... but who cares. This method sucks.*

ogg123 randomly plays from the playlist (`-Z`) to `sink:moh1` as well as to stdout. The playback to the sink is just to limit the thing to real-time speed; it's literally the only reason we need Pulse.

After that we pipe it into sox, which resamples it from its 16khz down to 8khz and spits it to stdout. You could totally do this with just mplayer in a similar manner as above, swapping your icecast playlist for the local one and keeping the same named-pipe method. The problem I had with that (and the reason I went down this cliff of a different method): really nasty clicks between tracks. It was something between the player and the pipe.

When I originally came up with this ogg123/sox pipe, I had hopes that it would be less CPU intensive: fewer processes running, less complexity, more streamlined. The reality was entirely different. After a couple of days I checked my load averages; they were somewhere between .38 and .48 over that period. This was pretty high, especially compared to what the icecast/ices/mplayer method pulled... even with that one process constantly slamming for 100% CPU. Without any of the MOH streams running, the system load was back down to .01 over the course of about a half hour. I did some debugging and fixed that misbehaving mplayer setup, loaded it all up, and checked the load average after a few hours. It was only .02.

The Icecast2/ices2/mplayer method mentioned above is much more CPU-efficient than this one. But it's still something I'll keep up as an option.

Your MOH streams may "skip" or have other minor defects for the first few minutes. Mine cleared up after a few minutes and sounded like they had fewer buffer issues than the icecast ones.