Recursively find topmost SVN directory

I’ve been doing some refactoring on the build system for our transcoding package for the Sangoma Transcoding D-Series cards. It is just a simple Makefile; however, there are some non-open-source bits that also need to be built and integrated into the public release. We used to have a convoluted script that copied source files, makefiles, binaries and other stuff into the proper release structure that is made available to customers. I’ve never liked that approach, because you end up with 2 different file trees, one for development and one for release. While there might be situations where that is necessary, in this case it was not, and it was confusing from a developer’s point of view because the same source file may live under /some/public/path/to/file.c or /other/private/file.c depending on whether you are working on the svn tree or on the released .tar.gz tree. The public tree should simply be a subset of the private one (which contains the non-open-source bits).

Anyhow, I had never really spent quality time understanding Makefiles; I knew the basics to create targets and such, but this time I decided to really dig into the fascinating and sometimes frustrating world of Makefiles. While doing this refactoring, I needed to find the topmost directory of our svn tree from an arbitrary subdirectory. I’ve seen some projects require you to execute a shell script at the top directory to set such variables, something like:

export ROOT_SVN=$(pwd)

Then it is required that you execute that script before building anything. Makefiles that are included from different directory levels can then use that variable to refer to the root directory of the project, instead of depending on their own location and using relative paths (which change depending on who is including the file).

I had to ask myself: do I really need to run a script before building just to figure out what the topmost svn directory is? Being an amateur Makefile writer, it took me an hour to figure out a function to do it, and this is what I came up with:

find_root = $(if $(wildcard $1/.svn),$(if $(filter-out /,$1),$(call find_root,$(realpath $1/..),$1),$1),$2)
SVN_ROOT := $(call find_root,$(shell pwd),$(shell pwd))

Ta da!

That recursively looks for the topmost directory containing an .svn entry; if / is reached, then / is returned. It is easily changed to look for any other entry (like .git).

I could not find any solution using Google, so I thought I’d drop this here.

I use the built-in Make functions $(if ), $(wildcard ), $(filter-out ) and $(realpath ); you can find documentation for these functions in the Make documentation here:

Posted in Makefiles | Tagged , | Leave a comment

Astricon 2010

Today (or yesterday, since it is already 1:08am), I spoke at Astricon about the PRI passive call recording feature I developed last year.


Presentation in PDF and PPT available here:

Posted in asterisk, sangoma | Leave a comment

select system call limitation in Linux

Try finding network sample code showing how to accept TCP connections and most likely you will find the select() system call in it. The reason is that select() is the most popular (but not the only, as we will see) system call to wait for I/O on a list of file descriptors.

I am here to warn you: select() has some important limitations to be aware of. I must confess I used select() for a long time without realizing its limitations until, of course, I hit the limits.

About a year ago I started porting a heavily threaded, real-time networking voice application from Windows to Linux. When the code was compiling and running apparently without issues, we started doing scalability and stress tests. We used 32 telephony E1 cards (pretty much like network cards), where each E1 port can handle up to 30 calls. So we’re talking about 32 * 30 = 960 calls in a single server. Knowing in advance that I would need lots of file descriptors (Linux typically defaults to 1024 per process), we used the setrlimit() system call to increase the limit to 10,000, which should be more than enough because a telephony call in this system requires about 4 file descriptors (for the network VoIP connection, the E1-side devices, etc).

At some point during the stress test, calls stopped working and some threads went crazy eating up 99% CPU. After using “ps” and “pstack” to find which threads were the ones going crazy, I found they were the ones waiting for I/O on some file descriptors using select(), like the embedded HTTP server or the network-related code.

Reading the select() documentation carefully, you will find the answer by yourself. “man select” says:

“An fd_set is a fixed size buffer. Executing FD_CLR or FD_SET with a value of fd that is negative or is equal to or larger than FD_SETSIZE will result in undefined behavior.”

So, big deal, you may say: you can split the load across threads so that no select() call watches more than 1024 file descriptors, right? WRONG! Read it twice: the problem is the highest file descriptor value put into an fd_set structure, not the number of file descriptors in the set.

These 2 numbers are related but are in no way the same. Let’s say you have a program that opens 2000 text files (either with open() or fopen()) to scan them for a list of words, and each time you hit a word you must connect and send a network message to some TCP server and read data back from it too. You would probably launch some threads to read the files and another thread to handle the network connection. Even though only 1 thread is using select(), and that thread is providing select() with just 1 file descriptor (the TCP server connection), you cannot guarantee which file descriptor number will be assigned to that network connection. You could try to ensure that you always start the TCP connection before opening any files so you get a low-numbered file descriptor, but that is a very shaky design; if you later want to do other stuff that requires file descriptors (pipes, Unix sockets, etc) you may hit the problem again.

The limitation comes from the way select() works, more concretely from the data type used to represent the list of file descriptors, fd_set. Let’s take a look at fd_set in /usr/include/sys/select.h

You will see a definition pretty much like:

typedef struct  {
    long int fds_bits[32];
} fd_set;

I removed a bunch of macros to make the code clearer. As you see, you have a static array of 32 long ints. This is just a bitmap, where each bit represents a file descriptor number: 32 elements * 8 * sizeof(long int) bits gives 1024 bits on 32-bit platforms. So, if you do FD_SET(10, &fds) to add file descriptor 10 to an fd_set, it will just set the 10th bit of the bitmap to 1. What happens then if you do FD_SET(2000, &fds)? You guessed right: it is unpredictable, likely some sort of array overflow (FD_SET is, at least on my system, implemented using the assembly instruction btsl, bit test and set).

Be aware also that all of this applies to Linux; I am not sure how select() is implemented in other operating systems, like Windows. Given that we did not notice this problem on Windows servers, select is probably implemented differently there.

Solution? Use poll() (or maybe epoll when available). These 2 system calls are not as popular as select(), nor as widely available across operating systems, but whenever possible I recommend using poll(). There may be differences in performance; for example, FD_ISSET() is faster (just checking whether a given bit is set in the bitmap) than iterating over the poll array, but I have found in my applications that the difference is just not critical.

In short, next time you find yourself using select(), think twice before deciding that it is what you need.

Posted in C/C++ | 9 Comments

I hate SELinux

I am not a security-savvy person. Even though I know pretty well how to code defensively to avoid security issues in my C code, my knowledge of securing a Linux system is pretty average (use a firewall, do not run services as root, etc). No wonder I really don’t know how to use or configure SELinux. But there is one thing I know about it: it can be a pain in the ass sometimes. This blog post describes 2 problems I have faced with SELinux. The easy solution would have been to disable SELinux, so I’ll start by showing you how to disable it in case you don’t want to mess with it at all.

– To disable SELinux, edit /etc/selinux/config and change the SELINUX policy from enforcing to disabled.

Be aware that you are disabling security “features”, whatever that means, so you may want to read this other article about disabling SELinux.

I wasn’t lucky enough to have disabling SELinux as an option. Developing with SELinux enabled is a good idea so you notice compatibility problems with SELinux in your development environment before a customer who really *needs* SELinux enabled discovers them in your software, which, from the customer’s point of view, is plainly a bug.

The first problem I noticed, after some CentOS upgrade or a hard-drive change or something along those lines, was that the “ping” command wasn’t working. It’s been a while since I had the problem, so I don’t quite remember the exact error when pinging other systems, but it was most likely something like permission denied. Probably other network commands did not work either, but ping was the one I noticed at that moment. So, I used good old strace to find out what was causing the problem.

The underlying ping problem was that the open() system call was failing with EACCES when trying to open /etc/hosts. However, I was able to “cat /etc/hosts”, so it wasn’t a simple permission problem but a more complex SELinux problem. Eventually I found out that the solution was:

restorecon /etc/hosts

At which point did the SELinux security context for this file get screwed up? I certainly don’t know, but that command restored it. The -Z option of the ls command will show you the SELinux security context of any file.

ls -Z /etc/hosts

The second problem was that some libraries of one of the programs I am responsible for were not being loaded. Again, the problem was due to permission denied errors, this time when loading shared libraries that required text relocation.

The solution was to recompile the shared libraries with the -fPIC option.

I am sure SELinux has its uses; however, I have the feeling that it sometimes makes things more complicated than needed in some environments. I recommend reading this blog post and particularly the comments in there.

Posted in C/C++, GNU/Linux General, linux | 2 Comments


Some weeks ago I found this program called Graphviz. It seems rather interesting, and I can’t wait to use it in any future project documentation I get involved with. It is particularly useful to design/document the state machines that are common in telephony.

Posted in GNU/Linux General | Leave a comment

Quick tip for debugging deadlocks

If you ever find yourself with a deadlock in your application, you can use gdb to attach to it; sometimes you will find one of the threads stuck trying to lock mutex x. Then you need to find out who is currently holding x and therefore deadlocking your thread (and, equally important, why the other thread is not releasing it).

At least in recent libc implementations on Linux, the mutex object has a member named “__owner”. Let me show you what I recently saw when debugging a deadlocked application.

(gdb) f 4
#4  0x0805ab46 in ACE_OS::mutex_lock (m=0xa074248) at include/ace/OS.i:1406
1406      ACE_OSCALL_RETURN (ACE_ADAPT_RETVAL (pthread_mutex_lock (m), ace_result_),
(gdb) p *m
$8 = {__data = {__lock = 2, __count = 0, __owner = 17828, __kind = 0, __nusers = 1, {__spins = 0, __list = {__next = 0x0}}},
  __size = "\002\000\000\000\000\000\000\000\244E\000\000\000\000\000\000\001\000\000\000\000\000\000", __align = 2}

We can see that __owner is 17828. This number is the LWP (light-weight process) id of the thread holding the lock. Now you can go examine that thread’s stack and find out why it is also stuck.

This example also brings up a regular point of confusion for some Linux application developers: what is the difference between an LWP and a POSIX thread id (the pthread_t type in pthread.h)? The difference is that pthread_t is a user-space concept, simply an identifier for the thread library implementing POSIX threads to refer to the thread and its resources, state, etc. The LWP, however, is an implementation detail of how the Linux kernel implements threads, which is done through the “thread group” concept and LWPs, processes that share memory pages and other resources with the other processes in the same thread group.

From the Linux kernel’s point of view the pthread_t value doesn’t mean anything; the LWP id is how you identify threads in the kernel, and LWPs share the same numbering as regular processes, since they are just a special type of process. Knowing this is useful when using utilities like strace: when you want to trace a particular thread of a multithreaded application, you need to provide the LWP of that thread. A common mistake is to provide the process id, which in a multithreaded application is just the LWP of the first thread (the one that started executing main()).

Here is how you get each identifier in a C program:

#include <stdio.h>
#include <unistd.h>
#include <syscall.h>
#include <pthread.h>

int main()
{
  pthread_t tid = pthread_self();
  int sid = syscall(SYS_gettid);
  printf("LWP id is %d\n", sid);
  printf("POSIX thread id is %lu\n", (unsigned long)tid);
  return 0;
}

It’s important to note that getting the POSIX thread id is much faster than getting the LWP id, because pthread_self() is just a library call and libc most likely has the value cached somewhere in user space; there is no need to go down to the kernel. Getting the LWP, on the other hand, requires a call to the syscall() function, which actually executes the requested system call, and that is expensive (well, compared with the time required to enter a simple user-space function).

Posted in C/C++, linux | 2 Comments

Sangoma Tapping Solution for Asterisk

Sangoma has offered a tapping device for T1/E1 lines for quite some time now. This physical tapping works by installing a PN 633 Tap Connection Adapter, which looks like this:

Sangoma Tapping

As you can see, you receive the line from the CPE and plug it into the PN 633, and then plug in the line from the NET too. From the other side of the adapter you pull out 2 cables and connect them to your box with Sangoma boards specially configured in high-impedance mode, which basically means they will be passive and not affect the behavior of your PRI link; they passively monitor the E1/T1 link.

As you can tell from the image, there are also 2 cables involved for a single link; this means you need an A102 card to monitor just 1 E1/T1 link, because one port is used for the tx from the CPE and the other for the tx from the NET. Until today, it was not possible to use Asterisk for this, because Asterisk assumes each card port is meant to send and receive data for a circuit; Asterisk was meant to be an active component of the circuit, not a passive element.

Now Asterisk can do it. I just submitted 2 patches to Digium’s bug tracker: one for LibPRI, the library that takes care of the Q921 and Q931 signaling, and the other for chan_dahdi.c, the Asterisk channel driver used to connect it to PSTN circuits.

LibPRI needed to be modified because it does a lot of state machine checking when receiving a Q921/Q931 frame. For example, if it receives a Q931 “PROCEEDING” message, it is going to check that a call was already in place and that a SETUP message was previously sent. When working as a passive element in the circuit no state checks should be done; we just need to decode the message and return a meaningful telephony event to Asterisk (like a RING event when the SETUP message is seen on the line).

The most important change is this:

pri_event *pri_read_event(struct pri *pri)
{
        char buf[1024];
        int res;

        res = pri->read_func ? pri->read_func(pri, buf, sizeof(buf)) : 0;
        /* this check should be at some routine in q921.c */
        /* at least 4 bytes of Q921 and check buf[4] for a Q931 Network packet */
        if (res < 5 || ((int)buf[4] != 8)) {
                return NULL;
        }
        res = q931_read_event(pri, (q931_h*)(buf + 4), res - 4 - 2 /* remove 4 bytes of Q921 header and 2 of CRC */);
        if (res == -1) {
                return NULL;
        }
        if (res & Q931_RES_HAVEEVENT) {
                return &pri->ev;
        }
        return NULL;
}

/* here we trust receiving q931 only (no maintenance or anything else) */
int q931_read_event(struct pri *ctrl, q931_h *h, int len)
{
        q931_mh *mh;
        q931_call *c;
        int cref;
        int missingmand;

        //q931_dump(ctrl, h, len, 0);
        mh = (q931_mh *)(h->contents + h->crlen);
        cref = q931_cr(h);
        c = q931_getcall(ctrl, cref & 0x7FFF);
        if (!c) {
                pri_error(ctrl, "Unable to locate call %d\n", cref);
                return -1;
        }
        if (prepare_to_handle_q931_message(ctrl, mh, c)) {
                return 0;
        }
        missingmand = 0;
        if (q931_process_ies(ctrl, h, len, mh, c, &missingmand)) {
                return -1;
        }
        return post_handle_q931_message(ctrl, mh, c, missingmand, 0);
}

The rest is pretty much modifying post_handle_q931_message to skip the state machine checking when invoked with the last parameter as 0, which is only done from q931_read_event, the passive q931 processing routine I added.

Asterisk needed to be modified because it needs to correlate that a RING event received, let’s say, on span 1 channel 1 is probably related to a PROGRESS message on span 2 channel 1 (which is the other side of the connection replying to the SETUP Q931 message). Furthermore, once the PROGRESS message is received, Asterisk must launch a regular channel and then create a DAHDI conference (using DAHDI pseudo channels) to mix the audio transmitted by the CPE and NET and return this mixed audio as a single frame to any Asterisk application reading from the channel.

An interesting code snippet from the changes to Asterisk is where I create the DAHDI conference to do the audio mixing, which is later retrieved by the core via the channel driver interface function dahdi_read().

This is done during dahdi_new() routine, when creating the new Asterisk channel.

        if (i->tappingpeer) {
                struct dahdi_confinfo dahdic = { 0, };
                /* create the mixing conference
                 * some DAHDI_SETCONF interface rules to keep in mind
                 * confno == -1 means create new conference with the given confmode
                 * confno and confmode == 0 means remove the channel from its conference */
                dahdic.chan = 0; /* means use current channel (the one the fd belongs to) */
                dahdic.confno = -1; /* -1 means create new conference */
                fd = dahdi_open("/dev/dahdi/pseudo");
                if (fd < 0 || ioctl(fd, DAHDI_SETCONF, &dahdic)) {
                        ast_log(LOG_ERROR, "Unable to create dahdi conference for tapping\n");
                        i->owner = NULL;
                        return NULL;
                }
                i->tappingconf = dahdic.confno;
                i->tappingfd = fd;
                /* add both parties to the conference */
                dahdic.chan = 0;
                dahdic.confno = i->tappingconf;
                dahdic.confmode = DAHDI_CONF_CONF | DAHDI_CONF_TALKER;
                if (ioctl(i->subs[SUB_REAL].dfd, DAHDI_SETCONF, &dahdic)) {
                        ast_log(LOG_ERROR, "Unable to add chan to conference for tapping devices: %s\n", strerror(errno));
                        i->owner = NULL;
                        return NULL;
                }
                dahdic.chan = 0;
                dahdic.confno = i->tappingconf;
                dahdic.confmode = DAHDI_CONF_CONF | DAHDI_CONF_TALKER;
                if (ioctl(i->tappingpeer->subs[SUB_REAL].dfd, DAHDI_SETCONF, &dahdic)) {
                        ast_log(LOG_ERROR, "Unable to add peer chan to conference for tapping devices: %s\n", strerror(errno));
                        i->owner = NULL;
                        return NULL;
                }
                ast_log(LOG_DEBUG, "Created tapping conference %d with fd %d between dahdi chans %d and %d for ast channel %s\n",
                        i->tappingconf, fd, i->channel, i->tappingpeer->channel, i->owner->name);
                i->tappingpeer->owner = i->owner;
        }

Later we send the Asterisk channel to the dial plan.

                /* from now on, reading from the conference has the mix of both tapped channels, we can now launch the pbx thread */
                if (ast_pbx_start(c) != AST_PBX_SUCCESS) {
                        ast_log(LOG_ERROR, "Failed to launch PBX thread for passive channel %s\n", c->name);
                }

And finally when the Asterisk core requests a frame from chan_dahdi we return the mixed audio.

        /* if we have a tap peer we must read the mixed audio */
        if (p->tappingpeer) {
                /* passive channel reading */
                /* first read from the 2 involved dahdi channels just to consume their frames */
                res = read(p->subs[idx].dfd, readbuf, p->subs[idx].linear ? READ_SIZE * 2 : READ_SIZE);
                res = read(p->tappingpeer->subs[idx].dfd, readbuf, p->subs[idx].linear ? READ_SIZE * 2 : READ_SIZE);
                /* now read the mixed audio that will be returned to the core */
                res = read(p->tappingfd, readbuf, p->subs[idx].linear ? READ_SIZE * 2 : READ_SIZE);
        } else {
                /* no tapping peer, normal reading */
                res = read(p->subs[idx].dfd, readbuf, p->subs[idx].linear ? READ_SIZE * 2 : READ_SIZE);
        }

Finally, you can use regular Asterisk dial plan rules to Record() the conversation, Dial() someone interested in auditing the call, etc. Of course, any audio transmitted to this passive channel will be dropped; therefore using applications like Playback() on these channels just doesn’t make sense. You can still do it, but all the audio will be silently dropped. Also remember this is a regular channel (that just happens to ignore any transmitted media or signaling), therefore you can still retrieve the ANI, DNID, etc.

exten => s,1,Answer()
;exten => s,n,Dial(SIP/moy)
exten => s,n,Record(advanced-recording%d:wav)
exten => s,n,Hangup()
exten => _X.,1,Goto(s,1)

The configuration required is minimal. Sangoma board configuration is described here; it’s not much different from configuring a regular T1/E1 link. On the Asterisk side, you just have to specify the parameter “tappingpeerpos=next” or “tappingpeerpos=prev” in chan_dahdi.conf to indicate which is the peer tapping span for the current span. If you set “tappingpeerpos=no” (or any other value, for that matter), tapping will be disabled for that span (and it will then be a regular active span).

I have the code in 3 branches already: one branch for LibPRI and 2 branches for Asterisk, one based on trunk and the other on Asterisk 1.6.2. Keep in mind that the 1.6.2 one has a slightly different configuration at this point: the parameter to enable tapping is “passive=yes”, which does not let you specify whether the peer tapping device is the next or the previous one, and therefore assumes your tapping spans always start at an even number (0, 2, 4, etc). I will change that soon, hopefully …

Now every time a new call is detected you will receive a call that you can send wherever you want 🙂


Posted in asterisk, sangoma | 41 Comments

New Project – Sangoma Bridge

A couple of months ago I wrote a little application for Regulus Labs. The application is a simple daemon bridging Sangoma E1 devices receiving ISDN PRI calls and a TCP IP server. Everything received on the telephony side is simply bridged to the configured TCP IP server. The bridge supports PRI voice calls and V.110 data calls.

Even though the application is simple in nature, learning about V.110 to get it to work was interesting 🙂

Today I made the project public (thanks to Tzury Bar Yochay from Regulus Labs) on Google Code:

Hopefully somebody else will find it useful.

Posted in C/C++, linux | 2 Comments

Debugging information in separate files

Debugging information in Linux ELF binaries is usually stored in the binary itself. This has been really convenient for me; for example, I always compile my openr2 library with -ggdb3 -O0. I don’t care about optimizations or the increase in the size of the binary, and users can always change those flags using CFLAGS when configuring openr2. It is convenient because whenever my users got a core dump, I was able to jump right in, get a useful backtrace and examine the stack. Alternatively, they could get the stack trace themselves and send it to me without worrying about anything other than launching gdb with the right arguments.

However, when you ship non-open-source software, or you’re just concerned with the size of all the debugging information in lots of libraries, you want to separate the debugging information from the binary holding the program/library itself. In Windows, this is the default behavior you get with the well-known PDB (Program Data Base) files. For Linux, though, you need some tricks to keep the debugging information separate. This is of course what most distributions do: they include an extra package with the debugging information, so when you install a package you get just the binary code; then, if you need to debug it, you download the debugging package.

If you ever need this, you can follow the instructions in this web page to get it to work:

The way I solved it for our internal build system is just to always compile with -ggdb3 and then:

1. Create a copy of the debugging symbols in a separate binary (libfoo.so here stands for whatever binary you built):

objcopy --only-keep-debug libfoo.so libfoo.so.debug

2. Remove the debugging information from the code binary:

objcopy --strip-debug libfoo.so

3. Add a reference to the code binary so gdb knows where to look for the debugging information:

objcopy --add-gnu-debuglink=libfoo.so.debug libfoo.so

This last step simply puts a file name reference inside the ELF binary so GDB (or some other debugger) knows which file has the debugging information for this .so (or executable, if that’s what you’re building). The Red Hat web page explains more advanced techniques to make sure you don’t end up with a version mismatch between the debugging information and your library or executable.

Posted in C/C++, linux | 1 Comment

GNU autotools tip

When you create a program or library for Linux you may need the GNU autotools (automake, libtool, autoconf, etc) to detect environment settings. These tools can become a pain in the ass when you start with them (and probably later too). Something I recommend, which has worked for me, is to create a script like this:

libtoolize --force --copy
automake -f --copy --add-missing

The --force --copy options for libtoolize, and -f --copy --add-missing for automake, will help you avoid depending on symbolic links that are created by libtoolize and automake and that may not be present on the target machine where your code will be built.

I suppose there is a valid reason not to use those options, but for me, they saved a lot of hassle.

Posted in linux | 1 Comment