Forum OpenACS Q&A: Anyone using AOLserver < 3.4 with nsperm module?

Maybe you already know this, but just in case, here is what I found in the AOLserver 3.4 change log (http://aolserver.com/archive/server/ChangeLog.txt):
* nsd/serv.c: [ #232139 ] Buffer overflow in ParseAuth(). ParseAuth() uses a 256-byte buffer for decoding authentication data. Unfortunately, Ns_HtuuDecode() does not do an adequate job in preventing data overflow due to the way it tries to loop unroll 4-bytes per iteration. To reproduce: Using IE 5.xxx, navigate to a page that challenges (ie 401 return). Fill the user/pwd fields with as much data as possible. Hit return, and watch AOLserver die due to stack corruption (on Linux, at least). It is preferable to allocate the storage dynamically. Thanks to Adam Zell for the bug report and patch!!
If someone is using nsperm to block access to certain directories in their OpenACS installation (I used to do that for /doc), then maybe it is time to consider other options. By entering long (>100 character) strings in the user and password fields of the access control window, I managed to kill AOLserver. Tested on Linux 2.2.x with a bunch of browsers (Netscape, Opera, Galeon). Of course AOLserver would start again (thanks to inittab), but by hitting just 'reload' in the browser, I can kill it again. So my advice is: either do not use nsperm, or upgrade your AOLserver.
This bug opens the hole for a potential remote buffer overflow exploit.  It looks like it could be exploited regardless of whether or not you are using nsperm.  If the connection headers contain an "Authorization" header, the ParseAuth routine is run, opening up your server to a remote buffer overflow.
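For scale, a back-of-the-envelope check. `decoded_size` is my name for the arithmetic; it mirrors the 3-decoded-bytes-per-4-encoded-chars rounding that the decoder uses, and the 256 comes from the ChangeLog entry above:

```c
/* Worst-case decoded size of base64-style input: every 4 encoded
 * chars become up to 3 decoded bytes, rounded up per quad. */
static int decoded_size(int inlen) {
    return ((inlen + 3) / 4) * 3;
}
```

By this arithmetic, decoded_size(340) is 255 but decoded_size(341) is 258, so on the order of 340 characters of authentication data is already enough to overrun a 256-byte buffer -- well within what a browser's login dialog will accept, which squares with the report above of killing the server from the access control window.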

Anybody running version 3.3 or less should consider upgrading because of the security risk.

Most of us are using AOLserver 3.2+AD12 I think. This is especially necessary for OpenACS 4. What can we do?
You could look at what was done for AOLserver 3.4 and create a patch for AOLserver 3.2+ad12.  You might also try contacting Rob Mayoff.  He's still maintaining the aD releases, and he might have already fixed the problem.
5: Here's the patch (response to 1)
Posted by Marcin Bajer on
http://dreakmore.tigana.pl/aolserver/patches/serv.c.diff

I generated it by doing a diff on the AOLserver CVS tree on SourceForge. Tested against AOLserver 3.2+ad12 with a positive outcome. The author of the original bug report and patch is Adam Zell.

It looks like there is a subtle bug in the changed code:

--- 951,967 ----
              while (*q != '\0' ...

so are you saying that this

!             decoded[n] = '\0';

writes one byte past the end of the allocated buffer?
Don't forget that n is being reset by the call to Ns_HtuuDecode;
this might make it safe or it might be a worse problem, depends
on what that call is returning.
Janine is right. It looks like Ns_HtuuDecode returns the number of bytes that were actually decoded.
What about changing
!             decoded = ns_malloc(n);
to
!             decoded = ns_malloc(n+1);
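For what it's worth, the off-by-one being discussed: if n can come back as large as the allocation, the terminating NUL lands one byte past the end, so the allocation needs n + 1. A toy sketch (illustrative, not the actual patched source; `toy_decode` is a stand-in for Ns_HtuuDecode, since only the contract matters here):

```c
#include <stdlib.h>

/* Toy stand-in for Ns_HtuuDecode: "decodes" by copying up to max
 * bytes and returning how many it produced. */
static size_t toy_decode(const char *in, char *out, size_t max) {
    size_t i = 0;
    while (in[i] != '\0' && i < max) {
        out[i] = in[i];
        i++;
    }
    return i;
}

/* If the decoder may report up to n bytes, then decoded[n] = '\0'
 * touches byte n, so n + 1 bytes must be allocated. */
static char *decode_and_terminate(const char *src, size_t n) {
    char *decoded = malloc(n + 1);       /* +1 for the terminator */
    if (decoded == NULL) {
        return NULL;                     /* don't assume malloc succeeds */
    }
    n = toy_decode(src, decoded, n);     /* n is reset to bytes decoded */
    decoded[n] = '\0';                   /* in bounds only thanks to the +1 */
    return decoded;
}
```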
Argh, all these answers would just be wrong if the wrong thing had not already been done within AOLserver. As the ChangeLog states,
Ns_HtuuDecode does not do an adequate job of preventing data overflow.
The fix isn't in serv.c, the fix is in htuu.c.

If you make this fix in serv.c, you will still leave NsTclHTUUDecodeCmd (ns_uudecode) broken, which would lead to the same buffer overflow. Oh. Except that someone already put the kluge (overmallocing the buffer) in there. Late-breaking news: it appears the suggested fix may still leave AOLserver vulnerable, though no one has reported an actual exploit yet.

Still, what do you do about the next poor slob to call Ns_HtuuDecode?

My analysis shows two buffer overflows occurring due to bugs within Ns_HtuuDecode. One is an overflow of the output buffer by one or two bytes that may occur within Ns_HtuuDecode. The other is a potential overflow of the output buffer by an arbitrary number of bytes, caused when Ns_HtuuDecode returns the wrong number of bytes decoded.

The suggested fix (mallocing the buffer on the fly) may still allow an AOLserver crash: presumably, your wily h4x0r can just fill that input buffer with a gig or two more data than your AOLserver process can malloc, and the server will still die.

In fact, Ns_HtuuDecode looks as though it is written to decode a fixed amount of data at a time, and users of Ns_HtuuDecode are supposed to loop on the return value, decoding all the data in their buffer by repeatedly calling Ns_HtuuDecode with just a bit at a time and then copying it out.
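That inferred contract might look something like this (a toy sketch; `chunk_decode` is a stand-in that merely copies, since only the loop-on-the-return-value shape is the point, not real uudecoding):

```c
#include <stddef.h>
#include <string.h>

/* Stand-in chunk decoder with the inferred contract: consume at most
 * four input chars, write at most outsize bytes, return bytes
 * produced. */
static size_t chunk_decode(const char *in, size_t inlen,
                           char *out, size_t outsize) {
    size_t n = inlen < 4 ? inlen : 4;
    if (n > outsize) {
        n = outsize;
    }
    memcpy(out, in, n);
    return n;
}

/* The calling pattern the text above infers: a small fixed buffer, a
 * loop on the return value, and a copy-out per chunk -- so no caller
 * ever needs a "big enough" output buffer. */
static size_t decode_all(const char *in, char *dst, size_t dstsize) {
    char chunk[4];
    size_t total = 0;
    size_t inlen = strlen(in);
    while (inlen > 0 && total < dstsize) {
        size_t step = inlen < 4 ? inlen : 4;
        size_t got = chunk_decode(in, inlen, chunk, sizeof chunk);
        if (got > dstsize - total) {
            got = dstsize - total;       /* never overrun the destination */
        }
        memcpy(dst + total, chunk, got);
        total += got;
        in += step;
        inlen -= step;
    }
    return total;
}
```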

The two users of this within AOLserver 3.2 don't do that, though.

int
Ns_HtuuDecode(char * bufcoded, unsigned char * bufplain, int outbufsize)
{
 . . .

    int             nbytesdecoded, j;
    register char  *bufin = bufcoded;
    register unsigned char *bufout = bufplain;
    register int    nprbytes;

 . . .

    /*
     * Figure out how many characters are in the input buffer. If this would
     * decode into more bytes than would fit into the output buffer, adjust
     * the number of input bytes downwards.
     */
    bufin = bufcoded;
    while (pr2six[(int) *(bufin++)] <= MAXVAL);
    nprbytes = bufin - bufcoded - 1;
This looks odd but is correct: the number of bytes in the input buffer is bufin - bufcoded - 1, as the while has run one time too many.
    nbytesdecoded = ((nprbytes + 3) / 4) * 3;
I am not sure what nbytesdecoded is at this point. In the end, it is supposed to be the number of bytes actually decoded. Does it represent that here, or is it just an interim value?

Whatever; below is the check and correction that are supposed to make sure we only decode the right number of characters. The check itself is fine; it is the correction (or the while loop below it) that is wrong.

    if (nbytesdecoded > outbufsize) {
        nprbytes = (outbufsize * 4) / 3;
    }
    bufin = bufcoded;

    while (nprbytes > 0) {
        *(bufout++) = (unsigned char) (DEC(*bufin) << 2 | DEC(bufin[1]) >> 4);
        *(bufout++) = (unsigned char) (DEC(bufin[1]) << 4 |
				       DEC(bufin[2]) >> 2);
        *(bufout++) = (unsigned char) (DEC(bufin[2]) << 6 | DEC(bufin[3]));
        bufin += 4;
        nprbytes -= 4;
    }
So the loop runs through ceil(int(4/3*nprbytes)/4) times. Each time it reads 4 input bytes and creates 3 output bytes. Because of the way the loop is constructed, this appears to always overflow the buffer unless nprbytes is a multiple of 3.
n (nprbytes)   int(4n/3)/4   bytes written   bytes overflowed
     0             0              0                 0
     1             0.25           3                 2
     2             0.5            3                 1
     3             1              3                 0
     4             1.25           6                 2
     5             1.5            6                 1
     6             2              6                 0
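Re-deriving the table, assuming its n is the output-buffer size (outbufsize), so that nprbytes has already been clamped to (n * 4) / 3 by the correction quoted earlier (`bytes_written` is my name, the arithmetic is straight from the quoted code):

```c
/* How many bytes the decode loop emits for an output buffer of n bytes. */
static int bytes_written(int n) {
    int nprbytes = (n * 4) / 3;           /* the clamp from the "correction" */
    int iterations = (nprbytes + 3) / 4;  /* the loop runs ceil(nprbytes/4) times */
    return iterations * 3;                /* three output bytes per pass */
}
```

bytes_written(1) is 3, bytes_written(4) is 6, and the excess bytes_written(n) - n vanishes exactly when n is a multiple of 3, matching the table.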
It looks as though you could change the comparison to 2 and get the right behavior. From
    while (nprbytes > 0) {
To
    while (nprbytes > 2) {
But I don't think that's enough, because regardless of how many chars you actually decoded, the routine returns the number of bytes that would have been decoded if all the woodchucks had decoded all the bytes.
    if (nprbytes & 03) {
        if (pr2six[(int) bufin[-2]] > MAXVAL) {
            nbytesdecoded -= 2;
        } else {
            nbytesdecoded -= 1;
        }
    }
    return (nbytesdecoded);
}
If I didn't think I had pneumonia, I would probably try to fix this. As it is, I'm a bit too woozy to figure out the exact right thing to do. But, the fix is here, and not in serv.c.

If this analysis is wrong, I'd blame it on the cold medicine, but that's probably not true; more likely I just fucked up the analysis. Happens.

I don't see the overflow.  The output buffer is allocated based on the input buffer size + 3.  In your example, you show an input of 1 byte generating three output bytes with an overflow of two bytes.  You're overlooking the fact that the output buffer was allocated with an extra three bytes that accounts for the extra two bytes generated by the decoding process.  There is also one byte left over to make room for a null terminating character.
If I understand what it's doing, well it's kind of a crazy interface, where the caller is directed to:
 *          bufplain    points to the output buffer; must be big
 *                      enough to hold the decoded string (generally
 *                      shorter than the encoded string) plus
 *                      as many as two extra bytes used during
 *                      the decoding process.
Say I do this "generally shorter than" calculation wrong and I pass in an output buffer THAT IS TOO SHORT.

Say I completely screw up. I pass in an input buffer that's one byte long and an output buffer that's one byte long and correctly tell the routine that the buffer is one byte long.

The routine determines that nbytesdecoded is 3, and finds nprbytes to be 1, and then determines it's okay to overwrite my buffer with two extra bytes.

My mistake yes, but it's obnoxious to respond with a buffer overflow, especially when I told the routine exactly how large the buffer was.

Anytime the buffer is in reality too short, you're going to get a buffer overflow, even when the routine has been told how big the buffer actually is. I know the comment says to make sure it's big enough to hold the response plus two bytes, but that's obviously a problematical requirement (witness ParseAuth).

Since the routine has the actual size of the buffer, there is no need for the routine to overflow it. It shouldn't be the responsibility of the caller to ensure that memory isn't corrupted.
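A hypothetical bounded decoder along those lines: it is told outbufsize, so it checks it before every write and returns the number of bytes it actually stored. Not AOLserver code; names are made up, and a plain base64 alphabet is assumed (HTUU encoding is essentially base64):

```c
#include <string.h>

/* Map one base64 character to its 6-bit value; -1 for '=', NUL, junk. */
static int b64val(char c) {
    if (c >= 'A' && c <= 'Z') return c - 'A';
    if (c >= 'a' && c <= 'z') return c - 'a' + 26;
    if (c >= '0' && c <= '9') return c - '0' + 52;
    if (c == '+') return 62;
    if (c == '/') return 63;
    return -1;
}

static int bounded_decode(const char *in, unsigned char *out, int outbufsize) {
    int written = 0;
    while (strlen(in) >= 4 && b64val(in[0]) >= 0 && b64val(in[1]) >= 0) {
        int v2 = b64val(in[2]);
        int v3 = b64val(in[3]);
        unsigned char triple[3];
        int n = (v2 < 0) ? 1 : (v3 < 0) ? 2 : 3;  /* bytes this quad yields */
        int i;

        triple[0] = (unsigned char)(b64val(in[0]) << 2 | b64val(in[1]) >> 4);
        triple[1] = (unsigned char)(b64val(in[1]) << 4 | (v2 < 0 ? 0 : v2 >> 2));
        triple[2] = (unsigned char)((v2 < 0 ? 0 : v2 << 6) | (v3 < 0 ? 0 : v3));
        for (i = 0; i < n; i++) {
            if (written >= outbufsize) {
                return written;      /* buffer full: stop and tell the truth */
            }
            out[written++] = triple[i];
        }
        in += 4;
    }
    return written;                  /* bytes actually stored, never more */
}
```

With a one-byte output buffer, this returns 1 and writes 1 byte, no matter how long the input is.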

Besides, it still does nothing about the incorrect return of nbytesdecoded. Say I pass in an input buffer that's 400 characters, and an output buffer that's one character. This routine will tell me it decoded 300 characters. It will then overflow my output buffer by two bytes, and it will direct me to stomp 300 bytes downstream with a NULL. That's crazy as a soup sandwich!

It's clear that the intended interface was one where it would tell you how many bytes were decoded so that you could loop and get the rest decoded. None of the callers have implemented that interface.

But maybe I don't understand what it's doing.

Well ok.  I agree that the interface for Ns_HtuuDecode is not the best, but the problem that you're talking about doesn't exist in the patched version of ParseAuth.  It's always going to allocate enough bytes for the decoding.  I also think you're correct in stating that the Ns_HtuuDecode routine was meant to be called in a loop that processed the input buffer a chunk at a time.  This code was probably cut and pasted from something else, and the prospect of a buffer overflow was overlooked.

As you mentioned, the malloc logic should be modified to limit the size of a malloc, so that some wily hax0r doesn't crash your server by sending you a large authentication header.
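Something as simple as a length cap before the malloc would do. A sketch (the 4096 limit and the function name are illustrative, not from AOLserver):

```c
#include <stdlib.h>
#include <string.h>

#define MAX_AUTH_B64 4096   /* illustrative cap, tune to taste */

/* Refuse oversized authentication data before allocating anything;
 * otherwise allocate the worst-case decoded size plus a NUL. */
static char *alloc_for_decode(const char *encoded) {
    size_t len = strlen(encoded);
    if (len > MAX_AUTH_B64) {
        return NULL;                 /* treat it as a bad request */
    }
    /* worst case: 3 decoded bytes per 4 encoded chars, plus the NUL */
    return malloc(((len + 3) / 4) * 3 + 1);
}
```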

The ParseAuth and Ns_HtuuDecode routines could definitely use a little work.

but the problem that you're talking about doesn't exist in the patched version of ParseAuth
I agree, that's why I started by saying:
Argh, all these answers would just be wrong if the wrong thing had not already been done within AOLserver. As the ChangeLog states, Ns_HtuuDecode does not do an adequate job of preventing data overflow. The fix isn't in serv.c, the fix is in htuu.c. If you make this fix in serv.c, you will still leave NsTclHTUUDecodeCmd (ns_uudecode) broken, which would lead to the same buffer overflow. Oh. Except that someone already put the kluge (overmalloc the buffer) in there
The point being this is still just the wrong thing to do. I don't know when the next module is going to be written to call Ns_HtuuDecode, but that module is going to have to be written correctly, to a problematical interface, or risk crashing AOLserver and/or subjecting it to a security hole. I would prefer an architecture in which incorrectly written modules may not work correctly but are otherwise benign.
And it won't always allocate enough bytes for the decoding.  Granted, when malloc fails you're already in a world of hurt, but this is just yet another routine that makes the gross assumption that malloc will never fail.

What's criminal about that is that if Ns_HtuuDecode were written correctly, then the original version of ParseAuth would behave much better in low-memory situations than the new version of ParseAuth, and it would be faster too at almost all times, using a 256-byte stack buffer instead of fragmenting memory and going through malloc, etc.  So one crapulent (actually a word) routine has begotten another.

How about a very simple suggestion to cure the problem: comment out the body of ParseAuth:
static void
ParseAuth(Conn *connPtr, char *auth)
{
#if 0
......
#endif
}

This of course disables any authorisation.

If you don't use authorisation and have not planned an upgrade to a later release of AOLserver then this gives a quick fix to the security problem.

This still leaves the problem in the Tcl ns_uudecode function.

Andy