If I understand what it's doing, well, it's kind of a crazy interface, where the caller is told this:
* bufplain points to the output buffer; must be big
* enough to hold the decoded string (generally
* shorter than the encoded string) plus
* as many as two extra bytes used during
* the decoding process.
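For reference, the shape of the interface I'm describing is roughly the following. This is a reconstruction from the comment and the names that show up in the discussion (bufplain, nbytesdecoded), not the actual declaration; the routine name here is just a placeholder:

    /* Reconstructed prototype -- names and exact types are assumptions.
     * The caller supplies the encoded string, an output buffer, and the
     * size of that output buffer, and gets back the number of bytes
     * decoded. */
    int uudecode(char *bufcoded,           /* base64-encoded input       */
                 unsigned char *bufplain,  /* decoded output buffer      */
                 int outbufsize);          /* size of bufplain, in bytes */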
Say I do this "generally shorter than" calculation wrong and I pass in an output buffer THAT IS TOO SHORT.
Say I completely screw up. I pass in an input buffer that's one byte long and an output buffer that's one byte long, and I correctly tell the routine that the output buffer is one byte long.
The routine determines that nbytesdecoded is 3 (having found nprbytes to be 1), and then decides it's okay to overwrite my buffer with two extra bytes.
My mistake, yes, but it's obnoxious to respond with a buffer overflow, especially when I told the routine exactly how large the buffer was.
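Spelled out, and assuming the prototype sketched above, that misuse is just this; presumably the internal arithmetic is something like nbytesdecoded = ((nprbytes + 3) / 4) * 3, which is how a one-byte input turns into a claimed three bytes:

    extern int uudecode(char *bufcoded, unsigned char *bufplain, int outbufsize);

    void demo_one_byte(void)
    {
        char bufcoded[] = "Q";          /* one significant encoded byte */
        unsigned char bufplain[1];      /* one-byte output buffer       */

        /* outbufsize is honest, yet the routine writes its decoded group
         * plus up to two "extra bytes used during the decoding process"
         * into bufplain, running past its end. The return value comes
         * back as 3, not 1. */
        int nbytesdecoded = uudecode(bufcoded, bufplain, (int) sizeof(bufplain));
        (void) nbytesdecoded;
    }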
Anytime the buffer is in reality too short, you're going to get a buffer overflow, even when the routine has been told how big the buffer actually is. I know the comment says to make sure it's big enough to hold the response plus two bytes, but that's obviously a problematical requirement (witness ParseAuth).
Since the routine has the actual size of the buffer, there is no need for it to overflow the buffer. It shouldn't be the responsibility of the caller to ensure that memory isn't corrupted.
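For what it's worth, respecting the size it's already given costs almost nothing. Here's a minimal sketch of that argument -- a from-scratch illustration, not a patch to the actual routine -- that checks outbufsize before every store and returns only what it actually wrote:

    /* Map a base64 character to its 6-bit value, or -1 for anything that
     * should stop decoding ('=', NUL, whitespace, garbage). */
    static int b64val(char c)
    {
        if (c >= 'A' && c <= 'Z') return c - 'A';
        if (c >= 'a' && c <= 'z') return c - 'a' + 26;
        if (c >= '0' && c <= '9') return c - '0' + 52;
        if (c == '+') return 62;
        if (c == '/') return 63;
        return -1;
    }

    /* Decode into bufplain, never writing more than outbufsize bytes.
     * Returns the number of bytes actually stored. */
    int safe_uudecode(const char *bufcoded, unsigned char *bufplain, int outbufsize)
    {
        int nbytesdecoded = 0, nchars = 0, v;
        unsigned long group = 0;

        while ((v = b64val(*bufcoded++)) >= 0) {
            group = (group << 6) | (unsigned long) v;
            if (++nchars == 4) {                  /* 4 input chars -> 3 bytes */
                unsigned char out[3];
                out[0] = (unsigned char) (group >> 16);
                out[1] = (unsigned char) (group >> 8);
                out[2] = (unsigned char) group;
                for (int i = 0; i < 3; i++) {
                    if (nbytesdecoded >= outbufsize)
                        return nbytesdecoded;     /* buffer full: stop, don't stomp */
                    bufplain[nbytesdecoded++] = out[i];
                }
                group = 0;
                nchars = 0;
            }
        }
        if (nchars >= 2) {                        /* trailing partial group */
            group <<= 6 * (4 - nchars);
            unsigned char out[3];
            out[0] = (unsigned char) (group >> 16);
            out[1] = (unsigned char) (group >> 8);
            out[2] = (unsigned char) group;
            for (int i = 0; i < nchars - 1; i++) {
                if (nbytesdecoded >= outbufsize)
                    return nbytesdecoded;
                bufplain[nbytesdecoded++] = out[i];
            }
        }
        return nbytesdecoded;
    }

No scratch bytes, no special contract for the caller beyond passing the real size.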
Besides, the routine as written still does nothing about the incorrect return of nbytesdecoded. Say I pass in an input buffer that's 400 characters, and an output buffer that's one character. This routine will tell me it decoded 300 characters. It will then overflow my output buffer by two bytes, and it will direct me to stomp memory 300 bytes downstream with a NUL. That's crazy as a soup sandwich!
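The NUL part comes from how the return value is meant to be used: the routine doesn't terminate the output, so the natural (and, as I read it, the directed) pattern is to terminate it yourself with the returned count. A hypothetical call site, again using the reconstructed prototype:

    #include <string.h>

    extern int uudecode(char *bufcoded, unsigned char *bufplain, int outbufsize);

    void demo_400(void)
    {
        char bufcoded[401];
        unsigned char bufplain[1];          /* one-character output buffer */
        int nbytesdecoded;

        memset(bufcoded, 'A', 400);         /* 400 encoded characters...   */
        bufcoded[400] = '\0';               /* ...plus a terminator        */

        nbytesdecoded = uudecode(bufcoded, bufplain, (int) sizeof(bufplain));

        /* The routine reports roughly 300 bytes decoded even though only one
         * could ever fit, so this "terminate the output" step plants a NUL
         * about 300 bytes past the end of bufplain. */
        bufplain[nbytesdecoded] = '\0';
    }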
It's clear that the intended interface was one where it would tell you how many bytes were decoded so that you could loop and get the rest decoded. None of the callers actually use it that way.
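If looping really was the intent, the routine would also have to tell the caller how much input it consumed; otherwise you can't pick up where you left off. Something shaped like this, which is purely hypothetical and doesn't exist anywhere in the code:

    /* Hypothetical chunked interface: decode at most outbufsize bytes per
     * call, advance *bufcoded past the input actually consumed, and return
     * the number of bytes stored (0 when the input is exhausted). */
    int uudecode_chunk(const char **bufcoded, unsigned char *bufplain, int outbufsize);

    void decode_all(const char *bufcoded,
                    void (*consume)(const unsigned char *, int))
    {
        unsigned char bufplain[64];
        int n;

        while ((n = uudecode_chunk(&bufcoded, bufplain, (int) sizeof(bufplain))) > 0)
            consume(bufplain, n);           /* hand each decoded chunk downstream */
    }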
But maybe I don't understand what it's doing.