Eternal Lands Official Forums
bluap

open_web_link() broken by recent change


Recently a change was committed to CVS for open_web_link() in misc.c. For non-Windows, non-OSX builds, the system() call was replaced with a fork()/execl(). This is very broken and completely kills X on my Linux box. First off, execl() is the bare-bones exec that does not search the path, so the first parameter must be the full path to the executable; otherwise the call will fail, which is what happens for me. When it fails, because there is no error checking, we end up with two el client processes running. As they share a common heritage, things go horribly wrong; this is what kills X, I believe.

 

I first added some error checking and attempted to kill the child process if the exec failed. This did not work for some reason; maybe the exiting child el client messes with X too much. I did get things to work by using execlp(), the exec variant that does a path search; i.e. the browser launched OK. However, should it fail, I believe we could have problems again. To me, using fork() at all in this case is probably asking for trouble.

 

I realise why the change was made but don't think this is the solution. I'll put my thinking head on and see what I can suggest, unless someone else fixes this in the meantime. :(


I'll check into it. I'm not sure why this worked in my pre-commit tests -- although, I did have the full browser path specified in my config, so that might be it. Also, there's always the chance that the tests I did were done using an old binary; that's bitten me before.

 

This was to patch a critical security flaw with an easy exploit, so -- worst case -- I would say that disabling F2 is better than restoring it to its old version. If nobody else gets to it first, I'll tackle it this evening.

 

Note: the solution is not to use system() and filter out more characters; you still run the risk of either a clever shell scripter beating you to the punch, or filtering out legitimate URLs. You should never have a shell interpreting user input, hence the switch to exec*. The proper fix is to wrap the execl() call in braces and put exit(1); after it inside the braces. We should probably change to execlp() as well. execlp()/execvp() are still safe; they only use PATH to find where the binary is, they never hand the command line to a shell.

Edited by KarenRei


I noticed the code has been patched in much the same way as I tried yesterday. However, this still crashes my client horribly if the execl() fails. I've done a bit of checking and I believe the function _exit() should be used instead of exit(). It does less clean-up on exit and leaves the parent process properly intact.

 

Second, unless the parent wait()s for the termination of a successful child process, the child will become a zombie until the parent exits. This can be avoided by setting the SA_NOCLDWAIT signal flag. It affects all child processes, but that should be OK.

 

Below is the code snippet I've successfully tested. This is a demonstration only. Obviously, the sigaction() setup need only be done once at startup. Also, the error reporting should probably go to the ERROR_LOG and/or be used to provide some user feedback. I'll happily prepare a proper patch if people agree this is OK and someone else doesn't do it first.

 

 
	//if (fork() == 0) execl(browser_name, browser_name, url, NULL);
	struct sigaction act;
	memset(&act, 0, sizeof(act));
	act.sa_handler = SIG_DFL;
	act.sa_flags = SA_NOCLDWAIT;
	if (sigaction(SIGCHLD, &act, NULL) < 0)
	{
		perror("open_web_link() sigaction() failed");
	}
	else if (fork() == 0)
	{
		// the exec argument list must end with a null pointer; cast it
		// so it stays pointer-sized when passed through varargs
		execlp(browser_name, browser_name, url, (char *)NULL);
		// if we get this far, the exec call has failed
		perror("open_web_link() execlp() failed");
		_exit(EXIT_FAILURE);
	}


I realized the zombie would be there; when I tried to reduce the chance of problems, I was just trying to make that case less fatal. I didn't think the difference between exit() and _exit() would matter that much in this case.


I realized the zombie would be there; when I tried to reduce the chance of problems, I was just trying to make that case less fatal. I didn't think the difference between exit() and _exit() would matter that much in this case.

When I tried to fix this yesterday with a call to exit(), the client crash was every bit as bad as with no exit() at all; I needed to reboot. With the _exit() call, the error condition is handled fine, no problems at all. I can only assume that the less thorough clean-up leaves the parent process intact, whereas the exit() call does some harm. The text I consulted recommended using _exit() in all cases of failed exec() calls. The zombie is not as important, but the sigaction() code above is a way to fix it. Hope that helps. :D


Then _exit() it is for now. But I'd rather not do anything with signals just yet. My patching isn't trying for a full fix; I'm just aiming for less fatal.


My testing confirms that the new code pulls up URLs correctly. Of course, my original testing didn't reveal a problem, so take that with a grain of salt. :D Still, the code looks good.


My testing confirms that the new code pulls up URLs correctly. Of course, my original testing didn't reveal a problem, so take that with a grain of salt. :( Still, the code looks good.

The new code is also more careful about the URL capturing, so that even if we had to go back to system(), I don't think it could cause injection very easily. There is a risk of some really strange URLs not working now, but better safe than sorry.


if we want it to be safe from bad characters, we could replace

	if(j > sizeof(current_url)-2) {
		break; //URL too long, perhaps an exploit attempt
	}
	cur_char = source_string[i];
	if(!cur_char || cur_char == ' ' || cur_char == '\n' || cur_char == '<'
		|| cur_char == '>' || cur_char == '|' || cur_char == '"'
		|| cur_char == ']') {
		break;
	}
	current_url[j] = cur_char;
	j++;

with

	cur_char = source_string[i];
	if(cur_char <= 32 || cur_char == 127 || j >= sizeof(current_url)-4){
		break;
	} else if(cur_char == '#' || cur_char == '%' || cur_char == '&'
		|| cur_char == '+' || (cur_char >= '-' && cur_char <= ':')
		|| cur_char == '=' || cur_char == '?'
		|| (cur_char >= 'A' && cur_char <= 'Z') || cur_char == '_'
		|| (cur_char >= 'a' && cur_char <= 'z')){
		current_url[j] = cur_char;
		++j;
	} else {
		// percent-encode with real hex digits; note the '-' to ':'
		// range also whitelists '.' and ':', without which every
		// hostname and scheme would get mangled
		current_url[j] = '%';
		current_url[1+j] = "0123456789abcdef"[cur_char/16];
		current_url[2+j] = "0123456789abcdef"[cur_char%16];
		j += 3;
	}

and change cur_char from a 'char' to an 'unsigned char'... it may mean some URLs aren't caught, since encoding would have taken them over 160 chars, but encoding any character that isn't known-safe means no injections (white-listing is a lot more resistant to future changes or attacks than black-listing, i.e. encoding or break-ing only on what we think is dangerous today)


We are safe from bad characters by using an exec* function, so it's not that important. I like the idea of only breaking on a few characters and escaping the rest instead of blocking them, though -- perhaps after feature freeze this would be good. If you do it, I wouldn't have it escape parens, though -- half of wikipedia's pages use them.

 

Heck, if you wanted to make it smarter, you could have it check whether there are quotes around the URL, and if there are, only break when you reach the closing quote. That way, funky URLs with spaces won't require the person posting them to write %20 by hand. Of course, none of this is essential -- just niceties.

Edited by KarenRei


and? :)

Escaping chars is not unusual, and any web server that can't handle them isn't worth much... I just tested, and wikipedia loaded the page fine using a %28 instead of a ( :P


Oh, it'll certainly load fine -- I didn't mean to imply otherwise. It was just a note for the clarity of the URL. I mean, technically, you could escape almost any character; it'd just make for ugly URLs :)


Then _exit() it is for now. But I'd rather not do anything with signals just yet. My patching isn't trying for a full fix; I'm just aiming for less fatal.

You probably already realise this, but a simple alternative to messing with signals would be a non-blocking waitpid() placed in the outer rim of the main loop of the client. You'd have to use the "wait for any child process" option, but there's not a lot else you can do, as it's possible to launch multiple processes by configuring your browser option or if the execlp() fails.

waitpid(-1, NULL, WNOHANG);

