Closing files after fork

Hal Murray halmurray at
Thu Aug 26 10:09:48 UTC 2021


> It isn't hard to put your question into a search and get useful answers as to
> why this is a good practise (and also why it's crufty if you have to support
> more than just Linux or any other single system). 

I poked around a bit and didn't find anything useful.

I found a web page that discussed closing everything before doing an exec.  
That's not our case.

A comment had a URL for how to do it, but no mention of why.  Yes, the 
portable code is messy.

> TL;DR; A forked process inherits all open files from the parent and that
> implies access to the resource behind the descriptor with the rights (or in
> the context) of the parent process.  To properly restrict the new process,
> you need to close anything that the child would not have access to in the new
> context and/or the child doesn't actually need.  A daemon also needs to close
> STDIN, STDOUT and STDERR (fd 0, 1, 2).  But a daemon dropping privileges
> usually also depends on some resources that only the parent has access to, so
> simply closing all fd isn't going to work. 

I think I understand the idea, but I'm missing something.

Where are the files that should be closed coming from?

Did somebody start ntpd with extra files opened by root?  If so, we have other, more interesting problems.

Are we trying to double-check in case we forgot to close a file?  Then the close-everything code should run as late as possible rather than early in startup.

The close-everything code either has to skip the files it needs or reopen them afterward.  That requires keeping track of which files it needs, which isn't something most programmers do.  In our case, that tracking was buggy.

These are my opinions.  I hate spam.
