The Unix tools philosophy: The Big Lie or The Big Missed Opportunity?

The Unix tools philosophy is so well known and oft-repeated that it has become a mantra to many. As with so many habitual, memorized responses, like blessing someone after a sneeze, we rarely think deeply about the words we say and what they really mean. We don't notice that while we praise this approach, we almost never use it for anything serious.

The Unix Philosophy

The first reference I heard to the Unix philosophy was in the seminal 1978 Bell System Technical Journal special edition on Unix. (Appropriately, if you go to the Bell Labs (Lucent? Alcatel?) web site to subscribe to this journal, you'll find it has literally gone 404.) Again and again, this philosophy has been repeated. I've seen it most recently in Duncan Davidson's Web is a pipe blog entry. Sadly, his otherwise insightful post has an example that does not demonstrate the value of pipes, but rather a Useless Use of cat.
The Unix philosophy is described quite well on Wikipedia, attributed to Doug McIlroy:

This is the Unix philosophy:

Write programs that do one thing and do it well.

Write programs to work together.

Write programs to handle text streams, because that is a universal interface.

The Big Lie

This is the Motherhood and Apple Pie of programming. It sounds great in some idealized world, but we rarely see it in our world. Think of the Unix applications you spend the most time using every day. For me, it is something like a compiler, an editor, a web browser, a web server, and a database server. None of these are built from a large set of small tools hooked up with pipes transmitting ASCII data. Not one single application. Yes, I know about gcc and the -pipe option, but this doesn't count: gcc doesn't read standard input, and the five (or so) stages of the compiler hardly qualify as small, reusable components. Historical tidbit: in the 2BSD days, the C compiler's optimizer was implemented as an awk script that ran over the assembler input. But now that we expect our compilers to do more than peephole optimizations, we don't do this kind of thing anymore.

Now, I use the Unix command line every day, and frequently string together shell one-liners. I build histograms with
sort | uniq -c | sort -n

idiomatically. But I don't build large applications this way. Almost no one does. Does this mean that the Unix philosophy only applies to interactive shell programming? Or perhaps to that and one level higher, smallish shell scripts (but certainly nothing the size of a C compiler)? Or was the intention of the Unix designers that their machines, tiny by today's standards, would mainly run small shell scripts? I would love to ask Thompson, Ritchie, or any of the other Bell Labs folks these questions. It is ironic that we are more concerned, even obsessed, with performance on today's machines, which are thousands of times faster and cheaper than the PDP-11s Unix was born on.
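To make that histogram idiom concrete, here it is applied to a hypothetical web access log. The file name and the field number are assumptions for illustration, not part of any particular setup:

# Histogram of HTTP status codes from a (hypothetical) access log.
# access.log and field $9 are assumptions; adjust for your log format.
awk '{ print $9 }' access.log | sort | uniq -c | sort -n

Each stage is a separate process doing one small job, which is the philosophy in miniature.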

Solutions to the dilemma

One "solution" is to admit defeat, and say that the Unix Philosophy doesn't really work, at least not on big programs or anything more complicated than a shell one-liner. After all, most Windows users, even power-users, never use command pipelines. Instead of the three command pipeline above to create a histogram, a Windows power user would probably load the data up in Excel, and go from there. Maybe that isn't as elegant, but it gets the job done just as easily as the shell command. If this is your solution to this dilemma, I would ask you to at least be honest about your retreat -- if you don't build programs this way, stop proclaiming its superiority. Please. There are plenty of sucessful projects in the other camp, and they are up-front about violating all three tenets.

Another solution is to claim that the problem is that unidirectional ASCII streams aren't structured enough, and to add more structure and state and create a full-blown RPC mechanism. In terms of the Unix Philosophy quoted above, this removes the third leg of the tripod (Write programs to handle text streams, because that is a universal interface). Once this is done, the sin of premature optimization is inevitably committed, for there is the fear that bundling each component into a separate Unix process will lead to performance problems. So the word "program" is struck out of the philosophy and replaced with the word "component", with convenient vagueness. Once these changes are made, we've pretty much given up on the validity or usefulness of the original philosophy.

This is the path that SunRPC, CORBA, DCOM, and XPCOM take. While each of these has been successful within limited domains, we don't see any of them used as toolkits to quickly build many, diverse applications by non-experts. Rather, they are used to glue together predefined puzzle pieces in predefined ways. For example, since Firefox is composed of many XPCOM components, why can't I break those components apart and quickly build, say, an IM client out of them? Ho-ho, I hear you say, what about Chatzilla? Chatzilla is built from over a dozen JavaScript files by an expert JavaScript/XUL developer. I would categorize this as the exception that proves the rule: if it really were easy to build things up this way, there would be dozens of them. The first three component technologies mentioned above, especially, make it hard to connect up various components, particularly ones which haven't been planned to interoperate a priori. The Unix philosophy stresses ease of connectability.

Why not try?

A wise man once told me, "If there's one thing I've learned in six years of graduate study in Computer Science, it is the value of a Unix process." So, why not try it, and build real, large applications out of smaller Unix processes connected by pipes? There's even precedent: the applications that Dan Bernstein has written, like qmail and djbdns. These systems, designed primarily for security, are composed of many processes which communicate via ASCII streams over pipes. And guess what? qmail is secure, gives good performance, and there are many add-ons and replacements for its various components. Maybe, in this age when folks worry about how to effectively use quad-core CPUs, just maybe the graybeards knew a thing or two after all.
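For flavor, here is a rough sketch of that style applied to a toy problem: a log-alerting "application" built from three single-purpose processes connected by pipes. This is not how qmail or djbdns is actually wired; the log path, the pattern, and the syslog tag are assumptions for illustration.

# Sketch only: a tiny alerting pipeline in the small-processes-and-pipes style.
# The log path, the pattern, and the tag are assumptions for illustration.
tail -F /var/log/auth.log \
  | grep --line-buffered 'Failed password' \
  | logger -t login-alert

Each stage can be swapped out independently: replace grep with a smarter filter, or logger with a program that sends mail, without touching the other stages.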


8 Responses to The Unix tools philosophy: The Big Lie or The Big Missed Opportunity?

  1. denis bider says:

    You seem to bundle together Unix pipes with RPC, CORBA and COM as if these architectures had any major similarities. They are essentially different in that the Unix pipes are asynchronous (data-flow based), whereas RPC, CORBA and COM are synchronous (function-call based). The non-reusability of RPC-based architectures stems from their synchronous design.

    I think the Unix pipes idea was great but somewhat underdeveloped. Its major issues are that (A) it provides only one input pipe and only two output pipes, which is inflexible; and (B) there is no seamless data encoding and decoding facility so each pair of processes has to explicitly take care of this, which could be tedious and probably doesn’t help productivity.

    Improved examples of data-flow-based architecture could be Flow-Based Programming, Erlang, and Microsoft’s Singularity research project. In each case the fundamental element of a system is a component which, like a Unix process, does a single, simple thing well; but the interactions between components are more structured, more flexible and better defined.

  2. Greg Thain says:

    My main question is this:

    Is the Unix tools approach useful for anything more than shell one-liners?

    If not, that’s fine, but just be up front about that when promoting it.

    If it is more useful, and I believe so, let’s start using it more.

    You are correct in that the implementation architectures of the various RPC mechanisms are very different from pipes, but the intent, generally, is the same — to provide a component mechanism.

  3. Drew K says:

    So, why not try it, and build real, large applications out of smaller unix processes connected by pipes?

    Building large programs isn’t the only thing programmers do. For a lot of people who don’t work in the shrinkwrapped software industry, it’s probably not even the main thing they do.

    What they do is solve problems. They crunch data from new sources into existing formats, and output custom reports from existing data. Things you can do with pipes and small pieces.

    Large programs are good for encapsulating large blocks of logic to be reused by people who may not know or understand that logic. Small pieces piped together are good for expert users who keep getting better at doing new things on a short timeline.

  4. Adam Keys says:

    Great contrarian piece. I thought I was going to have to write a bazillion words as a response, but that wouldn’t get read. So I wrote you two haikus:

    Unix way is not
    panacea; but it is
    handy when you can

    Pipes are components
    For those who eschew up front
    design; just bang rocks

  5. Kevin Marvin says:

    Limericktastic reply!

    With unix command line script-fu,
    an admin will know what to do,
    but for our big old app
    with database and crap,
we all know the command line won't do.

  6. Adam Keys says:

    Well played, Mr. Marvin. Well played.

  7. Kevin Marvin says:

    Eh, I shouldn’t have used Do twice to end the rhyme. Not my best work.

  8. Greg Thain says:

    I thought I was going to have to write a bazillion words as a response, but that wouldn’t get read.

    Don’t be too sure that it wouldn’t get read.
