Basics of the Unix Philosophy

The Unix philosophy originated with Ken Thompson's early meditations on how to design a small but capable operating system with a clean service interface. Much of it grew out of what the Unix culture learned from Thompson's design, and it absorbed lessons from many other sources along the way.

The Unix philosophy is not a formal design method. It was not handed down from theoretical computer science as a recipe for producing academically perfect software. Nor is it any kind of magic that can rapidly extract innovative, reliable software from unmotivated, badly managed, and underpaid programmers.

The Unix philosophy (like the folk traditions of other engineering disciplines) is bottom-up, not top-down. It is pragmatic and grounded in experience. You will not find it in official methods and standards; it is closer to tacit knowledge, the working common sense transmitted through practice in the Unix culture. It promotes a sense of proportion and skepticism, and expresses both in a humorous (and often subversive) style.

Doug McIlroy, the inventor of the Unix pipe and one of the founders of the Unix tradition, put it this way in 1978:

1. Make each program do one thing well. To do a new job, build a new program rather than complicate an old one by adding new features.
2. Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input.
3. Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
4. Use tools to lighten a programming task, even when you have to detour to build the tools and expect to throw some of them away after you've finished.

He later summarized it this way (quoted in A Quarter Century of Unix):

This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.


Rob Pike, who later became one of the great masters of C, offers a slightly different angle in his notes on C programming:

Rule 1. You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second-guess and put in a speed hack until you've proven that's where the bottleneck is.

Rule 2. Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest.

Rule 3. Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy.

Rule 4. Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures.

Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.[9]

Rule 6. There is no Rule 6.


Ken Thompson, who designed and implemented the first Unix, reinforced Pike's Rule 4 with a gnomic maxim:

When in doubt, use brute force.

More of the Unix philosophy was implied not by what these elders said but by what they did, by how Unix itself was built and configured. Looking at the whole, we can abstract the following ideas:

  • Rule of Modularity: Write simple parts connected by clean interfaces.
  • Rule of Clarity: Clarity is better than cleverness.
  • Rule of Composition: Design programs to be connected with other programs.
  • Rule of Separation: Separate policy from mechanism; separate interfaces from engines.
  • Rule of Simplicity: Design for simplicity; add complexity only where you must.
  • Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else will do.
  • Rule of Transparency: Design for visibility to make inspection and debugging easier.
  • Rule of Robustness: Robustness is the child of transparency and simplicity.
  • Rule of Representation: Fold knowledge into data, so program logic can be stupid and robust.
  • Rule of Least Surprise: In interface design, always do the least surprising thing.
  • Rule of Silence: When a program has nothing surprising to say, it should say nothing.
  • Rule of Repair: When you must fail, fail noisily and as soon as possible.
  • Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.
  • Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.
  • Rule of Optimization: Prototype before polishing. Get it working before you optimize it.
  • Rule of Diversity: Distrust all claims for "one true way".
  • Rule of Extensibility: Design for the future, because it will be here sooner than you think.

If you're new to Unix, these principles are worth some meditation. Software-engineering texts recommend most of them, but most environments lack the right tools and traditions to turn them into practice, so most programmers can't apply them with any consistency. They come to accept blunt tools, bad designs, overwork, and bloated code as normal, and then wonder what Unix fans are fussing about.


Rule of Modularity: Write simple parts connected by clean interfaces.

As Brian Kernighan once observed, "Controlling complexity is the essence of computer programming" [Kernighan-Plauger]. Debugging dominates development time, and getting a working system out the door is usually less the result of brilliant design than of managing not to trip over your own feet too many times.

Assemblers, compilers, flowcharting, procedural programming, structured programming, "artificial intelligence", fourth-generation languages, object orientation, and software-development methodologies without number have all been touted and sold as a cure for this problem. All have failed as panaceas, if only because they "succeeded" by escalating the normal level of program complexity to the point where human brains could barely cope. As Fred Brooks observed [Brooks], there is no silver bullet.

The only way to write complex software that won't fall on its face is to hold its global complexity down: build it out of simple parts connected by well-defined interfaces, so that most problems are local and you can have some hope of upgrading a part without breaking the whole.


Rule of Clarity: Clarity is better than cleverness.

Because maintenance is so important and so expensive, write programs as if the most important communication they do is not to the computer that executes them, but to the human beings who will read and maintain the source code in the future (including yourself).

In the Unix tradition, the implications of this advice go beyond merely commenting your code. Good Unix practice also means choosing algorithms and implementations for future maintainability. Buying a small performance gain with a large increase in the complexity and obscurity of your technique is a bad trade, not only because complex code is more likely to harbor bugs, but also because it will be harder for future maintainers to read.

Code that is graceful and clear, on the other hand, is less likely to break, and more likely to be instantly understood and modified by the next person who has to work on it. This matters, especially when that next person might be yourself some years down the road.

Never struggle to decipher subtle code three times. Once might be a one-shot fluke, but if you find yourself having to figure it out a second time (because the first was too long ago and you've forgotten the details), it is time to comment the code so that the third time will be relatively painless.    -- Henry Spencer


Rule of Composition: Design programs to be connected with other programs.

It's hard to avoid programming overcomplicated monoliths if none of your programs can talk to each other.

The Unix tradition encourages writing programs that read and write simple, textual, stream-oriented formats. Under classic Unix, as many programs as possible are written as simple filters, which take a simple text stream on input and produce a simple text stream on output.
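As a concrete illustration, here is a minimal sketch of such a filter in C; the transformation chosen (uppercasing) is only a stand-in for real processing. Because it speaks plain text on both ends, it can sit anywhere in a pipeline, for example ls | ./upper | sort.

    #include <stdio.h>
    #include <ctype.h>

    /* A minimal Unix filter: read a text stream on stdin, write a
     * transformed text stream on stdout. The transformation here
     * (uppercasing) is just a placeholder for real work. */
    int main(void)
    {
        int c;
        while ((c = getchar()) != EOF)
            putchar(toupper(c));
        return 0;
    }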

Despite popular mythology, this practice is favored not because Unix programmers hate graphical user interfaces. It's because if your programs don't accept and emit simple text streams, it is much more difficult to hook them together.

Text streams are to Unix tools what messages are to objects in an object-oriented setting. The simplicity of the text-stream interface enforces the encapsulation of the tools. More elaborate forms of interprocess communication, such as remote procedure calls (RPC), tend to involve programs with each other's internals too much.

To make programs composable, make them independent. A program on one end of a text stream should care as little as possible about the program on the other end, and either end should be replaceable with a completely different implementation without disturbing the other. GUIs can be a very good thing, and complex binary data formats are sometimes unavoidable. But before writing a GUI, it is wise to ask whether your program can be split in two: one part that does the complex computation, and another that accepts a simple command stream. And before inventing a clever binary data format, it is worth checking whether a plain text format will do, accepting a little parsing overhead in return for being able to inspect and hack the data stream with general-purpose tools.
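A minimal sketch of that split (the command set, add/print/quit, is invented for illustration): the engine below knows nothing about any particular interface; it just reads one-line commands from a text stream, so a GUI, a shell script, or a test harness can all drive it equally well.

    #include <stdio.h>
    #include <string.h>

    /* An engine driven by a simple command stream, one command per
     * line. Any front end can drive it by writing these lines to its
     * standard input. The command names are invented for this sketch. */
    int main(void)
    {
        char line[256];
        double total = 0.0, x;

        while (fgets(line, sizeof line, stdin) != NULL) {
            if (sscanf(line, "add %lf", &x) == 1)
                total += x;
            else if (strncmp(line, "print", 5) == 0)
                printf("%g\n", total);
            else if (strncmp(line, "quit", 4) == 0)
                break;
            else
                fprintf(stderr, "unknown command: %s", line);
        }
        return 0;
    }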

When a serialized, protocol-like interface is not natural for the application, proper Unix design is to at least organize as many of the application's primitives as possible into a library with a well-defined API. The application can then be invoked by linkage, or multiple interfaces can be glued onto it for different tasks.



Rule of Separation: Separate policy from mechanism; separate interfaces from engines.


In discussing what Unix gets wrong, we observed that the designers of X chose to implement "mechanism, not policy": to make X a generic graphics engine and leave decisions about user-interface style to toolkits and other levels of the system. The justification is that policy and mechanism tend to change on different timescales, with policy changing much faster than mechanism. Fashions in GUI look-and-feel come and go, but raster operations and compositing are forever.

Hardwiring policy and mechanism together thus has two bad effects: it makes policy rigid and harder to change in response to user requirements, and it means that attempts to change policy tend to destabilize the mechanism. Separating the two, on the other hand, lets us experiment with new policy without breaking the mechanism, and makes it much easier to write good tests for the mechanism (policy, because it ages so quickly, often does not justify the investment).

This way of thinking applies well beyond GUIs. In general, it implies that we should always look for ways to separate interfaces from engines.

One way to achieve that separation is, for example, to write your application as a library of C service routines driven by an embedded scripting language, with the application's flow of control written in the scripting language rather than in C. A classic example of this pattern is the Emacs editor, which uses an embedded Lisp interpreter to control editing primitives written in C.

Another way is to separate the application into cooperating front-end and back-end processes that communicate over sockets, a design we discuss in Chapter 5 and Chapter 7. The front end implements policy; the back end, mechanism. The global complexity of the pair is often far lower than that of a single process implementing everything, which reduces vulnerability to bugs and lowers life-cycle costs.


Rule of Simplicity: Design for simplicity; add complexity only where you must.




Many pressures tend to make programs more complicated (and therefore more expensive and buggy). One such pressure is technical machismo. Programmers are bright people who are (often justly) proud of their ability to handle complexity and juggle abstractions. Often they compete with their peers to see who can build the most intricate and beautiful complexities. Just as often, their ability to design outstrips their ability to implement and debug, and the result is expensive failure.


The notion of “intricate and beautiful complexities” is almost an oxymoron. Unix programmers vie with each other for “simple and beautiful” honors — a point that's implicit in these rules, but is well worth making overt.

-- Doug McIlroy

Even more often (at least in the commercial software world) excessive complexity comes from project requirements that are based on the marketing fad of the month rather than the reality of what customers want or software can actually deliver. Many a good design has been smothered under marketing's pile of “checklist features” — features that, often, no customer will ever use. And a vicious circle operates; the competition thinks it has to compete with chrome by adding more chrome. Pretty soon, massive bloat is the industry standard and everyone is using huge, buggy programs not even their developers can love.



Either way, everybody loses in the end.


The only way to avoid these traps is to encourage a software culture that knows that small is beautiful, that actively resists bloat and complexity: an engineering tradition that puts a high value on simple solutions, that looks for ways to break program systems up into small cooperating pieces, and that reflexively fights attempts to gussy up programs with a lot of chrome (or, even worse, to design programs around the chrome).


That would be a culture a lot like Unix's.


Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else will do.


‘Big’ here has the sense both of large in volume of code and of internal complexity. Allowing programs to get large hurts maintainability. Because people are reluctant to throw away the visible product of lots of work, large programs invite overinvestment in approaches that are failed or suboptimal.


(We'll examine the issue of the right size of software in more detail in Chapter 13.)


Rule of Transparency: Design for visibility to make inspection and debugging easier.


Because debugging often occupies three-quarters or more of development time, work done early to ease debugging can be a very good investment. A particularly effective way to ease debugging is to design for transparency and discoverability.


A software system is transparent when you can look at it and immediately understand what it is doing and how. It is discoverable when it has facilities for monitoring and display of internal state so that your program not only functions well but can be seen to function well.


Designing for these qualities will have implications throughout a project. At minimum, it implies that debugging options should not be minimal afterthoughts. Rather, they should be designed in from the beginning — from the point of view that the program should be able to both demonstrate its own correctness and communicate to future developers the original developer's mental model of the problem it solves.


For a program to demonstrate its own correctness, it needs to be using input and output formats sufficiently simple so that the proper relationship between valid input and correct output is easy to check.


The objective of designing for transparency and discoverability should also encourage simple interfaces that can easily be manipulated by other programs — in particular, test and monitoring harnesses and debugging scripts.


Rule of Robustness: Robustness is the child of transparency and simplicity.


Software is said to be robust when it performs well under unexpected conditions which stress the designer's assumptions, as well as under normal conditions.


Most software is fragile and buggy because most programs are too complicated for a human brain to understand all at once. When you can't reason correctly about the guts of a program, you can't be sure it's correct, and you can't fix it if it's broken.


It follows that the way to make robust programs is to make their internals easy for human beings to reason about. There are two main ways to do that: transparency and simplicity.


 

For robustness, designing in tolerance for unusual or extremely bulky inputs is also important. Bearing in mind the Rule of Composition helps; input generated by other programs is notorious for stress-testing software (e.g., the original Unix C compiler reportedly needed small upgrades to cope well with Yacc output). The forms involved often seem useless to humans. For example, accepting empty lists/strings/etc., even in places where a human would seldom or never supply an empty string, avoids having to special-case such situations when generating the input mechanically.


 

-- Henry Spencer 

One very important tactic for being robust under odd inputs is to avoid having special cases in your code. Bugs often lurk in the code for handling special cases, and in the interactions among parts of the code intended to handle different special cases.


We observed above that software is transparent when you can look at it and immediately see what is going on. It is simple when what is going on is uncomplicated enough for a human brain to reason about all the potential cases without strain. The more your programs have both of these qualities, the more robust they will be.


Modularity (simple parts, clean interfaces) is a way to organize programs to make them simpler. There are other ways to fight for simplicity. Here's another one.


Rule of Representation: Fold knowledge into data, so program logic can be stupid and robust.


Even the simplest procedural logic is hard for humans to verify, but quite complex data structures are fairly easy to model and reason about. To see this, compare the expressiveness and explanatory power of a diagram of (say) a fifty-node pointer tree with a flowchart of a fifty-line program. Or, compare an array initializer expressing a conversion table with an equivalent switch statement. The difference in transparency and clarity is dramatic. See Rob Pike's Rule 5.
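To make the comparison concrete, here is a small sketch of my own of the same conversion knowledge expressed both ways, as a table and as control flow:

    #include <stdio.h>

    /* Knowledge folded into data: the entire conversion is visible
     * at a glance in one initializer. */
    static const char *digit_name[] = {
        "zero", "one", "two", "three", "four",
        "five", "six", "seven", "eight", "nine"
    };

    /* The same knowledge buried in control flow: far more surface
     * area for the same information, and more places for bugs. */
    static const char *digit_name_switch(int d)
    {
        switch (d) {
        case 0: return "zero";
        case 1: return "one";
        /* ...eight more cases... */
        default: return "?";
        }
    }

    int main(void)
    {
        printf("%s\n", digit_name[7]);        /* table lookup */
        printf("%s\n", digit_name_switch(1)); /* switch version */
        return 0;
    }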


Data is more tractable than program logic. It follows that where you see a choice between complexity in data structures and complexity in code, choose the former. More: in evolving a design, you should actively seek ways to shift complexity from code to data.


The Unix community did not originate this insight, but a lot of Unix code displays its influence. The C language's facility at manipulating pointers, in particular, has encouraged the use of dynamically-modified reference structures at all levels of coding from the kernel upward. Simple pointer chases in such structures frequently do duties that implementations in other languages would instead have to embody in more elaborate procedures.


(We also cover these techniques in Chapter 9.)


Rule of Least Surprise: In interface design, always do the least surprising thing.


(This is also widely known as the Principle of Least Astonishment.)


The easiest programs to use are those that demand the least new learning from the user — or, to put it another way, the easiest programs to use are those that most effectively connect to the user's pre-existing knowledge.


Therefore, avoid gratuitous novelty and excessive cleverness in interface design. If you're writing a calculator program, ‘+’ should always mean addition! When designing an interface, model it on the interfaces of functionally similar or analogous programs with which your users are likely to be familiar.


Pay attention to your expected audience. They may be end users, they may be other programmers, or they may be system administrators. What is least surprising can differ among these groups.


Pay attention to tradition. The Unix world has rather well-developed conventions about things like the format of configuration and run-control files, command-line switches, and the like. These traditions exist for a good reason: to tame the learning curve. Learn and use them.


(We'll cover many of these traditions in Chapter 5 and Chapter 10.)


 

The flip side of the Rule of Least Surprise is to avoid making things superficially similar but really a little bit different. This is extremely treacherous because the seeming familiarity raises false expectations. It's often better to make things distinctly different than to make them almost the same.


 

-- Henry Spencer 

Rule of Silence: When a program has nothing surprising to say, it should say nothing.


One of Unix's oldest and most persistent design rules is that when a program has nothing interesting or surprising to say, it should shut up. Well-behaved Unix programs do their jobs unobtrusively, with a minimum of fuss and bother. Silence is golden.


This “silence is golden” rule evolved originally because Unix predates video displays. On the slow printing terminals of 1969, each line of unnecessary output was a serious drain on the user's time. That constraint is gone, but excellent reasons for terseness remain.


 

I think that the terseness of Unix programs is a central feature of the style. When your program's output becomes another's input, it should be easy to pick out the needed bits. And for people it is a human-factors necessity — important information should not be mixed in with verbosity about internal program behavior. If all displayed information is important, important information is easy to find.


 

-- Ken Arnold 

Well-designed programs treat the user's attention and concentration as a precious and limited resource, only to be claimed when necessary.


(We'll discuss the Rule of Silence and the reasons for it in more detail at the end of Chapter 11.)


Rule of Repair: Repair what you can — but when you must fail, fail noisily and as soon as possible.


Software should be transparent in the way that it fails, as well as in normal operation. It's best when software can cope with unexpected conditions by adapting to them, but the worst kinds of bugs are those in which the repair doesn't succeed and the problem quietly causes corruption that doesn't show up until much later.


Therefore, write your software to cope with incorrect inputs and its own execution errors as gracefully as possible. But when it cannot, make it fail in a way that makes diagnosis of the problem as easy as possible.
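A small sketch of what failing noisily and early can look like in C (the program itself is hypothetical; only the error-handling shape matters): the error is reported immediately, with context, on stderr, and the program exits nonzero instead of limping onward with corrupt state.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <errno.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            /* Wrong usage: say so at once and stop. */
            fprintf(stderr, "usage: %s FILE\n", argv[0]);
            exit(2);
        }

        FILE *fp = fopen(argv[1], "r");
        if (fp == NULL) {
            /* Fail noisily and as soon as possible: name the file,
             * name the reason, exit nonzero so callers notice. */
            fprintf(stderr, "%s: cannot open %s: %s\n",
                    argv[0], argv[1], strerror(errno));
            exit(1);
        }

        /* ...normal processing would go here... */
        fclose(fp);
        return 0;
    }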


Consider also Postel's Prescription:[10] “Be liberal in what you accept, and conservative in what you send”. Postel was speaking of network service programs, but the underlying idea is more general. Well-designed programs cooperate with other programs by making as much sense as they can from ill-formed inputs; they either fail noisily or pass strictly clean and correct data to the next program in the chain.


However, heed also this warning:


 

The original HTML documents recommended “be generous in what you accept”, and it has bedeviled us ever since because each browser accepts a different superset of the specifications. It is the specifications that should be generous, not their interpretation.


 

-- Doug McIlroy 

McIlroy adjures us to design for generosity rather than compensating for inadequate standards with permissive implementations. Otherwise, as he rightly points out, it's all too easy to end up in tag soup.


Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.


In the early minicomputer days of Unix, this was still a fairly radical idea (machines were a great deal slower and more expensive then). Nowadays, with every development shop and most users (apart from the few modeling nuclear explosions or doing 3D movie animation) awash in cheap machine cycles, it may seem too obvious to need saying.


Somehow, though, practice doesn't seem to have quite caught up with reality. If we took this maxim really seriously throughout software development, most applications would be written in higher-level languages like Perl, Tcl, Python, Java, Lisp and even shell — languages that ease the programmer's burden by doing their own memory management (see [Ravenbrook]).


And indeed this is happening within the Unix world, though outside it most applications shops still seem stuck with the old-school Unix strategy of coding in C (or C++). Later in this book we'll discuss this strategy and its tradeoffs in detail.


One other obvious way to conserve programmer time is to teach machines how to do more of the low-level work of programming. This leads to...


Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.


Human beings are notoriously bad at sweating the details. Accordingly, any kind of hand-hacking of programs is a rich source of delays and errors. The simpler and more abstracted your program specification can be, the more likely it is that the human designer will have gotten it right. Generated code (at every level) is almost always cheaper and more reliable than hand-hacked.


We all know this is true (it's why we have compilers and interpreters, after all) but we often don't think about the implications. High-level-language code that's repetitive and mind-numbing for humans to write is just as productive a target for a code generator as machine code. It pays to use code generators when they can raise the level of abstraction — that is, when the specification language for the generator is simpler than the generated code, and the code doesn't have to be hand-hacked afterwards.


In the Unix tradition, code generators are heavily used to automate error-prone detail work. Parser/lexer generators are the classic examples; makefile generators and GUI interface builders are newer ones.
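As a toy example in this spirit (mine, not the book's), here is a generator whose specification, a ten-line loop, is far simpler than its output, a 256-entry table that would invite typos if maintained by hand:

    #include <stdio.h>
    #include <ctype.h>

    /* A trivial code generator: emit a C table classifying each byte
     * value as a hex digit or not. Run it at build time and redirect
     * the output into a header file. */
    int main(void)
    {
        printf("/* Generated file -- do not edit by hand. */\n");
        printf("const unsigned char is_hex_digit[256] = {\n");
        for (int i = 0; i < 256; i++)
            printf("%d,%s", isxdigit(i) ? 1 : 0,
                   (i % 16 == 15) ? "\n" : " ");
        printf("};\n");
        return 0;
    }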


(We cover these techniques in Chapter 9.)


Rule of Optimization: Prototype before polishing. Get it working before you optimize it.


The most basic argument for prototyping first is Kernighan & Plauger's; “90% of the functionality delivered now is better than 100% of it delivered never”. Prototyping first may help keep you from investing far too much time for marginal gains.


For slightly different reasons, Donald Knuth (author of The Art Of Computer Programming, one of the field's few true classics) popularized the observation that “Premature optimization is the root of all evil”.[11] And he was right.


Rushing to optimize before the bottlenecks are known may be the only error to have ruined more designs than feature creep. From tortured code to incomprehensible data layouts, the results of obsessing about speed or memory or disk usage at the expense of transparency and simplicity are everywhere. They spawn innumerable bugs and cost millions of man-hours — often, just to get marginal gains in the use of some resource much less expensive than debugging time.


Disturbingly often, premature local optimization actually hinders global optimization (and hence reduces overall performance). A prematurely optimized portion of a design frequently interferes with changes that would have much higher payoffs across the whole design, so you end up with both inferior performance and excessively complex code.


In the Unix world there is a long-established and very explicit tradition (exemplified by Rob Pike's comments above and Ken Thompson's maxim about brute force) that says: Prototype, then polish. Get it working before you optimize it. Or: Make it work first, then make it work fast. ‘Extreme programming’ guru Kent Beck, operating in a different culture, has usefully amplified this to: “Make it run, then make it right, then make it fast”.


The thrust of all these quotes is the same: get your design right with an un-optimized, slow, memory-intensive implementation before you try to tune. Then, tune systematically, looking for the places where you can buy big performance wins with the smallest possible increases in local complexity.
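Tuning systematically starts with measuring, per Pike's Rule 2 above. A minimal sketch of a timing harness in C (work() is a placeholder for the code under suspicion):

    #include <stdio.h>
    #include <time.h>

    /* Placeholder for the candidate hot spot. */
    static void work(void)
    {
        volatile long sum = 0;
        for (long i = 0; i < 10000000L; i++)
            sum += i;
    }

    int main(void)
    {
        /* Measure CPU time over several runs before deciding whether
         * this code is worth optimizing at all. */
        clock_t start = clock();
        for (int run = 0; run < 10; run++)
            work();
        clock_t end = clock();

        printf("mean time per run: %.3f ms\n",
               1000.0 * (end - start) / CLOCKS_PER_SEC / 10);
        return 0;
    }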


 

Prototyping is important for system design as well as optimization — it is much easier to judge whether a prototype does what you want than it is to read a long specification. I remember one development manager at Bellcore who fought against the “requirements” culture years before anybody talked about “rapid prototyping” or “agile development”. He wouldn't issue long specifications; he'd lash together some combination of shell scripts and awk code that did roughly what was needed, tell the customers to send him some clerks for a few days, and then have the customers come in and look at their clerks using the prototype and tell him whether or not they liked it. If they did, he would say “you can have it industrial strength so-many-months from now at such-and-such cost”. His estimates tended to be accurate, but he lost out in the culture to managers who believed that requirements writers should be in control of everything.


 

-- Mike Lesk 

Using prototyping to learn which features you don't have to implement helps optimization for performance; you don't have to optimize what you don't write. The most powerful optimization tool in existence may be the delete key.


 

One of my most productive days was throwing away 1000 lines of code.


 

-- Ken Thompson 

(We'll go into a bit more depth about related ideas in Chapter 12.)


Rule of Diversity: Distrust all claims for “one true way”.


Even the best software tools tend to be limited by the imaginations of their designers. Nobody is smart enough to optimize for everything, nor to anticipate all the uses to which their software might be put. Designing rigid, closed software that won't talk to the rest of the world is an unhealthy form of arrogance.


Therefore, the Unix tradition includes a healthy mistrust of “one true way” approaches to software design or implementation. It embraces multiple languages, open extensible systems, and customization hooks everywhere.


Rule of Extensibility: Design for the future, because it will be here sooner than you think.


If it is unwise to trust other people's claims for “one true way”, it's even more foolish to believe them about your own designs. Never assume you have the final answer. Therefore, leave room for your data formats and code to grow; otherwise, you will often find that you are locked into unwise early choices because you cannot change them while maintaining backward compatibility.


When you design protocols or file formats, make them sufficiently self-describing to be extensible. Always, always either include a version number, or compose the format from self-contained, self-describing clauses in such a way that new clauses can be readily added and old ones dropped without confusing format-reading code. Unix experience tells us that the marginal extra overhead of making data layouts self-describing is paid back a thousandfold by the ability to evolve them forward without breaking things.
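Here is a sketch of the version-number half of this advice; the format, magic value, and field names are all invented for illustration. A reader that checks the version first can cleanly refuse layouts from the future instead of misparsing them:

    #include <stdio.h>
    #include <stdint.h>

    /* An invented on-disk record header. The version field lets
     * readers detect layouts they don't understand rather than
     * silently misreading them. */
    struct record_header {
        uint32_t magic;    /* identifies the file type */
        uint32_t version;  /* layout version of what follows */
        uint32_t length;   /* size of the record body in bytes */
    };

    #define RECORD_MAGIC   0x52454331u  /* "REC1", invented */
    #define RECORD_VERSION 2u

    static int read_header(FILE *fp, struct record_header *h)
    {
        if (fread(h, sizeof *h, 1, fp) != 1)
            return -1;                   /* truncated input */
        if (h->magic != RECORD_MAGIC)
            return -1;                   /* not our format */
        if (h->version > RECORD_VERSION)
            return -1;                   /* newer than we understand */
        return 0;
    }

    int main(void)
    {
        struct record_header out = { RECORD_MAGIC, RECORD_VERSION, 0 };
        struct record_header in;
        FILE *fp = tmpfile();

        if (fp == NULL)
            return 1;
        fwrite(&out, sizeof out, 1, fp);
        rewind(fp);
        puts(read_header(fp, &in) == 0 ? "header accepted"
                                       : "header rejected");
        fclose(fp);
        return 0;
    }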


When you design code, organize it so future developers will be able to plug new functions into the architecture without having to scrap and rebuild the architecture. This rule is not a license to add features you don't yet need; it's advice to write your code so that adding features later when you do need them is easy. Make the joints flexible, and put “If you ever need to...” comments in your code. You owe this grace to people who will use and maintain your code after you.


You'll be there in the future too, maintaining code you may have half forgotten under the press of more recent projects. When you design for the future, the sanity you save may be your own.



[9] Pike's original adds “(See Brooks p. 102.)” here. The reference is to an early edition of The Mythical Man-Month [Brooks]; the quote is “Show me your flow charts and conceal your tables and I shall continue to be mystified, show me your tables and I won't usually need your flow charts; they'll be obvious”.


[10] Jonathan Postel was the first editor of the Internet RFC series of standards, and one of the principal architects of the Internet. A tribute page is maintained by the Postel Center for Experimental Networking.


[11] In full: “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil”. Knuth himself attributes the remark to C. A. R. Hoare.



