These notes are based on (paraphrasing and quoting) Fred Brooks's The Mythical Man-Month (mid-1970s, drawing on his 1960s experience leading IBM's OS/360 project).
Also glanced at resources:
Prof. Redmiles summarized it as:
Issues in software development; US Government Spending
Software like a tar pit: The more you fight it, the deeper you sink!
No single thing seems that difficult; "any particular paw can be pulled away." But simultaneous, interacting factors bring productivity to a halt.
A program is just a set of instructions that seems to do what you want. All programmers say "Oh, I can easily beat the 10 lines / day cited by industrial programmers." They are talking about just coding something, not building a product.
A product (more useful than a program) is generalized, thoroughly tested, and documented, so that anyone can run, repair, and extend it.
Brooks estimates a 3x cost increase for this.
To be a component in a programming system (a collection of interacting programs, like an OS), a program must conform to precisely defined interfaces, live within a resource budget, and be tested in combination with the other components.
Brooks estimates that this too costs 3x.
A combined programming system product is 9x more costly than a program.
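Brooks's multipliers can be sketched in a few lines (the 3x factors are the ones quoted above; the function and constant names are illustrative, not from the book):

```python
# Brooks's cost multipliers: turning a program into a product costs ~3x,
# and making it a component of a programming system costs another ~3x.
PRODUCT_FACTOR = 3
SYSTEM_FACTOR = 3

def estimated_cost(program_cost, as_product=False, as_system_component=False):
    """Scale a bare-program cost estimate by Brooks's multipliers."""
    cost = program_cost
    if as_product:
        cost *= PRODUCT_FACTOR
    if as_system_component:
        cost *= SYSTEM_FACTOR
    return cost

# A "programming systems product" combines both: 3 * 3 = 9x the bare program.
print(estimated_cost(1, as_product=True, as_system_component=True))  # -> 9
```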
Why does software fail?
Brooks says programmers are optimists ("this time it will surely run," etc.). Incompleteness and inconsistencies become clear only during implementation. He concludes that experimenting and "working out" are essential disciplines.
Each task has a nonzero probability of failure or slippage. Probability that all will go well is near zero.
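The compounding can be made concrete (the 0.95 per-task success probability below is an assumed, illustrative number, not from Brooks):

```python
# If each of n tasks independently stays on schedule with probability p,
# the chance that ALL of them do is p**n, which collapses quickly.
def p_all_on_schedule(n_tasks, p_per_task=0.95):  # 0.95 is an assumption
    return p_per_task ** n_tasks

print(round(p_all_on_schedule(10), 3))   # -> 0.599
print(round(p_all_on_schedule(100), 3))  # -> 0.006
```

Even at 95% per task, a hundred-task project almost never finishes with every task on time.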
Cost varies with manpower and resources, but progress does not! Hence, using the "man-month" (person-month) as a measure is misleading and dangerous. Men and months are interchangeable only when a task can be partitioned with no communication whatsoever among the workers.
For partitionable tasks that require communication, the communication must be added to the cost of completion. Communication consists of training (which cannot be partitioned) and intercommunication, which grows as n(n-1)/2 for n workers.
For building a system (which requires lots of communication), the communication effort quickly dominates. Adding more people lengthens, not shortens, the schedule.
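Brooks's pairwise-channel count, n(n-1)/2 for n workers, shows why (a minimal sketch):

```python
# Pairwise communication channels among n people grow quadratically:
# n(n-1)/2. Labor grows only linearly with n, so coordination overhead
# eventually swamps the added work capacity.
def channels(n):
    return n * (n - 1) // 2

for n in (3, 10, 50):
    print(n, "people ->", channels(n), "channels")
# 3 -> 3, 10 -> 45, 50 -> 1225
```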
Testing cost is always underestimated. Brooks suggests allotting 1/3 of the schedule to planning, 1/6 to coding, 1/4 to component test, and 1/4 to system test.
TJP: don't forget that writing test harnesses can be almost as much work as writing the actual code, or sometimes more.
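Brooks's rule-of-thumb schedule split in The Mythical Man-Month (1/3 planning, 1/6 coding, 1/4 component test, 1/4 system test) gives testing half the schedule. A sketch of the arithmetic:

```python
# Brooks's schedule rule of thumb: coding is only 1/6 of the work,
# while the two testing phases together take fully half the schedule.
from fractions import Fraction

SPLIT = {
    "planning": Fraction(1, 3),
    "coding": Fraction(1, 6),
    "component test": Fraction(1, 4),
    "system test": Fraction(1, 4),
}
assert sum(SPLIT.values()) == 1  # the fractions cover the whole schedule

def schedule(total_months):
    """Divide a total schedule (in months) per Brooks's rule of thumb."""
    return {phase: float(frac * total_months) for phase, frac in SPLIT.items()}

print(schedule(12))  # e.g., 4 months planning, 2 coding, 3 + 3 testing
```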
Delays during final testing are very demoralizing!
Urgency of boss forces programmers to agree to unrealistic schedules.
It is very hard to defend an estimate (good or bad); people fall back on "hunches."
The difference between a good programmer and a bad programmer is at least 10x:
The $20k/yr programmer is more than 10x as productive as the $10k/yr programmer (1960s salaries... I hope). ;)
Data showed no correlation between experience and performance (but clearly there must be some).
"Small" team shouldn't exceed 10 programmers.
Managers want small sharp team, but you can't build a very large system this way.
OS/360 took about 5000 man-years to complete. A team of 200 programmers would take 25 years (assuming simple, linearly partitionable tasks). It actually took only about 4 years with 1000 people (quoting these numbers from the book, but they don't quite add up: 1000 people for 4 years is only 4000 man-years).
Instead of hiring 200 programmers, what about this: hire 10 superstars with, say, a 7x productivity factor and a 7x reduction in communication costs. 5000 man-years / (10 x 7 x 7) ≈ 10 years. Hmm... may not work even then.
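The arithmetic in the two staffing scenarios above, assuming perfectly partitionable work (which Brooks warns is the exception, not the rule):

```python
# OS/360 figure quoted above: roughly 5000 man-years of total effort.
effort_man_years = 5000

# Scenario 1: 200 ordinary programmers, perfectly partitionable work.
print(effort_man_years / 200)  # -> 25.0 years

# Scenario 2 (hypothetical): 10 superstars, each 7x as productive,
# with communication costs also assumed to shrink by 7x.
print(round(effort_man_years / (10 * 7 * 7), 1))  # -> 10.2 years
```

Even under these generous assumptions, the superstar team still needs a decade.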
Harlan Mills suggests surgical teams: a team leader (chief programmer == surgeon) with supporting surgeons, nurses, etc. The chief does the cutting; the others support.
Might be tough to find right mix of people, desires, skills (i.e., who wants to do the testing?)
What's the difference? In the surgical team, the project is the surgeon's brainchild and they are in charge of conceptual integrity, etc. In a collaborative team, everyone is equal and things are "designed by committee," which causes chaos.
How to scale? Large system broken into subsystems assigned to surgical teams. Some coordination between surgical team leaders.
TJP: in my experience, having a single mind behind ANTLR has made all tools, concepts hold together well. Most projects are "touched" by many grad students as they drift through a department and work on the tool for a prof.
Ratio of functionality / conceptual complexity is important.
One or a few minds design, many implement (per surgical team).
Brooks argues for separating architecture from implementation, using a clock as the example: the face and hands are the architecture (what the user sees), while the mechanism inside is the implementation. Architecture is what happens; implementation is how it happens.
Defending this aristocracy, he says: even though implementers will have some good ideas, if they don't fit within the conceptual integrity, they are best left out.
The first system tends to be small and clean. The designer knows he/she doesn't know everything and goes slowly.
As the system is built, new features occur to them. They record these ideas for the "next system."
With the confidence of having built the previous system, the programmer builds the second system with everything in it. The tendency is to overdesign. Brooks cites the IBM 709 architecture, an update of the 704: so big that only 50% of its features were used.
Another version of the effect is to refine pieces of code or features from old system that just aren't that useful anymore.
TJP: I tend to consider the next system to be functionally exactly the same but with a much better implementation. A few new features are OK. Actually, ANTLR is less functional than the old PCCTS!
To avoid this, one can assign explicit budgets: feature x is worth m bytes of memory and n ns of time.
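One way to read that suggestion is as an explicit per-feature budget check; the numbers and names below are hypothetical, for illustration only:

```python
# Hypothetical feature budget: each feature must justify its cost in
# memory (bytes) and execution time (ns) against fixed limits.
BUDGET = {"bytes": 4096, "ns": 500}  # made-up limits

def within_budget(feature_cost):
    """True if the feature's cost stays under every budgeted dimension."""
    return all(feature_cost[dim] <= BUDGET[dim] for dim in BUDGET)

print(within_budget({"bytes": 1024, "ns": 200}))  # -> True
print(within_budget({"bytes": 8192, "ns": 200}))  # -> False, over memory
```

The point is not the mechanism but the discipline: a feature that can't state its cost can't defend its place in the system.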
Managers should hire chiefs that have at least two systems under their belt.
How to communicate? In as many ways as possible.
1. A program is neither a product nor a system
2. Adding programmers to fix a delay only makes it take longer
3. Plan to throw one away; you will anyway. The book is ancient, but he says "the only constancy is change itself" and to plan the system for change, which could come straight from the extreme programming books.
4. Second-system effect: overdoing the new feature list to overcome weaknesses in the first system.