Thursday, December 20, 2001

Holiday Reading

This article appears in slightly different form in the November/December 2001 issue of IEEE Micro © 2001 IEEE.

Many of us have a little time off at this time of year. Here are two books that are worthwhile and not too hard to read. Take some time to read about embodied interaction and COM+.

Embodied Interaction

Where the Action Is -- The Foundations of Embodied Interaction by Paul Dourish (MIT, Cambridge MA, 2001, 245pp, ISBN 0-262-04196-0, $35.00)

There is a broad consensus that computer-based tools are hard to use. Many people believe that the problem lies in the way we design and build these tools. Recently I have reviewed quite a number of books that address that issue.

In The Humane Interface (Micro Review, May/June 2000), Jef Raskin takes a user-centered approach, loosely based on ideas that cognitive psychologist Bernard J. Baars discusses in his book A Cognitive Theory of Consciousness (Cambridge, 1988).

In The Inmates are Running the Asylum (Micro Review, Sept/Oct 2000), Alan Cooper makes the business case for taking interface design out of the hands of programmers. Cooper wants us to base our designs on intimate knowledge of the target users and their jobs.

In The Social Life of Information (Micro Review, Jan/Feb 2001), John Seely Brown and Paul Duguid ask us to stop looking at every human activity as a form of information processing. They show how ignorance of the social context of human activities often leads to failed attempts to automate those activities. In the same issue of Micro Review I look at Contextual Design by Hugh Beyer and Karen Holtzblatt. That book describes elaborate procedures for understanding and using the social context that Brown and Duguid talk about.

Paul Dourish approaches the same issues as these other authors, but from a somewhat different point of view. He uses the term embodied interaction to tie together several threads: tangible computing, social computing, and the philosophical tradition of phenomenology.

Tangible computing makes computer-mediated interactions more natural by imbuing common objects with digital capabilities. For example, a telephone answering machine uses a row of marbles to represent your waiting messages. You place a marble into a receptacle to hear the message, then put it either into the save bin or the recycle bin. The power of tangible computing is that it takes icons to a higher level by giving symbolic meaning to actual physical entities.

Social computing tries to understand the social context of the tasks you plan to automate. The trick here is to see the orderly social conduct emerge from the seemingly chaotic details, yet not overlook the hidden lubrication that makes the machine run. Using techniques like those that Beyer and Holtzblatt discuss, you identify the lubrication and thus avoid the disasters that Brown and Duguid talk about. One idea that arises out of social computing is coupling. Coupling is establishing and maintaining the relationship between action and meaning. For example, when you move the mouse, you might be positioning the cursor to insert text into a document, but you might also be bringing it back to the center of the mouse pad, or you might even be clearing space to set papers next to your keyboard.

Phenomenology isn't new, but it's a lot newer than the Cartesian philosophy that underlies many current approaches to interface design. Unlike Descartes, the phenomenologists do not separate mind and body. Meaning, for the phenomenologists, arises from our interactions with the world. The idea of an affordance, prominent in Donald Norman's writings on usability, comes from the perceptual psychologist J. J. Gibson. An affordance (a doorway, for example) makes its use obvious through its form.

These three threads underlie embodied interaction. We exist in the physical world. We exist in a social context. We derive and create meaning by our interactions with these environments.

This all seems pretty theoretical, and you may be inclined to contrast it unfavorably with the precision and clarity of modern software design and construction. Dourish anticipates this reaction and turns it on its head. He sees current software design methods as abstract and theoretical, because they model users and interactions without first observing them. These methods, in his view, rely on disembodied cognition. The methods he advocates require you to become directly involved with the specifics of the actual practices for which you intend to provide computer-based support.

Because his methods depend so much on the different details of different situations, he feels he cannot reduce them to rules or guidelines. Instead, he lays out the following design principles:
  • Computation is a medium.
  • Meaning arises on multiple levels.
  • Users, not designers, create and communicate meaning.
  • Users, not designers, manage coupling.
  • Embodied technologies participate in the worlds they represent.
  • Embodied interaction turns action into meaning.
Because these design principles require a design context to make examples comprehensible, Dourish applies them to an issue that most readers are likely to be familiar with: the debate between convergence and information appliances.

Convergence is the idea that a single computer will handle spreadsheets, word processing, email, phone, fax, TV, movies, news, and any other communication or content delivery applications you might think of. An information appliance is a specialized computer-based device (a microwave oven, for example) that is simple to use because it doesn't have to do everything. Are these both the future of computing, or do they conflict?

When Dourish views this debate from the viewpoint of embodied interaction, he sees each of these alternatives as a technological solution, a way that designers try to manage or overcome the barriers and boundaries between applications. Users, on the other hand, might be more interested in separating their personal from their work information, but not in separating writing a document from sending it. Dourish's design principles call for making users, not designers, the arbiters of where to place the boundaries.

This book is important reading for anyone engaged in designing computer-based systems to support human activities. It is full of interesting ideas and insights. I recommend it.


COM and .Net Component Services by Juval Löwy (O'Reilly, Sebastopol CA, 2001, 362pp, ISBN 0-596-00103-7, $39.95)

COM is the acronym for Microsoft's Component Object Model. This book, despite its title, is not about COM or Microsoft's new platform, .NET. It's about COM+. This requires a little explanation, and who better to provide that explanation than Roger Sessions?

More than three years ago, I reviewed Roger Sessions' interesting book COM and DCOM -- Microsoft's Vision for Distributed Objects (Micro Review, Mar/Apr 1998). As Sessions explains there, Microsoft has a coherent strategy for distributed applications. This strategy leads to architectures somewhere between Windows-specific clients on a Windows-specific network and generic browsers on a generic intranet.

When Sessions was first studying COM, he was intrigued to discover an early version of Microsoft Transaction Server (MTS). MTS, as Sessions points out, has little to do with transactions. Instead, it definitively solves the problem of how to share resources among clients in a client/server environment in a way that scales easily to large numbers of low-volume clients. According to Sessions, every current solution to this problem follows the principles of MTS. The Sun Enterprise JavaBeans framework, for example, is almost identical to MTS.
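The resource-sharing idea behind MTS is easier to see with a concrete sketch. The following Python fragment is my own illustration, not any MTS or COM+ API: pay for a few expensive resources up front, then let many low-volume clients take turns borrowing them.

```python
import queue


class ConnectionPool:
    """A minimal object pool: many clients share a few costly resources."""

    def __init__(self, create, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(create())  # pay the creation cost up front

    def acquire(self):
        return self._pool.get()       # block until a resource is free

    def release(self, resource):
        self._pool.put(resource)      # return it for the next client


# Hypothetical use: two pooled "connections" serve any number of clients,
# each of which holds a connection only briefly.
pool = ConnectionPool(create=object, size=2)
conn = pool.acquire()
# ... do a small unit of work ...
pool.release(conn)
```

Because each client releases its resource as soon as its short unit of work is done, the pool size stays tied to concurrent demand rather than to the total number of clients -- the scaling property Sessions attributes to MTS.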

This brings us to COM+, and the point of this lengthy explanation. COM+ is MTS. That is, when Microsoft brought out its new version of MTS, they renamed it COM+. It provides the component services for COM and for Microsoft's new platform, .NET. This, finally, explains the book's title. The rest of the book is easy to explain.

Löwy follows two threads more or less in parallel through the book. The first thread is a thorough discussion of each component service -- what problem it solves, how it solves it, and what to be aware of when you use it. The second thread is a step-by-step explanation of how to use each service, with realistic code examples. Both threads are useful, but I especially like the thorough discussions, because Löwy understands the material and explains it clearly.

The chapter on queued components is a good example of this pattern. Many designers know the robustness and performance benefits of calling some object methods asynchronously -- that is, of having the method return control to the caller before completing the requested operation. Löwy reviews the benefits, discusses the ad hoc pre-COM+ solutions and the problems they cause, then describes the COM+ solution, which uses MSMQ (the Microsoft message queue) and queued components. He sketches the implementation architecture, so you have a good idea of how the system handles asynchronous calls and what the performance consequences of using them are. From there he goes on to explain how to design, code, and manage queued components. He even includes a substantial section on pitfalls.
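The pattern itself is independent of MSMQ. As a rough illustration -- plain Python threads and queues, nothing COM+-specific -- a method call can be recorded in a queue and executed after the caller has already moved on:

```python
import queue
import threading

calls = queue.Queue()   # stands in for the message queue
results = []


def update_inventory(item, count):
    results.append((item, count))   # the real work, done asynchronously


def worker():
    # Drain recorded calls and execute them, the way a queued component
    # replays method calls delivered through a message queue.
    while True:
        method, args = calls.get()
        if method is None:          # sentinel: shut down
            break
        method(*args)
        calls.task_done()


t = threading.Thread(target=worker)
t.start()

# The "client" returns immediately; the call is only recorded here.
calls.put((update_inventory, ("widget", 3)))

calls.join()            # for demonstration only: wait for delivery
calls.put((None, ()))   # stop the worker
t.join()
```

The client never blocks on the work itself, and if the queue were durable (as MSMQ's queues are), the recorded call would survive even if the receiving component were offline when it was made.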

Löwy's appendixes provide additional useful material. One appendix provides motivation and documentation for a component you can download from the book's website. The component logs calls, providing a useful debugging tool.

Another appendix describes COM+ 1.5, a version that appeared as Löwy was finishing the book. Keeping this material separate is a good solution to this potentially troublesome problem. Probably the material will migrate into the body of the next edition of the book.

The book also treats the way that COM+ provides component services for .NET. The book does not cover .NET (or COM, for that matter). An appendix provides a primer on .NET for the uninitiated, though it's hard to see why the uninitiated would be reading about component services for .NET.

This is a good book about an important technology that promises to be central to systems development in Microsoft environments for a long time to come. If you don't already know this material, this is a good way to find out about it.

Friday, October 26, 2001

Managing Development

This article appears in slightly different form in the September/October 2001 issue of IEEE Micro © 2001 IEEE.

As I write, less than a week has passed since the terrible events of September 11, 2001. Like many Americans I am torn between two urges: on the one hand, to move forward with business as usual; on the other, to devote attention to understanding what happened, and why, and to form my own opinion about what our country should do about it. 

The compromise between these urges is this abbreviated column. Normally I seek to make my column valuable to you by calling your attention to worthy books or software. This column does that. Normally I add further value by analyzing and describing the products I review in the light of my experience in the computer field. This time I haven't achieved my usual level of analysis and description. In other words, this time I tell you about some books that I think you ought to read, but I may not give you sufficient information to let you come to that conclusion for yourself.


Exploring Requirements -- Quality Before Design by Donald C. Gause and Gerald M. Weinberg (Dorset House, New York, NY, 1989, 320pp, ISBN 0-932633-13-7, $50.45)

The biggest problem with software development is knowing exactly what to build. Communication between developers and their customers faces many obstacles: 
  • Different assumptions and terminology
  • Intermediaries with their own assumptions and understanding
  • Failure to understand and respect each other's expertise
  • Insufficient time to build a common understanding of the desired final product
  • All the ambiguities of natural languages
A charming example of the last point is the authors' Mary Had a Little Lamb heuristic, which encourages you to substitute synonyms for the words in a requirement. For example, Mary cheated an unsophisticated investor; Mary gave birth to a small good-natured child; Mary dined sparingly on mutton stew.  

A misunderstanding can cost a great deal to correct after the product is finished but very little to correct before the design phase begins. This more than justifies the cost of defining requirements carefully.

Twelve years after it first appeared, this book is completely relevant to today's development projects. Gause and Weinberg call on their many years of consulting experience to provide practical techniques for exploring requirements. That is, they show you ways to discover and overcome ambiguity, distinguish between requirements and preferences, and push back against constraints. They show you how to tell when you're done and how to translate requirements into acceptance tests. They even give you ways to make meetings more productive.

Given the frequent disconnect between "what the customer wanted" and "what the engineers built" -- the subject of a well known cartoon -- most companies would benefit greatly from improvements to the way they define requirements. This book is just what the doctor ordered. I recommend it to anyone who has anything to do with software development.

Mastering the Requirements Process by Suzanne Robertson and James Robertson (Addison-Wesley/ACM Press, Harlow, England, 1999, 416pp, ISBN 0-201-36046-2, $47.99)

This book picks up where the Gause/Weinberg book leaves off. Gause and Weinberg describe a collection of valuable techniques. The Robertsons, based on their long and widely recognized experience, lay out a complete end-to-end process for defining sets of requirements that are complete, correct, and measurable.

The Robertsons call their process Volere. They don't say how they came up with that name, but I assume it has something to do with the Italian verb meaning "wish." The Volere process turns customers' wishes into a usable specification document. The process is far from linear. An overview diagram contains many circular paths and feedback loops -- as simple as possible, to paraphrase Einstein, but no simpler. Or as Gerald Weinberg says in his foreword, "every part of the process makes sense, even to people who are not experienced with requirements work." 

Several notable features make the Volere process especially worthwhile. The first is the freely available Volere requirements template, which you can adapt to your own situation. Of course, to understand the template fully, you probably need to read this book.

Two other notable features work together to keep your specification documents on target. The first is the concept of fit criteria, and the second is the quality gateway. Fit criteria are the rules (generally involving numerical measurements) for testing whether the final system meets the given requirement. The quality gateway is a process for deciding whether or not to include a fully specified requirement in the final requirements set.
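The point of a fit criterion is that it is measurable, so it can become an automated acceptance test. Here is a small sketch of that idea; the requirement and the two-second threshold are invented for illustration, not taken from the book.

```python
import time


def search(catalog, term):
    # Stand-in implementation; the fit criterion doesn't care how it works.
    return [item for item in catalog if term in item]


def meets_fit_criterion(catalog, term, max_seconds=2.0):
    """Fit criterion: a catalog search completes within max_seconds.

    The requirement "search must feel fast" is unverifiable; restated
    with a number, it becomes a yes/no measurement the quality gateway
    can check before admitting the requirement to the final set.
    """
    start = time.perf_counter()
    search(catalog, term)
    return time.perf_counter() - start <= max_seconds
```

A requirement that cannot be restated this way -- as a measurement with a pass/fail threshold -- is a signal that it is still a preference, not yet a requirement.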

If you write requirements documents or specifications, reading this book is your first requirement.

Software Requirements by Karl E. Wiegers (Microsoft, Redmond WA, 1999, 366pp, ISBN 0-7356-0631-5, $34.99)

This book covers much of the same area as the other two books in this section, but it does so from a software developer's point of view. It also expands the idea of requirements to include specifications, and it addresses the way the requirements process fits into the entire development management process.

Wiegers uses a fictional chemical tracking application to provide context for the discussion, but he sometimes abandons it and brings in situations from his own extensive experience. Either way, he tries to keep his techniques practical.

I especially like the When Bad Requirements Happen to Nice People section. There Wiegers outlines the problems that his book is meant to solve. For example, he warns against gold-plating, a situation in which programmers add features that they think users will like, inevitably at the cost of taking time away from the most important features. 

Naturally, the problems Wiegers sees overlap substantially with the ones that Gause and Weinberg and the Robertsons address. For example, the Robertsons' quality gateway avoids gold plating.

I'd read the Gause and Weinberg book before reading this book. If your main interest is software development, you might read this book instead of the Robertsons' book, but if you have time, read all three.   

Project Management

A Guide to the Project Management Body of Knowledge, 2000 ed., by Project Management Institute (PMI, Newtown Square, PA, 2000, 228pp, ISBN 1-880410-23-0, $35.95)

Project management professionals apply a variety of theories and practices to their work -- some experimental, others tried and true. The combined lore of these professionals constitutes a large body of knowledge.

The PMBOK Guide, as this book is known in project management circles, identifies and describes the subset of that body of knowledge that the Project Management Institute deems generally accepted. It does not teach this body of knowledge, but it provides an excellent map, a well organized skeleton with a little flesh on the bones.

This is a basic reference for anyone seeking certification in project management. It is also a helpful guide for anyone seeking to understand the underlying model of that ubiquitous but inscrutable tool, Microsoft Project. If you use Microsoft Project, but don't always understand what it's doing, read this book.

While this book is basic, if your interest is in a specific aspect or field of project management, you should also look at more narrowly focused books. 

Information Technology Project Management by Kathy Schwalbe (Thomson/Course Technology, Cambridge, MA, 2000, 512pp, ISBN 0-7600-1180-X, $53.95)

The PMBOK Guide is a scant 228 pages. This book, at more than twice that length, seeks to flesh out the PMBOK and specialize it to a specific industry. Schwalbe writes in the format of a textbook, with discussion questions, exercises, and suggested readings. The layout and printing are not up to the standards of mainstream publishers, but if you can get past that, the book provides a great deal of information in an easy-to-assimilate format.

One attractive feature of this book is that it uses Microsoft Project to develop class projects. Using Microsoft Project without understanding the underlying project management model can be confusing and difficult. The examples in this book help you avoid the confusion.

Special Edition Using Microsoft Project 2000 by Tim Pyron (Que, Indianapolis IN, 2000, 1314pp plus CD, ISBN 0-7897-2253-4, $39.99)

This book is well organized, beautifully laid out and printed, well written, comprehensive, and insightful. The notes and cautions add real value by tapping into the author's extensive experience with the product. 

The detailed table of contents reflects the logical structure, and the excellent index makes it easy to find information in this huge volume.

If anything can make Microsoft Project comprehensible, this book is it. I recommend it to anyone who wants to use the real power of this tool.

Sunday, August 26, 2001

Oracle, Extreme Programming, Project Management

This article appears in slightly different form in the July/August 2001 issue of IEEE Micro © 2001 IEEE.

This time I look at books that delve into how things work. One is an outstanding overview of a complex software package. The others deal with real projects and the lessons learned from them.

How Oracle Works

Oracle Essentials, 2ed by Rick Greenwald, Robert Stackowiak, and Jonathan Stern (O'Reilly, Sebastopol CA, 2001, 364pp, ISBN 0-596-00179-7, $34.95)

I love books like this. I wish there were more of them. I spent over a year working with the Oracle server technology publications group and wrote some of the documentation that this book is based on. The few hours that I spent reading this book gave me a much better sense of all the pieces and how they fit together than I ever achieved reading or writing Oracle documentation.

The economics of the computer book business seem to favor books that cover a wide variety of features in a cursory way. Many books provide excellent task-oriented instructions for end users or even administrators, but give them little insight into the underlying structure and concepts.

The authors of Oracle Essentials follow a different path. They present a concise, coherent picture of the entire Oracle system. This picture does not cover every feature of Oracle, nor does it cover any feature in complete depth. The picture is broad enough and deep enough to give you a good understanding of the main structures, processes, and issues involved in planning for and deploying Oracle, developing and optimizing schemas and applications for it, and administering its use.

One of the hardest tasks in working with Oracle is understanding the kind of big picture that this book presents. Thousands of highly skilled software developers have worked on the system over a period of more than twenty years. Inevitably, layer upon layer of enhancements have distorted and obscured the clarity of the original design. Furthermore, the system is so large and complex that people who start to work with it usually join a group that specializes in one aspect of it. Even the oldtimers in the group may not know much about the rest of the system. And because they are comfortable in their corner of the product, they may not even consider it important to orient newcomers to the whole system.

The difficulty with this situation surfaces when different groups of specialists must work together to solve a common problem. Activities ranging from developing new versions of the product to designing new applications suffer from the inability of groups of specialists to understand one another's problems and issues. For that reason, I think this book will do as much good within the walls of Oracle Corporation as it will outside.

If you work with any aspect of Oracle, or if you'd just like to understand the ins and outs of an important complex technology, this book is a must. 

Project Lessons

Last time (Micro Review, May/June 2001), I wrote about project retrospectives. The three books I look at here don't arise out of formal retrospectives, but they do provide insights into real projects.

Roundtable on Project Management -- A SHAPE Forum Dialogue, ed by James Bullock, Gerald M. Weinberg, and Marie Benesh (Dorset House, New York NY, 2001, 198pp, ISBN 0-932633-48-X, $21.45)

Gerald Weinberg moderates the SHAPE (Software as a human activity, performed effectively) forum, an online discussion group. He charges subscribers $60.00 per year, which compensates him for the time he spends keeping the signal-to-noise ratio high. Weinberg has been working in this field for a long time. Dorset House has recently republished two of his classic books from the 1970s (The Psychology of Computer Programming and An Introduction to General Systems Thinking) in silver anniversary editions. I recently reread parts of The Psychology of Computer Programming that seemed very radical to me when I first read them in 1971. As I look around at today's programmers, I can see what a large, beneficial effect that work has had.

This book arises from several SHAPE discussion threads on project management. The unifying theme is the analysis of a project that didn't go as well as its leader, a SHAPE subscriber, had hoped. The editors have distilled from the original threads the outline of a treatise on project management. The insights of 40 experts take you quickly through the entire process -- from getting started properly to drawing project and personal lessons when the project is complete.

The conversation moves briskly, and the insights are marvelous. I'm sure that experienced project managers will find much to like in this book. People who participated in the original threads will surely welcome the summary. For beginners, on the other hand, it may be too concise. If you're a project manager who finds this book tantalizing but unsatisfying, consider subscribing to SHAPE and lurking for a year.

In reading this conversation, I noticed that the participants take certain background information for granted. For example, if you haven't read Weinberg's The Secrets of Consulting (Dorset House, 1985), you probably have no idea who Levine the Genius Tailor was or how the Second Law of Pricing works. If you haven't learned about the Myers-Briggs personality types, you probably have no idea what an NT is. Such shared parables, aphorisms, and classifications are more common in the liberal arts than in technical fields. In the long run they greatly increase the depth and efficiency of the dialog. The book's bibliography consists of only 17 items (4 by Weinberg), so it shouldn't be too hard to come up to speed.

Extreme Programming in Practice by James Newkirk and Robert C. Martin (Addison-Wesley, Boston MA, 2001, 224pp, ISBN 0-201-70937-6, $29.99)

I have reviewed several books about extreme programming (XP) over the last year or so. XP is an iterative and incremental development technique. The widespread excitement over XP, given its completely egoless programming approach, is a good example of how far we have come in the 30 years since Weinberg's The Psychology of Computer Programming first appeared.

In this book the authors describe an actual small project from start to finish -- from exploration to lessons learned. They show the use cases, plans, time estimates, and source code listings. They take you through false starts, mistakes, testing, refactoring, misunderstandings between them and their customer, and all the warts of a real programming project.

Call me strange, but to me this book is like a good novel. It's a good story that's hard to put down, and when you're done reading it, you have a deeper understanding of the characters and situations that it explores. If you're using XP, or even if you're just considering it, I know you'll enjoy this book.

J2EE Technology in Practice by Rick Cattell, Jim Inscore, et al. (Addison-Wesley, Boston MA, 2001, 328pp, ISBN 0-201-75870-9, $39.99)

In June I attended the JavaOne conference in San Francisco. J2EE (the Java 2 platform, enterprise edition) was the main focus of this year's conference. The organizers gave copies of this book to all 20,000+ attendees. If you're interested, you shouldn't have much trouble finding a copy to borrow.

This book is an excellent example of technical marketing. It first makes the business case for J2EE, then provides a useful summary of its components and how they fit together. The remainder of the book consists of case studies of 10 organizations that developed enterprise-scale applications around J2EE. The organizations wrote the case studies, which adds credibility to the business case. I find these accounts quite interesting. They lay out the problem area and give a clear, moderately detailed overview of the technical approaches the organizations took to designing and implementing the software. I especially like the sections on technology adoption and design patterns in AT&T Unisource's account of their CORE system for managing their voice network.

This isn't a deep book, but it contains enough detail to make it very helpful. You can learn a lot by reading about real projects. Unfortunately, organizations tend to keep such information to themselves. This book does a valuable service by making details of so many real world projects public. If you develop enterprise software, it's worth reading. 

Tuesday, June 26, 2001

Project Tools

This article appears in slightly different form in the May/June 2001 issue of IEEE Micro © 2001 IEEE.

This time I look at books that describe important tools for planning and reviewing development projects.


According to its official documentation,
The Unified Modeling Language (UML) is a language for specifying, visualizing, constructing, and documenting the artifacts of software systems, as well as for business modeling and other non-software systems. The UML represents a collection of the best engineering practices that have proven successful in the modeling of large and complex systems.
In other words, UML provides a standard way to create diagrams that represent complex systems. It does not mandate a programming language or a development methodology, even though it arose out of long experience with both. Again, the official documentation states
Although the UML does not mandate a process, its developers have recognized the value of a use-case driven, architecture-centric, iterative, and incremental process, so were careful to enable (but not require) this with the UML.
This attempt at generality makes it hard to get a handle on UML, and that's where the following book comes in. 

UML Explained by Kendall Scott (Addison-Wesley, Boston MA, 2001, 169pp, ISBN 0-201-72182-1,, $29.95)

Kendall Scott is a technical writer who has been helping software developers write about their field for a number of years. He is a contributing author of UML Distilled (Addison-Wesley, 1997) and Use Case Driven Object Modeling with UML (Addison-Wesley, 1999). He also maintains an online UML dictionary.

About five years ago, in an attempt to find more interesting work than writing API documentation, he obtained and tried to read an early version of the Unified Method documentation (by Grady Booch and James Rumbaugh). He decided that the only way to understand it was to rewrite it. One thing led to another, and UML Explained is the result.

Scott sacrifices generality for clarity. While UML is officially process independent, Scott assumes an iterative and incremental process that starts from use cases and proceeds in such a way that the connections between the use cases and the models are always clear. Along the way he introduces all of the main UML elements and shows how to use them in the four phases and five workflows of that process.

Scott imagines an online bookstore that looks like a stripped-down version of a familiar web retailer. Each time he describes a UML element, he illustrates it with concrete elements of the online bookstore. As a frequent user of such sites, I find these examples instantly accessible, making the underlying UML conventions easy to understand.
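To give a flavor of how such a model maps to code, here is a sketch of a UML-style association -- an Order containing one or more Books -- rendered in Python. The class names and attributes are my own invention for illustration, not Scott's.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Book:
    title: str
    price: float


@dataclass
class Order:
    # The UML association "Order contains 1..* Book" becomes a list
    # attribute; the multiplicity is enforced by convention here, since
    # Python's type system won't reject an empty list by itself.
    books: List[Book] = field(default_factory=list)

    def total(self) -> float:
        return sum(book.price for book in self.books)
```

In a class diagram this is one line between two boxes with a multiplicity label; part of UML's value is that the diagram states the relationship without committing to any particular implementation like the list used here.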

While I do recommend this book, I find parts of it inexcusably sloppy. In addition to the lack of basic copyediting, which is fast becoming the industry standard, the book has a couple of major usability flaws. The layout fails to synchronize the text with the many figures. For example, on page 32, between figures 3-17 and 3-18, is a block of text that refers to figure 3-20. As I read this text, my eye is drawn to figures 3-17 and 3-18, then to figure 3-19 on the facing page. Finally, if I turn the page, I find figure 3-20 on page 34. This kind of confusion occurs again and again throughout the book.

Another usability flaw is the way Scott does cross-references. For example, on page 115, in the midst of chapter 9, Scott says "See 'Template Classes' in chapter 6 for more about template classes. See Figure 3-9 for more about roles." The targets of these references are on page 85 and page 28. There is no good excuse for failing to include those page numbers in the cross-references. As with the layout problem, this flaw occurs consistently throughout the book.

If you're willing to overlook its editing and usability flaws, the book provides a clear explanation of UML in a mercifully small number of pages. If you work with software developers who use UML, or if you wish your development process were more predictable and reliable, you ought to read this book.   


A third of a century ago the NATO Science Committee convened a conference to deal with the software crisis. The conference didn't solve the problem, but its participants coined the term software engineering. Today the software crisis is not yet under control, but many companies have adopted software engineering processes designed to make development more predictable and reliable. A key element of most such processes is the project retrospective, also commonly known as the postmortem.

The first postmortem that I can recall was not formal or officially sponsored. The late Rudolph Langer, a talented and introspective development manager I worked with in the 1960s, wrote a memo about lessons learned from our last project and circulated it for comments. Langer had served in the military, where such retrospective examinations are routine, but I don't know whether that's what gave him the idea.

In 1975, Fred Brooks published The Mythical Man Month (2nd ed., Addison Wesley, 1995), which detailed the lessons he had learned from leading the development of IBM's System 360 operating system. This is the essence of the project retrospective: to look back on what happened, to evaluate and learn from what went right and what went wrong, and to use this information to make the next project more successful.

Most large software development companies have instituted formal process improvement programs -- either because they believe strongly in continual improvement or because they need to satisfy an external auditor. All process improvement programs incorporate introspection and feedback and hence call for some sort of project retrospective. Many companies, however, give only lip service to this valuable opportunity to learn and grow. Often, they'd rather let sleeping dogs lie. Even more often, however, they don't understand the potential benefits, so they are not willing to commit the resources necessary to do it right.  

Project Retrospectives -- A Handbook for Team Reviews by Norman R. Kerth (Dorset House, New York NY, 2001, 288pp, ISBN 0-932633-44-7, $39.45)

If you have participated in project retrospectives and found them unrewarding, this book will open your eyes. Norman Kerth has been facilitating project retrospectives for twenty years. His clients are some of the best known software firms in the world.

Kerth wrote this book for facilitators -- people who lead project retrospectives. These are typically outside consultants, because, as Kerth explains very clearly, you can't facilitate a retrospective of a project that you have participated in. In this review I don't focus on the material that is mainly for professional facilitators.

Many people who will never facilitate retrospectives, however, will benefit from reading this book, because it is so full of information, psychological insight, and wisdom. Its many examples give a clear idea of how the process works, and Kerth intersperses parables and cartoons with the more formal material, making the book hard to put down.

Kerth also takes time to build the business case for doing retrospectives. A retrospective entails three or four days, usually at some sort of offsite conference facility, with meals and lodging for up to 30 people, and a professional facilitator -- a considerable expense, but one Kerth justifies quite simply: If the exercise saves you six days on your next project, which he assures you it will, you come out with a profit.

If you're doubtful about Kerth's math, or if you want to get a better sense of why such an elaborate process is justified, be sure to read the first two chapters. You'll probably want to read more, but even if you don't you'll come away with a better idea of the value and proper goals of a project retrospective.

I recommend this book to anybody who regularly works on or oversees development projects.

Friday, April 27, 2001

Pervasive Technologies

This article appears in slightly different form in the March/April 2001 issue of IEEE Micro © 2001 IEEE.

The books I look at this time are about pervasive technologies. The publishers are highly respected producers of books for programmers and systems designers, but these books are for a wider audience.

Computers as Components: Principles of Embedded Computing System Design by Wayne Wolf (Morgan Kaufmann, San Francisco CA, 2001, 688pp, $64.95)

Wayne Wolf is a professor of Electrical Engineering at Princeton University, a former employee of AT&T Bell Laboratories, and a graduate of Stanford University. He is editor-in-chief of IEEE Transactions on VLSI Systems and a well known researcher in the fields of embedded systems and hardware/software codesign. He is eminently qualified to write this much needed book.

Embedded systems are as old as microprocessors, which means they have been around for about thirty years -- longer if you count dedicated minicomputer-based "turnkey" applications, such as the clinical laboratory data acquisition and reporting systems I worked on in the 1960s. Creating such systems has remained challenging. Programming and debugging facilities tend to be limited by the often primitive and non-standard nature of the underlying hardware. In the late 1980s, for example, when object-oriented high-level languages and integrated development environments were already widely available, I worked on a new version of a medical instrument. It used an 8-bit Intel 8085 microprocessor (introduced 12 years earlier), programmed in assembly language, to control a complex control panel and an automated data acquisition system. The underlying software was based on a minicomputer operating system I had worked on in the 1960s. The debugger allowed me to examine the contents of memory (in hexadecimal) and to place a breakpoint at a specified memory location. I had no simulator -- the test bed was the instrument itself.

Designing hardware and software to work together has lagged behind software development on standard software platforms. UML diagrams, CRC cards, concurrent engineering, and design reviews, for example, are in the mainstream of software development, but embedded system development projects rarely use them. In her introduction, Lynn Conway, herself the coauthor of a landmark book, praises this book for providing a systematic approach to embedded system design, based on such development techniques and processes.

Now, as the market for embedded systems expands exponentially, the time has come to civilize this frontier. Wayne Wolf's book shows the way to do that. It is essentially a course in computer science and software development, oriented totally to the problems of embedded systems designers. Wolf assumes that his readers are not computer scientists, so he covers many topics that other books take for granted. He complements this self-contained introduction to embedded computer science with a number of detailed design examples, taken from real projects at places like Bell Labs.

If you are planning to teach a course in embedded systems design, or if you are a long time practitioner and want to bring your skills up to date, this is the book for you. 

Introducing .NET by James Conrad et al. (Wrox, Birmingham UK, 2000, 462pp, ISBN 1-861004-89-3, $34.99)

Sixty-five million years ago, dinosaurs ruled the earth. A huge meteor struck, and the environment changed. The dinosaurs died out, and eventually we arrived at today's flora and fauna.

Ten years ago, Microsoft ruled the computer world. Objects, the web, Java, and XML struck in rapid succession, and the environment changed. Software can evolve more rapidly than dinosaurs could, and Microsoft's new platform, .NET, makes MS-DOS and Windows 3.1 look Jurassic by comparison. The .NET platform addresses the challenges of objects, the web, Java, and XML. 

To support object-oriented development, .NET replaces COM with the common language runtime (CLR), which allows all Microsoft languages to share an object management environment. Objects have formats that are independent of their source language. They share garbage collection, exception handling, and a common class library, including common datatypes.

To support deploying applications on intranets or the web, Microsoft is developing ASP.NET, a complete overhaul of the slow, error-prone, language-limited, non-object-oriented ASP (active server pages) that today's developers are forced to use. Complementing that development is ADO.NET, which replaces today's ADO (ActiveX data objects) in much the same way that ASP.NET replaces ASP. Finally, the simple object access protocol (SOAP) and Microsoft's Web Services and Web Forms promise to make web-deployed applications look very similar, in appearance and in implementation, to applications deployed on desktop systems.

Microsoft's response to Java is a little more subtle, entangled as it is in legal wrangling. The .NET platform moves toward Java's "write once, run anywhere" promise by encapsulating the operating system in a common set of classes that have different implementations on different platforms. These serve a function similar to the Java virtual machine. Microsoft has also sidestepped Sun's control of Java by developing its own Java-like language called C#. While similar in form and philosophy to Java, C# introduces a few improvements. For example, it standardizes the getting and setting of object properties.
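C# syntax itself isn't shown in this review, but the property idea is easy to illustrate in another language. Here is a rough Python analogue (my sketch, not Microsoft's syntax) of language-level property support: reads and writes look like plain field access, while getter and setter code runs behind the scenes.

```python
# Illustration in Python (not C#) of standardized property access:
# client code assigns and reads a "field," but methods intercept both.
class Account:
    def __init__(self, balance=0):
        self._balance = balance

    @property
    def balance(self):            # getter: a read looks like field access
        return self._balance

    @balance.setter
    def balance(self, value):     # setter: validation hides behind assignment
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value

acct = Account()
acct.balance = 100                # runs the setter, not a raw attribute write
print(acct.balance)               # -> 100
```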

Finally, to mix a metaphor, Microsoft has embraced XML and jumped into it with both feet. Much of the .NET functionality depends on the XML-formatted metadata that accompanies every object. XML also provides the base format for remote interfacing, and it serves as the working data format for ADO.NET. Both the System and System.Data class hierarchies devote a namespace to XML.

Wrox Press specializes in books for programmers. They have assembled a team of 10 men, all with programming backgrounds, to put together a survey of Microsoft's .NET plans, based on Microsoft's public statements and on their own experiences with the public beta release of the .NET software development kit. They give a pretty coherent picture, hedged in disclaimers, of what the first release -- probably still about a year away -- will look like. If you expect to develop software for Microsoft systems in the next few years, you need to know all about .NET, and these authors are excellent guides to the territory. Don't expect definitive information, but if you can stand the missing pieces and loose ends of a work in progress, this book will get you up to speed.

Learning XML: Creating Self-Describing Data by Erik T. Ray (O'Reilly, Sebastopol CA, 2001, 368pp, ISBN 0-596-00046-4, $34.95)

Erik T. Ray (not to be confused with Eric J. Ray, whose HTML books I reviewed here a few years ago) describes himself as a software wrangler and XML guru for O'Reilly and Associates. He wrote this relatively short book to give readers a bird's eye view of the XML field. He succeeds remarkably well at that, but it is more than a bird's eye view. At appropriate points Ray delves deeply into the details by presenting complete, clearly written examples.

If you plan to work with XML to produce technical documentation, this book pays for itself many times over. Ray includes a completely worked out DTD called Barebones DocBook, which you can probably use as is. He also includes an XSLT stylesheet for producing HTML from Barebones DocBook documents.

If you plan to write programs to process XML, you might use, and can learn a lot from reading, Ray's Perl code for an XML syntax checker.
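Ray's Perl checker isn't reproduced here, but the core idea -- feed a document to a parser and report whether it is well formed -- can be sketched in a few lines. This Python version (my illustration, not Ray's code) uses the expat parser from the standard library:

```python
# Sketch (in Python, not Ray's Perl): a minimal XML well-formedness check
# built on the expat parser in the standard library.
import xml.parsers.expat

def is_well_formed(xml_text):
    """Return True if xml_text parses as well-formed XML."""
    parser = xml.parsers.expat.ParserCreate()
    try:
        parser.Parse(xml_text, True)   # True = this is the final chunk
        return True
    except xml.parsers.expat.ExpatError:
        return False

print(is_well_formed("<book><title>Learning XML</title></book>"))  # True
print(is_well_formed("<book><title>Learning XML</book>"))          # False: mismatched tags
```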

Many authors dump sample code into their books, but the XML, XSLT, and Perl examples in this book are well organized, clearly formatted, well annotated, and easy to understand.

For brevity, completeness of coverage, clarity of writing, and usefulness of examples, this is the best XML book I have seen. I recommend it highly.

Tuesday, February 27, 2001


This article appears in slightly different form in the January/February 2001 issue of IEEE Micro © 2001 IEEE.

In 1968 Arthur C. Clarke and Stanley Kubrick created the film 2001: A Space Odyssey. In the January 2001 issue of National Geographic, Clarke discusses prospects for achieving in real life some of the fictional events of the film. He does not, however, discuss HAL, the murderously megalomaniacal computer that both enabled and endangered the mission. HAL seems even further from reality today than in 1968.

One of the books I look at this time quotes Berkeley computer science professor Robert Wilensky as saying that the social and psychological issues are so hard that computer scientists can only hope that they aren't on the critical path. I think this hope is wishful thinking, as HAL's continuing unreality suggests.

This time I look at books that confront the context, largely social, in which today's technological advances occur. Many technologies make great leaps forward at the surface level but never really take hold. Entrenched technologies have largely implicit and unnoticed support systems. Recognizing these systems and addressing the problems they solve is the way to achieve deep and lasting change.

The first book I look at asserts that tunnel vision leads pundits and producers alike to misjudge the impact and support needs of new technologies. It surveys the main areas in which infohype has gone badly wrong, and explains why.

The other books I look at describe methodologies for developing software-based products. The first methodology is Extreme Programming (XP), which I discussed in my review of Kent Beck's Extreme Programming Explained (Micro Review, Nov/Dec 1999). XP focuses heavily on the actual practice of programming and the realities of customer-developer communication. XP is a highly effective way for small teams to develop software incrementally.

The second methodology is Contextual Design (CD), which lies at the other end of the size spectrum. It is based on Karen Holtzblatt's Contextual Inquiry method, which places heavy emphasis on observation and interviews to find both explicit and unnoticed aspects of the customer's activity. CD assumes a stable, functioning customer process. It organizes the resources of a large development organization or corporate IT department to achieve an auditable design that relies on structured data about the customer's activity to resolve design questions.

XP is like planning and executing a drive to the store. CD is like planning and executing a voyage to Jupiter. Both methodologies, however, have ideas you can apply to projects of other sizes.

Social Context

The Social Life of Information by John Seely Brown & Paul Duguid (Harvard Business School Press, Boston MA, 2000, 332pp, ISBN 0-87584-762-5, $25.95)

John Seely Brown is head of the Xerox Palo Alto Research Center (PARC). Paul Duguid is a researcher at UC Berkeley. He specializes in social and cultural issues in education. Both draw heavily on their backgrounds for examples to support the analyses in this book.

Information is a convenient construct. It gives us insight into many aspects of modern technology. Like all models, however, it ignores many details. Such simplifications can lead to great advances -- Newton's equations for planetary orbits, for example -- but only when the ignored details are insignificant to the problem at hand.

Unenlightened or unscrupulous futurists, business consultants, and product developers have applied the information construct too broadly:
  • Books are information containers.
  • Conversation is information interchange.
  • Learning is information absorption.
  • Organizations are information consolidators.
  • Office work is information handling.
  • Business processes are information flows.
These and many similar oversimplifications all contain grains of truth. Ideas based on them, however, have fallen far short of delivering their promised benefits. For example, the reengineering movement of the mid-1990s succeeded for well-defined processes like procurement or shipping. It failed for fuzzier, less infocentric tasks like insurance claim processing. Such tasks depend on more than a formal process for moving disembodied information. They have social components: the mutual support and shared knowledge of specific human beings.

In case after case the authors return to the same theme: from product development to company reorganizations, innovations fail when they ignore or try to suppress the social support systems that made the pre-innovation situation work. And sometimes innovations succeed only because people find ways to sneak the support systems back into the picture. This leads to a valuable clue to making struggling systems succeed: pay attention to what won't budge. If it's important to the people using the system, include it in the system. Don't try to stamp it out. Reinforce it instead.

The authors cite an excellent example of this. Dispatchers of field support reps got feedback from the dispatched reps when the reps called in for their next assignments. After a while the dispatchers became skilled at analyzing customer problems and only sending the field reps when necessary. When the company adopted a new system that didn't require the reps to call in, new dispatchers were less effective -- except for one who happened to sit near an old-timer and managed to learn from her. By recognizing and fostering this support system, the company helped the new system to succeed.

The authors summarize their theories of education and learning. They have obviously thought a lot about this subject, but they decided not to elaborate in this book. Their presentation is concise, but thought provoking. The essence of it is to identify the core competences of a university. If these were simply variations on causing students to absorb information, universities would indeed be threatened by the proliferation of online courses. Instead, the authors identify the social factors that have made universities such enduring institutions. 

I especially like the fact that they identify misrepresentation as one of the core competences of a university. What they mean by this is that a degree from a good university bundles in a certain amount of experimentation on the student's part that probably could not stand on its own if unbundled and offered as evidence of the student's qualification for a specific job. It is a cross-subsidy of an essential part of becoming an educated person.

One of the most interesting parts of the book deals with the relationship between organizations and the collective knowledge of their employees. On the one hand, many organizations have difficulty accessing and using their knowledge. The perfect example of that problem is the difficulty Xerox had in applying the GUI technology that their employees at PARC developed. Xerox owned the knowledge, but it could not use it.

On the other hand, knowledge flows into and out of organizations like water -- regardless of intellectual property laws. Networks of practice link employees in different firms, especially in concentrated areas like Silicon Valley. The same example applies here: the folks at Apple had no difficulty understanding, and eventually exploiting, the Xerox PARC GUI work. Furthermore, because knowledge flowed more freely within Apple than within Xerox, Apple was able to bring the GUI to market.

The authors also discuss office work and the way some analysts ignore its social aspects. Anyone who has accessed a company network from off-site equipment -- in a home office, for example -- rediscovers the value of division of labor. Functioning as your own system administrator and IT department is an expensive and not very valuable exercise in reinventing the wheel. Also, incidental learning is harder to come by and shared knowledge harder to access from outside the traditional office site. By ignoring the invisible role of social systems, an infocentric view of office work fails to address and solve these problems.

The advertising agency Chiat/Day performed the ultimate experiment in stamping out the social aspects of office work. Employees had no permanent equipment or desk space. Instead they checked out communal equipment from a pool each morning, then sat wherever they could find space. Each night they returned the equipment to the pool and went home. This experiment, as you probably guessed, failed. The authors attribute the failure to management's inability to recognize and understand the value of the social aspects of office life.

The authors apply a similar analysis to agents. Viewing humans as goal-pursuing agents hides the importance of the social nature of learning, taste, choosing, brokering, and negotiating. Only a fictional agent like HAL can handle all of these human foibles, which is why HAL remains in the realm of fiction.

Finally, I love the analysis of printed documents and the social importance of their fixity. Among other things, the authors say:
"Efficient communication depends not on how much can be said but on how much can be left unsaid -- and even unread -- in the background. And a certain amount of fixity, both in material documents and in social conventions of interpretation, contributes a great deal to this sort of efficiency."
This book is full of common sense. It deserves to become a strong and beneficial influence on the way we think about designing software and processes.

Extreme Context

Planning Extreme Programming by Kent Beck (Addison Wesley, Boston MA, 2000, 158pp, ISBN 0-201-71091-9, $29.95)

Extreme Programming Installed by Ron Jeffries, Ann Anderson & Chet Hendrickson (Addison Wesley, Boston MA, 2000, 286pp, ISBN 0-201-70842-6, $29.95)

The XP community has flourished since Kent Beck's first book on the subject appeared a little over a year ago. Addison Wesley labels the two new books as part of The XP Series, and as Kent Beck points out in his foreword to Extreme Programming Installed, the fact that somebody else wrote it bodes well for the future of the discipline.

Extreme Programming Explained is full of good ideas, but very concise, and Beck's new book is the same way. Extreme Programming Installed gives a more detailed and patiently elaborated view of how to do XP.

Because I discussed XP in my Nov/Dec 1999 column, I don't go into much detail about the basics here. The key tie to the theme of this column is the fact that, as Beck says, XP practices depend on human creativity and accept human frailty. They integrate the social support and informal communication that more mechanical methodologies might ignore or try to suppress. They use index cards and a few simple wall charts rather than lengthy requirements documents, design specifications, and project tracking software.

The XP books include little actual code. Occasional examples in Smalltalk, even as simple as they are, can put off many programmers, so I'm glad that Extreme Programming Installed contains two detailed examples of how to apply XP principles in the Java environment. I think all programmers -- whether they wish to adopt XP or not -- should read the chapters XPer Tries Java and A Java Perspective. These chapters convey a sense of the importance XP assigns to development tools, and they give a remarkably clear explanation of how to let testing drive design.
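The book's Java examples aren't reproduced here, but the idea of letting tests drive a design is easy to sketch in any language. In this hypothetical Python illustration (the Cart class and its tests are mine, not the authors'), the tests are written first, and the class is grown just far enough to make them pass.

```python
# Sketch of test-first development (the book's examples are in Java;
# this Cart class is a hypothetical illustration, not the authors' code).
import unittest

class Cart:
    """The smallest implementation that makes the tests below pass."""
    def __init__(self):
        self._items = []

    def add(self, price):
        self._items.append(price)

    def total(self):
        return sum(self._items)

class CartTest(unittest.TestCase):
    # In XP these tests come first; each one forces a design decision.
    def test_empty_cart_totals_zero(self):
        self.assertEqual(Cart().total(), 0)

    def test_total_sums_item_prices(self):
        cart = Cart()
        cart.add(10)
        cart.add(5)
        self.assertEqual(cart.total(), 15)

# Run the tests programmatically (a test runner would normally do this).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CartTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```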

XP is simple and powerful. Get these books and read all about it. Then find a group that feels the same way, hook into the XP community, and put XP to work.

Heavyweight Context

Contextual Design by Hugh Beyer and Karen Holtzblatt (Morgan Kaufmann, San Francisco CA, 1998, 496pp, ISBN 1-55860-411-1, $44.95)

This admirably clear book patiently and thoroughly describes a methodology for gathering and using the data necessary for true customer-centered design. By contrast with XP, which adds a customer to the design team, CD starts from the premise that customers can't represent themselves in the design process. When they are outside their usual work environments, they can't explain what they do or what really matters to them. Furthermore, they don't understand the details of developing the software for their own areas, let alone the other issues that developers have to keep straight.

Whereas in XP the customer speaks with a single voice, CD assumes that there are many customers and that nobody in the customer organization can represent them. The only way to produce a system that supports all of their needs is to understand all of their needs.

For these reasons, CD begins with a research project: find out everything about how each user does his or her work. An interviewer goes to the customer's work site and, for several hours, acts as both an apprentice to the interviewee and the interviewee's partner in documenting the work practice. The interviewer compiles and interprets the information, then goes over it with the interviewee until they reach a common understanding.

Interviewers repeat this process for each class of worker until they have a complete set of data. They then produce narrative descriptions and graphical representations of a variety of models and arrangements of the data.

A key to the process is ensuring that the developers fully understand and assimilate the results of the customer study. This becomes the data that developers turn to when they have questions or disagreements.

This summary hardly does justice to the entire CD methodology, but the book covers every aspect of it thoroughly. Following a pattern that suggests that they applied their methodology to their own process, the authors make it easy for you to survey CD at the executive summary level or to delve into the details of areas that interest you. At every level the writing is clear and the concepts are easy to understand.

This book is not for everyone, but if you are an IT manager or the head of a large consulting organization, you should assign someone to investigate whether CD can help your organization understand and serve your customers.