Tuesday, September 1, 2015

DITA


The Darwin Information Typing Architecture (DITA) emerged from a long line of internal IBM projects based on an earlier markup language called SGML. Approximately 10 years ago, IBM bequeathed DITA to the world as an open standard, now maintained by OASIS. You can read all about it at dita.xml.org. DITA is based on the idea of semantic markup, that is, embedding metadata in a document to describe the structural roles of its elements without prescribing formatting for those elements. This has many benefits but can impose a large overhead cost on writing projects. Managers of small projects find it hard to justify that overhead, but tools keep getting better and simpler.

DITA is highly flexible, but in its most common use it marries semantic markup with another long-developing trend in technical writing: topic-based writing, that is, writing small, independent topics that can be assembled into documents and help systems by means of external structural descriptions called maps. This enables reuse and single-sourcing. When the resulting material needs to be translated into additional languages, this approach can save large sums of money. If you say something in only one place, then you don’t have to translate many similar versions of the same information.
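
To make that concrete, here is a minimal sketch of a DITA map (the file names and titles are invented for illustration). A map is little more than a titled, ordered list of pointers to topic files; a different map can assemble the same topics into a different deliverable:

  <?xml version="1.0" encoding="UTF-8"?>
  <!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
  <map>
    <title>Widget User Guide</title>
    <topicref href="widget-overview.dita"/>
    <topicref href="installing-widget.dita">
      <topicref href="widget-prerequisites.dita"/>
    </topicref>
    <topicref href="widget-commands.dita"/>
  </map>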

DITA’s version of topic-based writing rests on the idea that each topic can consist purely of one type of information: concepts, reference material, or procedural instructions (which DITA calls tasks). Unlike arbitrary XML extensions, DITA customizations are built through a disciplined system of specialization and constraints, so projects with different customizations can still make sense of each other’s markup and share topics.
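
For illustration, here is roughly what one of those typed topics -- a task, DITA’s name for a procedural topic -- looks like (the id, title, and step text are made up). The markup identifies the role of each element -- a prerequisite, a step, a command -- and says nothing about fonts, numbering, or layout:

  <?xml version="1.0" encoding="UTF-8"?>
  <!DOCTYPE task PUBLIC "-//OASIS//DTD DITA Task//EN" "task.dtd">
  <task id="installing-widget">
    <title>Installing the widget</title>
    <taskbody>
      <prereq>You need administrator privileges.</prereq>
      <steps>
        <step><cmd>Download the installer.</cmd></step>
        <step><cmd>Run the installer and follow the prompts.</cmd></step>
      </steps>
    </taskbody>
  </task>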

DITA also provides mechanisms for decoupling cross-references from content, making sharing and reuse easier. Using maps to define documents as combinations of topics is one aspect of that decoupling. The other is an indirection method called keys, which confines dependencies to maps. A topic can refer to another topic -- or even a bit of text -- using a key, and different maps can associate that key with different topics or bits of text. The referring topic itself never needs to change.
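
Here is a hedged sketch of how that indirection looks in DITA 1.2 markup (the key names, file names, and text are invented). The map binds keys to a topic and to a bit of text; topics refer only to the keys, so repointing a key in another map redirects every reference without touching the topics:

  <map>
    <title>Widget User Guide</title>
    <!-- bind a key to a topic -->
    <keydef keys="install-task" href="installing-widget.dita"/>
    <!-- bind a key to a bit of text -->
    <keydef keys="product-name">
      <topicmeta><keywords><keyword>Widget Pro</keyword></keywords></topicmeta>
    </keydef>
    <topicref href="widget-overview.dita"/>
  </map>

Inside a topic, the references name only the keys:

  <p>See <xref keyref="install-task"/> before you start.</p>
  <p>Thank you for choosing <keyword keyref="product-name"/>.</p>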

While you are free to define your own means of transforming a map and the topics it refers to into a document, most projects build their DITA production on top of the DITA Open Toolkit (www.dita-ot.org), a set of Java-based open source publishing tools. The combination of DITA and the toolkit presents a steep learning curve for most writers, and the available support – not bad, but about average for open source projects – does little to ease the climb. This situation cries out for third-party books, and there are a few.

In the July/August 2014 Micro Review I recommended DITA Best Practices: A Roadmap for Writing, Editing, and Architecting in DITA by Laura Bellamy et al. It’s an excellent book, but I have nothing new to say about it. In the July/August 2006 Micro Review, I wrote about the first edition of the Comtech book Introduction to DITA -- A User Guide to the Darwin Information Typing Architecture. DITA was just out, and the book showed signs of being rushed into print. This time I look at the second edition. Finally, anyone who wants to understand the thinking behind the DITA standard should read Eliot Kimber’s book, DITA for Practitioners, supposedly the first of two volumes, though it has been out for more than three years and he hasn’t started writing the second volume yet. I talk about that book here as well.


DITA for Practitioners, Volume 1: Architecture and Technology by Eliot Kimber (XML Press, Laguna Hills CA, 2012, 348pp, ISBN 978-1-937434-06-9, xmlpress.net, $29.95)

Eliot Kimber really knows DITA. He is a DITA consultant and a voting member of the OASIS DITA Technical Committee. He has written a book for “people who are or will be in some way involved with the design, implementation, or support of DITA-based systems.” The book is not for authors who just want to use DITA, though everyone who works with DITA can benefit from learning its architecture and main technical features. For example, many authors would benefit from understanding the indirect addressing provided by keys, but books aimed mainly at authors usually tiptoe around that topic. Because I have a technical background that includes system architecture and design, this is my favorite DITA book. But I certainly understand why DITA users without that background might prefer books that more specifically target their needs and concerns. JoAnn Hackos’s book, described elsewhere in this column, is closer to that category.

Kimber makes the point that just as there are many XMLs -- making teaching someone to use XML difficult -- there are also many DITAs. Authors of how-to books must pick a specific way of using DITA (usually, something akin to designing topic-based online help systems) before they can provide clear, simple instructions and examples. Kimber’s approach is to survey the architecture as an introduction to the DITA standard, focusing on the parts that might confuse experienced XML practitioners. With that background you can then read the standard. With this approach it might be months before you can apply DITA to your documentation projects, but when you do, you’ll know what you’re doing, why you’re doing it, and how to investigate and correct problems.

Fortunately, Kimber provides an intermediate path. The longest chapter of his book (102 pp) is a tutorial, though a more conceptual than procedural one. It covers all the main steps in producing a DITA-based publication. Reading it exposes you to the main aspects of using DITA. His procedural steps, however, are not always simple and direct. Here, for example, is a step in a procedure to reuse the topics of an online help system to create a printed version:

4. In the DOCTYPE declaration, change “DITA Map//” to “DITA BookMap//” and “map.dtd” to “bookmap.dtd”

Note the uppercase “M” in BookMap.

You don’t actually have to change “map.dtd” to “bookmap.dtd” because you should always be resolving the public ID, not the system ID, for the DTD. But people will get confused if you don’t change it.
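
For readers who have not stared at these declarations, here is roughly what the change amounts to (the root element name also changes from map to bookmap, a detail this particular step doesn’t mention). Before, a map begins:

  <!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">

and afterward the bookmap begins:

  <!DOCTYPE bookmap PUBLIC "-//OASIS//DTD DITA BookMap//EN" "bookmap.dtd">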

The best thing about this book is the sense it gives you of an ongoing technical conversation within the DITA community. For example, in discussing the DITA 1.2 key reference facility, he talks about a limitation in the way DITA constructs the global key space, then adds, “Without [the limitation], we would not have had time to get any indirection facility into DITA 1.2.” This tells me that key scoping is not a mysterious fact of life, set in stone, but a technical feature that DITA architects continue to try to make more flexible.

Sometimes the conversation goes against Kimber. For example, he notes that many DITA users use keys for variable text like product names. He points out that this implementation falls short of how programmers expect variables to behave and advocates that DITA provide a separate variable mechanism – a position that the rest of the DITA Technical Committee disagrees with. This sort of information is fascinating, though of little practical use to readers. It is one of the ways in which this book is like no other DITA book.

If you really want to know how DITA works, if the idea of understanding and even participating in this kind of technical conversation appeals to you, you should read this book.



Introduction to DITA, 2nd ed: A User Guide to the Darwin Information Typing Architecture Including DITA 1.2 by JoAnn Hackos (Comtech, Denver CO, 2011, 430pp, ISBN 978-0-9778634-3-3, www.comtech-serv.com, $50.00)

JoAnn Hackos’s name did not appear on the first edition of this book, but she founded Comtech Services in 1978 and has been its leader ever since. She is the author of several well-known and highly respected books on managing technical communication. She is a Fellow and former President of the Society for Technical Communication (STC). She is known for being thorough and methodical – in her books and in highly regarded seminars, workshops, and conferences. Her workshops are expensive, but people seem to find them worth the price.

This book is much more clearly a tutorial than Kimber’s book, but Hackos does not aim just at authors. She includes tutorials for system architects as well. She covers every aspect of setting up and using DITA to support topic-based authoring, but she says little about the technical decisions that underlie the publishing system she helps you set up. She calls the book a reference manual as well as a learning tool; that is true in the sense that most readers will not go through all of the tutorial topics. They will learn the basics, start writing their own documents, then come back for the more advanced parts when they run into something they don’t know how to do.

Hackos spells everything out, and the result uses the print medium inefficiently. This is typical of workshop handouts, which are often distributed as large three-ring binders, but not so common in published books. If you buy this book, you pay extra for the redundancy, but you’re never in doubt about the context of what Hackos is saying.

While Hackos is careful about the technical accuracy of her examples, the text is, surprisingly, not well copyedited. I bought my copy of the book directly from Comtech just a few weeks before I started writing this column – years after this edition came out – but the book still contains errors that a competent editor could have corrected before publication, or even in a subsequent reprinting. Sadly, the lack of editing of technical books is widespread, but given the cost of this one and the prominence of its author, I’m disappointed that the editing isn’t better.

Many readers will find this book too thorough and methodical for their taste. They will be frustrated by the slow pace of the tutorials. But if you persist, you will know the basics, and the later chapters cover material that most how-to DITA books don’t. If you’re new to DITA and you want to buy just one DITA book, this one is a good choice.


Windows 10


Recently Microsoft started bombarding me with notices about Windows 10. I had been running Windows 7 and had seen – and disliked – instances of Windows 8.1. I am usually cautious about operating system upgrades. I wait until the new version has been out a while. But I had heard good things about Windows 10 and sensed that Microsoft was making a special effort, so when the little window popped up at the bottom of my screen to tell me that my free upgrade to Windows 10 was on my machine and ready to install, I said “Go for it!”

I had seen a number of posts about how to respond to the Windows 10 privacy options. So when the installer asked if I’d like the default settings, I said no and turned off anything that seemed at all problematic. I am sure there are other settings that they don’t let you turn off as easily, or at all, but I felt I had done what I could. If you search online for information about Windows 10 privacy settings, you should find lots of guidance.

The installation and startup were the simplest and smoothest I have ever seen, and I have seen all Windows upgrades since version 3.1. When it was done, everything was in place and running, and it was hard to notice the small differences from Windows 7. I have been running Windows 10 for more than a week and have had no trouble. Chrome and Firefox quickly adapted, and it wasn’t hard to turn off Edge (Microsoft’s replacement for Internet Explorer).

Once everything was running, I upgraded Office to Office 2013. That also went relatively smoothly, though I had some trouble with Outlook PST files. Microsoft had known for more than a year, but didn’t bother to tell me, that I had to upgrade them explicitly to Office 2013 format. When I did so, they worked fine.


That one glitch aside, I am amazed at how smoothly it all went. Watch out for the privacy settings, but if you run Windows, be sure to upgrade to Windows 10.

This article appears in slightly different form in the Sept/Oct 2015 issue of IEEE Micro © 2015 IEEE.

Friday, May 1, 2015

Writing Well


This time I review an unusual style guide, but to fully understand it, you should know about -- and, I hope, look at -- four other books, which I discuss briefly in notes at the end.

The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century by Steven Pinker (Viking, NY, 2014, 368pp, ISBN 978-0-670-02585-5, www.penguin.com, $27.95)

Steven Pinker is a cognitive scientist, linguist, and -- as the dust jacket of his book announces -- public intellectual. He is the author of many well-known books, and he chairs the usage panel of the American Heritage Dictionary. With these credentials in hand, he sets out to solve one of the most vexing problems of our day: bad writing. Not just any old bad writing, but bad writing by smart, well-educated people with significant things to say.

Pinker loves reading and writing English. He reads style guides and plays with words. The title of his book is a play on two senses of the word "sense." He wants to help you develop an intuition for how to write well, but he also wants to explain how stylistic choices arise from underlying principles of cognitive psychology and an understanding of English grammar. By "grammar" he does not mean the hodge-podge of rules, shibboleths, and hobgoblins formerly taught in schools and still perpetuated by most traditional style guides. He means the research-based discoveries and formulations of Huddleston & Pullum's Cambridge Grammar, which substantially revises the vocabulary of English grammar. If you do not want to invest $250 and many hours of your time to read a 1200-page grammar book, turn to the glossary of Pinker's book for a summary of the grammatical categories and functions that underlie the Cambridge system. Reading that glossary before reading the main text helped me understand Pinker’s arguments more quickly as I went along.

Bad writing and how to fix it


So how does Pinker hope to stanch the torrent of bad writing?  If you want the punch line without Pinker's significant contributions, start by reading Thomas & Turner's Clear and Simple as the Truth. The authors describe the classic style, in which the writer knows the truth about some subject and presents it to the reader without bias, as if in a conversation between equals. The reader may not previously have noticed this truth, but immediately recognizes it. The presentation is like a clear, undistorting window. The writer shows but never tries explicitly to persuade. Pinker says that classic style is the strongest cure he knows of for "the disease that enfeebles academic, bureaucratic, corporate, legal, and official prose."

A great virtue of the classic style is that it describes its subjects with fresh wording and concrete images. Pinker quotes a few paragraphs from a book by physicist Brian Greene to show that the style can be a perfect vehicle for explaining highly complex and abstract topics. Greene makes the abstractions concrete without oversimplifying them.

Incidentally, classic style is close to the style that technical writers aspire to, as exemplified in Jean-luc Doumont's Trees, Maps, and Theorems.  But the styles differ in that technical writers and readers are not engaged in conversations between equals. Readers seek specific information, and technical writers, as experts, provide it. They often use standard, predictable structures to enable readers to find information quickly, while classic style does not dictate specific formats. Also, most technical writers are taught to avoid passive voice, but the classic style freely uses the passive when it improves clarity.

So what is the disease for which classic style is the cure? Pinker calls it the curse of knowledge, a term he borrows from economics. All writing guides tell you to “consider your audience,” but audiences are made of different people with different levels of knowledge. The set of things we can safely assume they know is far smaller than most writers think. As Pinker puts it, "The main cause of incomprehensible prose is the difficulty of imagining what it's like for someone else not to know something that you know." There are other causes, of course, but Pinker argues that the best-known suspects (per Calvin and Hobbes) – “to inflate weak ideas, obscure poor reasoning, and inhibit clarity” – are minor contributors, as are stodgy academic style guides.

The curse of knowledge puts specific pitfalls in a writer's path: jargon and abbreviations, chunking, and functional fixity. Every field has its own vocabulary, but replacing jargon with a plain term can often improve the clarity of your prose without making you seem less credible to your peers. Some acronyms and abbreviations can be replaced with their fully spelled out forms -- wasting a little space but helping many readers grasp the material more quickly. Your peers know less than you think they do, and even those who have seen a technical term or abbreviation may not recognize it instantly.

Chunking is gathering simpler concepts into more abstract ones with their own names and properties (for example, the Federal Reserve Bank buys risky mortgages to make bankers’ lives easier, and we refer to that action as "quantitative easing"). Chunking is essential to thinking clearly about complex subjects, but it often leads you to substitute nouns for verbs, thus making prose harder to understand. And if you mention a chunk that a reader doesn't recognize, that reader may be unnecessarily derailed.

Functional fixity is focusing on how you use something, rather than seeing it as the kind of tangible object that classic style calls for. Pinker gives the example of a researcher who showed people sentences followed by the words TRUE or FALSE. In the paper that described this research, the researcher called that action "the subsequent presentation of an assessment word." But research shows that people remember facts presented in concrete terms better than they do the same facts presented abstractly. Pinker suggests, for example, changing a functional phrase like "participants were tested under conditions of good to excellent acoustic isolation" to a concrete phrase like "we tested the students in a quiet room."

One easy antidote to the curse of knowledge is to ask someone else to read what you've written (or, as you should not put it, conduct informal usability studies on your composed output). You don’t have to accept every suggestion -- your friends have blind spots and hobbyhorses too -- but you may be surprised at how hard your prose is for them to understand.

As you strive to overcome the curse of knowledge, your next challenge is to put together comprehensible text. A style of syntax diagramming created in the 1870s was taught in American schools recently enough that many people still remember it and bemoan its loss. Pinker, however, celebrates its loss, because it is unintuitive, ambiguous, and based on an outmoded view of grammar. The Cambridge Grammar syntax diagrams, which Pinker uses, are based on psycholinguistic studies of how people process language. These syntax trees are the first of the two kinds of trees Pinker uses to show how we map the interconnected words and concepts in our heads into syntactically correct English sentences that others can understand. They give Pinker a way to show graphically why some sentences are incorrect or hard to understand and to explain how to correct those problems. They also help him illustrate how poorly some writers of style guides understand English grammar.

One problem made evident by considering syntax diagrams is what Pinker calls garden paths. Here, the same sequence of words might result from two different diagrams. For example, “fat people eat accumulates” has two readings, one of which can be eliminated by inserting the word “that” before “people.” Pinker advocates inserting such “needless words” into sentences to make them clearer. He also advocates reordering techniques to support what he calls monumental principles of composition:

 * Save the heaviest or most difficult information for last.
 * Introduce the topic before commenting on it.
 * If the sentence contains both old and new information, put the old information first.

Chief among these reordering techniques is the passive voice. Pinker recognizes the problems that have given passive voice a bad name, but he also provides examples in which the passive-voice version is clearer and more graceful than active-voice alternatives.

The second kind of tree describes a document and helps us organize our thoughts into coherent arguments. A weak understanding of modern English grammar may give rise to lots of nonsensical stylistic advice, but a bigger cause of bad writing is fuzzy thinking. The document-level trees are outlines of coherent themes, deductions, and generalizations. Even if you don’t commit either kind of tree to paper, keeping them in mind can help you construct texts that readers can easily understand and follow. Incidentally, these trees are essentially the ones Doumont talks about in Trees, Maps, and Theorems.

Document-level trees help solve a problem that Pinker describes as follows: “Even if every sentence in a text is crisp, lucid, and well formed, a succession of them can feel choppy, disjointed, unfocused – in a word, incoherent.” An outline, which Pinker calls a tree lying on its side, shows the hierarchical structure of your ideas, but while English grammar limits word order in sentences, no syntax rules control the order of ideas in a document. Nor must all documents be hierarchical. Sometimes you want to develop several themes in parallel, and even if you have only one theme, the sentences you produce are related to the sentences around them in various ways. You have a complex network of ideas in your head, and you hope that by writing sentences you enable readers to integrate parts of that network into their own mental networks. Pinker uses the term “arcs of coherence” to describe the parts of a document that don’t follow the tree structure but, as he puts it, drape themselves from the limbs of one tree branch to the limbs of another.

To help explain how to construct coherent texts, Pinker focuses on the idea of a topic. The topic is what gives a sequence of ideas its point. If readers don’t know the topic of the sentence they are reading, they are no longer on the same page as the writer. Pinker picks apart an incoherent introduction to a highly regarded book to make this point with excruciating clarity.

Pinker refers to Joseph Williams' Style: Toward clarity and grace as a source of practical advice on how to manage the complexity of multiple themes running through a document. One important technique is to call the same thing by the same name. Another is to explain how each theme relates to the topic, so readers understand why you’re talking about it. For example, if you think Jamaica is like Cuba because it is a Caribbean island and that China is like Cuba because it has a communist government, you can’t just write “countries like Jamaica and China” without saying that you’re lumping them together because each shares a characteristic with Cuba.


The style guide


The final third of Pinker’s book is devoted to the topics that arise in traditional style guides: rules of correct grammar, word choice, and punctuation. It gives Pinker a chance to express some of his own pet peeves and to add a little prescriptivist seasoning to the descriptivist underpinnings of the book. This section is not meant to replace the Chicago Manual of Style, but rather to provide data and principles to help you make choices.

Pinker ridicules the supposed war between descriptivists and prescriptivists, in which the prescriptivists fight to stave off the obvious decline of our language, while the descriptivists accelerate the decline by endorsing abominations like ain’t, brang, and can’t get no. According to Pinker, the purpose of prescriptive rules is not to tell people how to speak or write but to codify the tacit conventions of a specialized form of the language, namely, standard written English. While explaining the importance of prescriptive rules, he rejects the idea that “every pet peeve, bit of grammatical folklore, or dimly remembered lesson from Miss Thistlebottom’s classroom is worth keeping.” He calls these bubbe meises, Yiddish for grandmother tales, and he cites their principal sources:

  * English should be like Latin
  * Greek and Latin must not mix
  * Backformations are bad
  * Meanings can’t change (the etymological fallacy)
  * English must be logical

I don’t have room to go into his debunking of these “rules.” Read the book for that.

Pinker provides “a judicious guide to a hundred of the most common issues . . . in style guides, pet peeve lists, . . ..”  He groups the issues into grammar, expressions of quantity and quality, word choice, and punctuation, and he brings his expertise to bear on them.  For example, he talks about problems that arise from the fact that coordination is headless in the syntax tree. Thus Bill Clinton said “Give Al Gore and I a chance to bring America back,” and few people registered it as unusual; if he had said “Give I and Al Gore a chance,” everyone would have been startled.  I found all 100 issue discussions fascinating, and I hope you’ll get the book and read them.

This book is not a traditional style guide. You can’t go to it for definitive rules or cite it to defend your stylistic choices. But it does provide a framework and basis for thinking about stylistic issues. It gave me a lot to think about, and if you want to write English prose, it will probably give you plenty to think about too. I recommend it.


Books referred to


[Doumont] Trees, Maps, and Theorems: Effective communication for rational minds by Jean-luc Doumont (Principiae, 2009)  I reviewed this book in the Sep/Oct 2011 Micro Review. It is still the book to read if you can only read one book about technical communication. Doumont focuses on how to organize and present technical information. He has almost nothing to say about grammar or word choices.

[Huddleston & Pullum] The Cambridge Grammar of the English Language by Rodney Huddleston and Geoffrey Pullum (Cambridge, 2002). The authors describe it as "a synchronic, descriptive grammar of general-purpose, present-day, international Standard English." This would be a good example of the curse of knowledge, but the authors mercifully explain all of those terms.

[Thomas & Turner] Clear and Simple as the Truth: Writing Classic Prose by Francis-Noël Thomas and Mark Turner (Princeton, 1994). Thomas and Turner describe the classic style in terms of the choices it makes about certain basic elements -- like the relationship between reader and writer and whether truth can be known. They provide many examples of classic style and contrast it with styles that differ from it in varying degrees.

[Williams] Style: Toward clarity and grace by Joseph Williams (Chicago, 1990). The author's stated goals are to help writers move from a first draft to a version crafted for readers, diagnose the causes of bad writing and overcome them, and handle complexity. Williams began the work as a textbook and was approached by the University of Chicago Press to make it available to a wider audience. While most popular guides are aimed at beginners, Williams addresses the issues that seasoned writers must master to move to the next level.



This article appears in slightly different form in the May/Jun 2015 issue of IEEE Micro © 2015 IEEE.

Thursday, January 1, 2015

The Future of Work


This time I look at a book that describes the author's experience working for a company with essentially no physical offices and with workers all over the globe. He draws some conclusions about the future of work.

The Year Without Pants: WordPress.com and the Future of Work by Scott Berkun (Jossey-Bass/Wiley, San Francisco, 2013, 266pp, ISBN 978-1-118-66063-8, www.josseybass.com, $26.95)

In the July/August 2010 Micro Review, I briefly discussed Scott Berkun's  Confessions of a Public Speaker, a book he wrote while trying to make a living as a talking head. But in the 1990s Scott distinguished himself as a development manager at Microsoft, where he was instrumental in making Microsoft's belated embrace of the web and browsers successful. His other books qualify him to be called a management guru, so it was with trepidation that he stepped back into a management job.


The back story

About the time my review of his last book came out, Berkun was a WordPress blogger and a consultant to Matt Mullenweg, the creator of the WordPress blogging software and founder of Automattic (note the extra "t" so the company name includes Mullenweg's given name). Automattic runs wordpress.com, one of the most popular sites in the world. Approximately half of all WordPress-based blogs are hosted there for free. Mullenweg wanted to try a new organizational approach within Automattic. Partly as a result of Berkun's advice, he split the company into ten teams, and he invited Berkun to lead one of them. Berkun agreed to join the company as an employee. Going in, he made it clear that he would leave to write this book in approximately a year. He wound up staying for a year and a half, spending the last few months as a team member after recommending that one of his team members be promoted to succeed him.

The book tells a fascinating story -- fascinating because of both the personal details and the company's unique organization. In the early 1980s I read Tracy Kidder's _The Soul of a New Machine_, and the personal side of Berkun's book reminds me of Kidder's story. Kidder was a reporter and not a participant, but he did see some of the same dynamics at work as the ones Berkun describes. The workers who were passionate about the goal made the project succeed by working behind the backs of the hard-driving project managers. At Automattic, there are no hard-driving managers, and everything is out in the open -- almost painfully so -- but passion and commitment are the prime motivators.

As a development project leader in the 1960s, I read John Kenneth Galbraith's _The New Industrial State_. Galbraith said many things in that book, but the one I remember nearly 50 years later is that in order to succeed, companies must abandon top-down decision making and recognize that management will increasingly lack the knowledge needed to make day-to-day operational decisions. In this era of agile organizations, that seems like a quaint insight, but getting from there to here was a long, bumpy ride. Automattic, as described in Berkun's book, seems like the culmination of that journey.


A virtual company

In January 2003 Matt Mullenweg established the WordPress open source community by forking code from b2/cafelog, a GPL-licensed open source project  whose founder had stopped supporting it. Mullenweg's founding principles were transparency, meritocracy, and longevity. In August 2005, distressed about the existing options for deploying WordPress-based blogs, he founded Automattic with three community volunteers and no venture backing. They designed an anti-spam plugin called Akismet -- still one of the first things a new WordPress blogger installs -- and used income from that to keep Automattic afloat until they could obtain more substantial financing. Toni Schneider joined the company as CEO in November 2005, and he and Mullenweg jointly managed a totally flat organization until they created teams in 2010, when the company had 60 employees.

Automattic has a simple business model. They sell upgrades to bloggers who want more than the many features they can get for free. They sell advertising on a few popular blogs, and they work special deals with premier clients like CNN, _Time_ magazine, CBS, and NBC Sports, which host their websites on WordPress.com.

Because of the way the company started, it was completely natural for everyone to work where they pleased. While the company eventually acquired highly desirable premises on Pier 38 in San Francisco, employees rarely used them, though Mullenweg occasionally called on locally based employees to come in as props when media representatives or premier clients came calling.

Mullenweg regards remote working as ideal. It flattens everything, producing higher lows and lower highs -- a generally more mellow experience. Automattic can afford to be a low-friction company because it supports the WordPress community and relies on satisfied customers. It feels little competitive pressure. It doesn't need schedules because it doesn't do marketing. It has minimal hierarchy, so decisions can be made with little fuss.

Most of the time employees communicated on IRC and their team blogs (known as P2s). While email was by no means prohibited, few Automattic employees used it, because it is closed. If you do not receive a copy of an email message, you have no way to find out about it. Every word ever typed on IRC or a P2 is archived and available to every employee.

The whole company held occasional all-hands get-togethers face to face in exotic places, and teams did the same somewhat more frequently. A tradition for these events, which usually lasted several days, was to decide on team projects to develop and publish before going home.


Berkun's role

Berkun's team was called Team Social. Their job was to invent things that made blogging and reading blogs easier. In his year leading that team, they developed Jetpack, a WordPress plugin designed to make wordpress.com features available to WordPress-based blogs hosted on other sites. It's the other first thing a new WordPress blogger installs. They also unified the commenting facilities of all WordPress blogs in order to integrate IntenseDebate, a popular commenting product that Automattic had acquired because it worked on other blogging systems as well.

The integration was called Project Highlander, a science fiction allusion suggesting a fight to the death between IntenseDebate and the other WordPress commenting facilities until only one survived. With 120 blog themes, WordPress had a large variety of ways of making and presenting comments, and those had to be unified before Project Highlander could succeed.

Project Highlander called on project management skills that Berkun brought to Automattic from his days at Microsoft -- skills that pushed Automattic in the direction of a more mature development process. This was a recurring theme of Berkun's time there. In terms of Eric Raymond's classic book _The Cathedral and the Bazaar_, Automattic had grown up at the Bazaar end of the spectrum. Berkun, based primarily on his time at Microsoft, brought in aspects of the Cathedral approach whenever that was a more effective way to approach a problem. Automattic had 60 employees when Berkun joined and 170 by the time he finished writing this book, so some evolution in the Cathedral direction was inevitable, but Berkun's expertise made it easier.

While embracing the Automattic way of working, Berkun also struggled against it. He had mastered the techniques of face-to-face interaction -- maintaining eye contact, reading body language, detecting emotional nuance, and so forth. He had to learn to compensate for the fact that virtual interactions made most of those techniques nearly impossible to use. In fact, one of his accomplishments was to move team meetings out of IRC and into Skype video.

Another problem Berkun identified, but really had no answer for, is the dynamics of online threads. You might make a thoughtful post about an important issue and see no responses. You have no idea whether anyone has read it or continues to think about it. Or someone might react to one small point in your post, and the thread mutates to focus on that point rather than on the one you set out to direct attention to. Berkun raised this issue by posting about it, and the responses frustratingly exhibited the very problems he hoped to highlight.

Berkun liked the company culture of fixing things immediately, but he noted that people respond to the most recent problem, and if something doesn't get fixed right away, it tends to be forgotten, regardless of its importance. Berkun tried to introduce a system of priorities that would make it more likely that tricky but important issues would not be swept under the rug. He hoped to engender more strategic thinking to go along with the company's tactical mindset.

Berkun also tried to institute some sort of usability testing. The programmers who worked on WordPress features generally came from the WordPress community, so they had reason to feel that they understood their target audience, but Berkun was able to identify many areas where users had difficulties that simple design changes would alleviate.


Highjinks

A major part of the story Berkun tells is about the people he worked with, how they worked together, and how they coalesced into an efficient team. Many of Berkun's anecdotes concern his team's meetings in New York, Seattle, and other, more exotic places.

Seen from the outside, the team seemed like a bunch of hard-drinking young men, a few years out of college (more than a few in Berkun's case), who enjoyed playing around the edges of trouble. For example, on the way to a bar in Athens after the one they'd been drinking in closed at 2:00 am, one team member miraculously escaped serious injury. On a dare he jumped between 3-foot high traffic bollards spaced 4 feet apart and missed his second jump, crashing toward the sidewalk. As Berkun describes it, "either through Australian training for drunk jumping or a special Krav Maga technique he'd learned, midfall he realized his predicament and managed to tuck and roll . . . The silver-dollar sized patch of skin missing from his elbow seemed a fair price to pay, and he was glad."

Despite this sort of incident, their meetings in exotic places were highly productive. Their time together seemed to fill a need that their usual distributed virtual interactions did not. Oddly, though, when working side by side, they often continued to communicate through IRC and their P2, as if they were continents apart.


Lessons

The first lesson learned from Automattic is that a virtual company can exist and be productive. It's not the only such company; GitHub has a similar distributed structure. But Google, the dominant force in Silicon Valley, believes in co-location and with few exceptions requires employees to work in the office, not remotely. With Marissa Mayer's move from Google to the helm of Yahoo, that meme has taken root at Yahoo as well. Many other Silicon Valley companies have also held that belief for years. Partly, they believe it's a more efficient way to develop software, and partly they don't trust their employees.

Trust is the key. Automattic believes in hiring great people, setting good priorities, granting authority, removing distractions, and staying out of the way. The way Automattic works makes it no harder to detect slackers than if you were looking over their shoulders every minute of the day. But most Automattic employees come from a tradition of working remotely on open-source projects. They are self-sufficient and highly motivated, passionate about what they hope to achieve. Their way of working might not work for everybody, but it works for them.

Berkun believes that Automattic has answered many questions that the working world is afraid to ask. Results trump traditions, and the most dangerous tradition is that work is both serious and meaningless, as exemplified by _Dilbert_. A short definition of work is "something I'd rather not be doing." Automattic's management -- with its vision, mission, and long-term thinking -- may be atypical, but they have given work meaning. Automattic's workers have great freedom and take great pride in their work. And as Berkun's anecdotes show, they have a lot of fun.

This short and seemingly lightweight book actually contains a lot of meat, and I haven't covered all of it here. If you're interested in the future of work, you should read it.

This article appears in slightly different form in the Jan/Feb 2015 issue of IEEE Micro © 2015 IEEE.