Sunday, January 1, 2017

Resistance is Futile

This time I look at a book that claims to reveal the shape of our technological future. The technological climate is changing. Ice is turning to water, and there’s no going back.

The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future by Kevin Kelly (Viking, NY, 2016, 336pp, ISBN 978-0-525-42808-4, $28.00)

Kevin Kelly has been on the forefront of the online connected world for more than 30 years. He contributed substantially to the Whole Earth Catalog, the WELL, the Hackers Conference, and Wired Magazine. In his 1994 book, Out of Control: The New Biology of Machines, Social Systems, and the Economic World, he was already exploring some of the themes that appear in The Inevitable. His 2010 book, What Technology Wants, (Micro Review March/April 2011) explores the idea that technologies have characteristics that make them likely to evolve in some directions but unlikely to evolve in others. This is the basis for “inevitable” in his current title. Kelly’s technological forces are what we might call trends, and because he describes them as processes, he assigns a present participle to each. This is a little contrived, as when he uses the participle “cognifying” to describe using artificial intelligence (AI) to make devices or processes more capable.

The trends Kelly describes are in plain sight, but what he reports as already happening on the leading edges of those trends was new to me. He also shows ways in which these trends interact and reinforce each other, magnifying the effect of each. For example, one of his themes is the maturing of virtual reality, which Kelly claims has not become more capable in nearly 30 years but has become much cheaper because of technologies developed for mobile phones. High-resolution screens and motion sensors cost a tiny fraction of what they cost in the late 1980s, so inexpensive systems now provide a realistic sense of being somewhere you’re not, while AI and big data enrich the experience.


Since at least as far back as the 1960s, AI has been just around the corner. We’ve turned that corner now, but not in the way many of us feared. Rather than the moody and self-protective HAL 9000 of Arthur C. Clarke’s 2001, we see application-specific bits of intelligence. Driven by our ability to handle big data and our drive to collect and annotate information about everything anyone does, artificial intelligence will be cheap and ubiquitous. As Kelly puts it, “Even a very tiny amount of useful intelligence embedded into an existing process boosts its effectiveness to a whole other level.” He imagines a grid from which you can take as much cheap, reliable AI as you need. Like electricity a century ago, this capability will spawn countless new businesses built on the model of adding AI to some previously unenlightened process. For example, intelligent clothing will communicate with washing machines to control water temperature, the amount of soap, and the intensity of the agitation and spin cycles. More significant than such trivial, but potentially lucrative, applications are AI-enhanced medicine, transportation, weather forecasting, and investing. Such applications exist now and will become more sophisticated tomorrow.

Kelly attributes the success of today’s AI, despite all the false hopes of the past, to three factors: cheap parallel computation, big data, and better algorithms. Hardware such as multi-core microprocessors and massively parallel graphics processing units (GPUs) makes possible analytic techniques that would have been prohibitively expensive with older equipment. Huge collections of data about everything from chess games to search results to tracking cookies make it possible for AI to learn and improve. Hierarchical algorithms, called deep learning, make full use of parallel computation to support such AI successes as IBM’s Watson or Google’s search engine.


Our digital culture is communal to a high degree – socialism, but without the state. Wikipedia, Creative Commons permissions, crowd funding, peer-to-peer loans, Tor, Digg, Reddit, Pinterest, and Tumblr are all examples of it. Many people contribute, and everyone consumes without charge. Apache and Linux have unpaid workforces the size of a small town. Over the last hundred years, free markets solved problems that governments could not. Now collaborative social technology is solving problems that the free market cannot. Google, Facebook, and Twitter depend on such collaborative contributions to provide valuable services free of charge, but make huge amounts of money by using AI and big data to deliver targeted advertising. The bottom-up model of user-generated content is a wonderful way for new ideas to evolve and bloom in niches, but as Wikipedia and other examples show, some top-down curation is necessary as collaborative projects grow and mature.

The nature of bits

Several of Kelly’s trends arise from the nature of bits. Because bits are ephemeral, they are easy to gather, duplicate, and rearrange. From this he deduces the inevitability that they will be gathered, duplicated and rearranged, despite attempts to prohibit these actions. This situation stresses our legal and social systems and changes our way of life. Ownership gives way to access. Like the hunter-gatherers we descended from, we’ll soon own nothing but have access to whatever we need. Solid products give way to fluid services that keep updating. A black touch-tone phone on a desk gives way to a continually updated smart phone with much of its data stored elsewhere. People consume music, movies, and “books” online rather than filling their homes with the concrete embodiments of those things. They actively mix, match, and hyperlink fragments of audio, video, text, and images into new creations that befuddle existing copyright laws. Because copies are free, you must make your living by selling trust, immediacy, personalization, discoverability, and other things that can’t be copied. Bits exhibit a network effect. A bit is more valuable if accompanied by metadata – other bits that describe it. A bit is more valuable if linked to related bits. The cloud draws its value from its ability to support and leverage the nature of bits.


Duplicating and rearranging bits is called remixing. According to Brian Arthur (Micro Review March/April 2011), all new technologies derive from combinations of older ones. The same goes for combinations of bits. Copying, rearranging, annotating, and linking to text is easy because of our tools. Future tools will enable us to do the same with images and video. Kelly envisions the ability to link from an article on Asian clothing to the fez worn by a character in the movie Casablanca. This will depend not just on new tools but on automated assignment of metadata to every bit of information in the cloud – a small extension of what Google already does. Already, trillions of photos are online, and AI has produced filmable 3D images of many things (for example, the Golden Gate Bridge) from those photos. In a gesture toward recognizing intellectual property and ownership amid all this remixing, Kelly alludes to Jefferson’s distinction between a house and an idea. He proposes to distinguish between copying and transforming and to give free license to the latter, a departure from current copyright law.


One form of gathering bits is tracking. Websites, cell phones, social media, and credit cards track our visits and actions, but we also track ourselves. We continually track our exercise, vital signs, and other measurements. This can lead to establishing baselines to support personalized medical treatments. We collect email, record public talks, and may someday, as a few people do now, automatically record all of our interactions. This can give us augmented memories of people, places, conversations, and events – an enhancement of our natural abilities that I’m sure we’ve all wished for at one time or another. In addition to data from tracking ourselves, the internet of things (IoT) gives rise to huge tracking possibilities. Kelly recognizes two models for tracking. In the big brother model, “they” know everything about you. You know very little about them. In the small-town model, tracking is more transparent. You know who’s watching you, and you have a good sense of what they’re planning to do with the information. Bitcoin and public key encryption illustrate the small-town model.

Kelly imagines a slider you can use to control the balance between privacy and openness in your public dealings, and he points out that most people seem to prefer openness. This is because the more you reveal of yourself, the more personally others can treat you. The more private you are, the more generic the services you receive. When it comes to anonymity, the ultimate privacy, Kelly says it’s like a heavy metal – essential to your nutrition, but fatal if you get too much. Everyone who has read anonymous online comments knows this. Privacy depends on trust, which requires a persistent identity.

I find Kelly’s vision of the future of tracking a little ominous. He sees the volume of tracked data growing to the size of an elephant, compared with the mote of dust we track now. This qualitative difference puts it beyond what humans can comprehend – if it isn’t already. He sees all of it reorganized into structures that only machines and AI can work with. They will parse this huge body of information into tiny elements and recombine them in unimagined ways. How we will relate to this planet-sized machine is unclear.


Our attention is a scarce commodity. Humans have limited capacity, and there is little we can do about that. According to Kelly, each year brings 8 million new songs, 2 million books, 16,000 films, 30 billion blog posts, and 182 billion tweets. Many filters are available – for example, the Amazon or Netflix recommendation engines, driven by big data and AI – but even if you filter out everything that isn’t perfect for you, you still don’t have time to consume it all. And behind the scenes, a gigantic filtering mechanism matches advertisers with opportunities, trying to show you ads you’re likely to respond to. This is a multi-billion-dollar industry.

Filtering can lead to the kinds of sharp divisions seen in political systems. If different groups see only material that comports with their views, those groups might stop listening to each other and ultimately live in separate realities. Kelly offers no answer to this problem.


This is a technology book, not a political one, but as anyone who paid attention to the 2016 US election knows, people care about jobs. Kelly believes that humans should not do anything that machines can do. Rather we should work alongside robots, treating them, a la McLuhan, as extensions of ourselves. He is sure that we will dream up new jobs that we can’t imagine now, just as farmers of the early 1800s could not imagine the jobs of their twenty-first century progeny. This may be true, and perhaps that’s all he needs to say, but it certainly raises questions. For example, can an economic system that efficiently allocates scarce resources adapt to the case in which resources are plentiful and necessary jobs are few?


Raising questions ties to one of Kelly’s themes. Today, humans ask the internet two trillion questions each year and get good answers – a service that’s valuable but free. That number will grow rapidly as technology enables answers to more personal questions like “Where’s Jenny?” or “When is the next bus?” Kelly thinks search will become an essential universal commodity in the next few years. But answers are not as important as questions. Kelly quotes Picasso as saying in 1964 that computers are useless because they only give answers. Someday computers may be good at asking questions, but for now, this is one of the jobs Kelly reserves for humans. For example, humans will probably always be better at asking, and answering, questions about what humans would like to do with their free time or how they’d like to use each new technology. Kelly makes an analogy between surfing the internet and dreaming. Both feature quick changes of focus and mix the real and unreal. Both blur the distinction between work and play. Both seem like a waste of time, but both can lead to novel juxtapositions of ideas. Ultimately, they can engender questions as profound as Einstein’s asking what you would see if you were travelling on a beam of light. It may be a long time before machines can ask questions like that.

The holos

Kelly’s timeline for most of the inevitabilities he discusses is the next 30 years, but he describes this as the beginning of a century-long process. All his trends merge into one large invention: a new mind for our old species. The new mind has planetary scope and gives us perfect search and total recall. He calls it the holos and defines it as “the collective intelligence of all humans combined with the collective behavior of all machines, plus the intelligence of nature, plus whatever behavior emerges from this whole.” The hardware of the holos already comprises a sextillion transistors, a trillion times the number of neurons in a human brain. And everyone who surfs the web teaches the holos something about what we consider important. By 2025, Kelly estimates, 100% of the planet’s population will have nearly free access to the holos.

Kelly likens these developments to a phase change, like the transition from ice to water. He rejects the term “singularity” in the sense of an exponential growth of AI that makes humans irrelevant. Instead he sees a symbiotic relationship between humans and technology. The details of how it works are unknowable, but the general direction, in his view, is unmistakable. Time will tell whether he is right or wrong, but reading his book is an eye-opening adventure. I recommend it highly.

This article appears in slightly different form in the Jan/Feb 2017 issue of IEEE Micro © 2017 IEEE.

Tuesday, September 1, 2015


The Darwin Information Typing Architecture (DITA) emerged from a long line of internal IBM projects based on an earlier markup language called SGML. Approximately 10 years ago, IBM bequeathed DITA to the world as an open source project. You can read all about it on the OASIS website. DITA is based on the idea of semantic markup, that is, embedding metadata in a document to describe the structural roles of its elements without prescribing formatting for those elements. This has many benefits but can impose a large overhead cost on writing projects. Managers of small projects find it hard to justify that overhead, but tools keep getting better and simpler.

DITA is highly flexible, but in its most common use it marries semantic markup with another long-developing trend in technical writing: topic-based writing, that is, writing small, independent topics that can be assembled into documents and help systems by means of external structural descriptions called maps. This enables reuse and single-sourcing. When the resulting material needs to be translated into additional languages, this approach can save large sums of money. If you say something in only one place, then you don’t have to translate many similar versions of the same information.

DITA’s version of topic-based writing rests on the idea that each topic can consist purely of one type of information: concepts, reference material, or procedural instructions. Unlike its underlying markup language, XML, DITA uses a system of specialization and constraints rather than arbitrary extensions, so different DITA projects can make sense of each other’s customizations. This makes it easy for DITA-based projects with different customizations to share topics.

DITA also provides mechanisms for decoupling cross-references from content, making sharing and reuse easier. Using maps to define documents as combinations of topics is one aspect of that decoupling. The other is an indirection method called keys, which enables dependencies to be confined to maps. A topic can refer to another topic -- or even a bit of text -- using a key, and different maps can associate that key with different topics or bits of text. The contents of the topic do not need to change.
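For readers unfamiliar with DITA markup, here is a minimal sketch of that indirection (the file and key names are invented for illustration): a map binds a key to a resource, and topic content refers only to the key.

```xml
<!-- release-a.ditamap: bind the key "prodname" to one resource -->
<map>
  <keydef keys="prodname" href="widget-pro-name.dita"/>
  <topicref href="installing.dita"/>
</map>

<!-- inside installing.dita: refer to the key, not the file -->
<p>Plug the <ph keyref="prodname"/> into a grounded outlet.</p>
```

A second map can bind prodname to a different resource, and installing.dita itself never changes.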

While you are free to define your own means of transforming a map and the topics it refers to into a document, most projects build their DITA production on top of the DITA Open Toolkit, a set of Java-based open source publishing tools. The combination of DITA and the toolkit presents a steep learning curve for most writers, and the available support – not bad, but about average for open source projects – makes the climb even harder. This situation cries out for third-party books, and there are a few.

In the July/August 2014 Micro Review I recommended DITA Best Practices: A Roadmap for Writing, Editing, and Architecting in DITA by Laura Bellamy et al. It’s an excellent book, but I have nothing new to say about it. In the July/August 2006 Micro Review, I wrote about the first edition of the Comtech book Introduction to DITA -- A User Guide to the Darwin Information Typing Architecture. DITA was just out, and the book showed signs of being rushed into print. This time I look at the second edition. Finally, anyone who wants to understand the thinking behind the DITA standard should read Eliot Kimber’s book, DITA for Practitioners, supposedly the first of two volumes, though it has been out for more than three years and he hasn’t started writing the second volume yet. I talk about that book here as well.

DITA for Practitioners, Volume 1: Architecture and Technology by Eliot Kimber (XML Press, Laguna Hills CA, 2012, 348pp, ISBN 978-1-937434-06-9, $29.95)

Eliot Kimber really knows DITA. He is a DITA consultant and a voting member of the Oasis DITA Technical Committee. He has written a book for “people who are or will be in some way involved with the design, implementation, or support of DITA-based systems.” The book is not for authors who just want to use DITA, though everyone who works with DITA can benefit from learning its architecture and main technical features. For example, many authors would benefit from understanding the indirect addressing provided by keys, but books aimed mainly at authors usually tiptoe around that topic. Because I have a technical background that includes system architecture and design, this is my favorite DITA book. But I certainly understand why DITA users without that background might prefer books that more specifically target their needs and concerns. JoAnn Hackos’s book, described elsewhere in this column, is closer to that category.

Kimber makes the point that just as there are many XMLs -- making teaching someone to use XML difficult -- there are also many DITAs. Authors of how-to books must pick a specific way of using DITA (usually, something akin to designing topic-based online help systems) before they can provide clear, simple instructions and examples. Kimber’s approach is to survey the architecture as an introduction to the DITA standard, focusing on the parts that might confuse experienced XML practitioners. With that background you can then read the standard. With this approach it might be months before you can apply DITA to your documentation projects, but when you do, you’ll know what you’re doing, why you’re doing it, and how to investigate and correct problems.

Fortunately, Kimber provides an intermediate path. The longest chapter of his book (102 pp) is a tutorial, though a more conceptual than procedural one. It covers all the main steps in producing a DITA-based publication. Reading it exposes you to the main aspects of using DITA. His procedural steps, however, are not always simple and direct. Here, for example, is a step in a procedure to reuse the topics of an online help system to create a printed version:

4. In the DOCTYPE declaration, change “DITA Map//” to “DITA BookMap//” and “map.dtd” to “bookmap.dtd”

Note the uppercase “M” in BookMap.

You don’t actually have to change “map.dtd” to “bookmap.dtd” because you should always be resolving the public ID, not the system ID, for the DTD. But people will get confused if you don’t change it.
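Concretely, the declaration change in that step looks something like this (the public identifiers shown are the standard OASIS ones; in the full procedure the root element changes from map to bookmap as well):

```xml
<!-- before: the map for the online help system -->
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">

<!-- after: a bookmap for the printed version -->
<!DOCTYPE bookmap PUBLIC "-//OASIS//DTD DITA BookMap//EN" "bookmap.dtd">
```

As Kimber notes, a processor that resolves the public ID ignores the system ID (map.dtd or bookmap.dtd), but changing both keeps human readers from being misled.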

The best thing about this book is the sense it gives you of an ongoing technical conversation within the DITA community. For example, in discussing the DITA 1.2 key reference facility, he talks about a limitation in the way DITA constructs the global key space, then adds, “Without [the limitation], we would not have had time to get any indirection facility into DITA 1.2.” This tells me that key scoping is not a mysterious fact of life, set in stone, but a technical feature that DITA architects continue to try to make more flexible.

Sometimes the conversation goes against Kimber. For example, he notes that many DITA users use keys for variable text like product names. He points out that this implementation falls short of how programmers expect variables to behave and advocates that DITA provide a separate variable mechanism – a position that the rest of the DITA Technical Committee disagrees with. This sort of information is fascinating, but of little use to readers. It is one of the ways in which this book is like no other DITA book.

If you really want to know how DITA works, if the idea of understanding and even participating in this kind of technical conversation appeals to you, you should read this book.

Introduction to DITA, 2nd ed: A User Guide to the Darwin Information Typing Architecture Including DITA 1.2 by JoAnn Hackos (Comtech, Denver CO, 2011, 430pp, ISBN 978-0-9778634-3-3, $50.00)

JoAnn Hackos’s name did not appear on the first edition of this book, but she founded Comtech Services in 1978 and has been its leader ever since. She is the author of several well-known and highly respected books on managing technical communication. She is a Fellow and former President of the Society for Technical Communication (STC). She is known for being thorough and methodical – in her books and in highly regarded seminars, workshops, and conferences. Her workshops are expensive, but people seem to find them worth the price.

This book is much more clearly a tutorial than Kimber’s book, but Hackos does not aim just at authors. She includes tutorials for system architects as well. She covers every aspect of setting up and using DITA to support topic-based authoring, but she says little about the technical decisions that underlie the publishing system she helps you set up. She calls the book a reference manual as well as a learning tool; that is true in the sense that most readers will not go through all of the tutorial topics. They will learn the basics, start writing their own documents, then come back for the more advanced parts when they run into something they don’t know how to do.

Hackos spells everything out, and the result uses the print medium inefficiently. This is typical of workshop handouts, which are often distributed as large three-ring binders, but not so common in published books. If you buy this book, you pay extra for the redundancy, but you’re never in doubt about the context of what Hackos is saying.

While Hackos is careful about the technical accuracy of her examples, the text is, surprisingly, not well copyedited. I bought my copy of the book directly from Comtech just a few weeks before I started writing this column – years after this edition came out – but the book still contains errors that a competent editor could have corrected before publication, or even in a subsequent reprinting. Sadly, the lack of editing of technical books is widespread, but given the cost of this one and the prominence of its author, I’m disappointed that the editing isn’t better.

Many readers will find this book too thorough and methodical for their taste. They will be frustrated by the slow pace of the tutorials. But if you persist, you will know the basics, and the later chapters cover material that most how-to DITA books don’t. If you’re new to DITA and you want to buy just one DITA book, this one is a good choice.

Windows 10

Recently Microsoft started bombarding me with notices about Windows 10. I had been running Windows 7 and had seen – and disliked – instances of Windows 8.1. I am usually cautious about operating system upgrades. I wait until the new version has been out a while. But I had heard good things about Windows 10 and sensed that Microsoft was making a special effort, so when the little window popped up at the bottom of my screen to tell me that my free upgrade to Windows 10 was on my machine and ready to install, I said “Go for it!”

I had seen a number of posts about how to respond to the Windows 10 privacy options. So when the installer asked if I’d like the default settings, I said no and turned off anything that seemed at all problematic. I am sure there are other settings that they don’t let you turn off as easily, or at all, but I felt I had done what I could. If you search online for information about Windows 10 privacy settings, you should find lots of guidance.

The installation and startup were the simplest and smoothest I have ever seen, and I have seen all Windows upgrades since version 3.1. When it was done, everything was in place and running, and it was hard to notice the small differences from Windows 7. I have been running Windows 10 for more than a week and have had no trouble. Chrome and Firefox quickly adapted, and it wasn’t hard to turn off Edge (the new Internet Explorer).

Once everything was running, I upgraded Office to Office 2013. That also went relatively smoothly, though I had some trouble with Outlook PST files. Microsoft had known for more than a year, but didn’t bother to tell me, that I had to upgrade them explicitly to Office 2013 format. When I did so, they worked fine.

That one glitch aside, I am amazed at how smoothly it all went. Watch out for the privacy settings, but if you run Windows, be sure to upgrade to Windows 10.

This article appears in slightly different form in the Sept/Oct 2015 issue of IEEE Micro © 2015 IEEE.

Friday, May 1, 2015

Writing Well

This time I review an unusual style guide, but to fully understand it, you should know about -- and I hope look at -- four other books, which I discuss briefly in notes at the end.

The Sense of Style: the Thinking Person's Guide to Writing in the 21st Century by Steven Pinker (Viking, NY, 2014, 368pp, ISBN 978-0-670-02585-5, $27.95)

Steven Pinker is a cognitive scientist, linguist, and -- as the dust jacket of his book announces -- public intellectual. He is the author of many well-known books, and he chairs the usage panel of the American Heritage Dictionary. With these credentials in hand, he sets out to solve one of the most vexing problems of our day: bad writing. Not just any old bad writing, but bad writing by smart, well-educated people with significant things to say.

Pinker loves reading and writing English. He reads style guides and plays with words. The title of his book is a play on two senses of the word "sense." He wants to help you develop an intuition for how to write well, but he also wants to explain how stylistic choices arise from underlying principles of cognitive psychology and an understanding of English grammar. By "grammar" he does not mean the hodge-podge of rules, shibboleths, and hobgoblins formerly taught in schools and still perpetuated by most traditional style guides. He means the research-based discoveries and formulations of Huddleston & Pullum's Cambridge Grammar, which substantially revises the vocabulary of English grammar. If you do not want to invest $250 and many hours of your time to read a 1200-page grammar book, turn to the glossary of Pinker's book for a summary of the grammatical categories and functions that underlie the Cambridge system. Reading that glossary before reading the main text helped me understand Pinker’s arguments more quickly as I went along.

Bad writing and how to fix it

So how does Pinker hope to stanch the torrent of bad writing?  If you want the punch line without Pinker's significant contributions, start by reading Thomas & Turner's Clear and Simple as the Truth. The authors describe the classic style, in which the writer knows the truth about some subject and presents it to the reader without bias, as if in a conversation between equals. The reader may not previously have noticed this truth, but immediately recognizes it. The presentation is like a clear, undistorting window. The writer shows but never tries explicitly to persuade. Pinker says that classic style is the strongest cure he knows of for "the disease that enfeebles academic, bureaucratic, corporate, legal, and official prose."

A great virtue of the classic style is that it describes its subjects with fresh wording and concrete images. Pinker quotes a few paragraphs from a book by physicist Brian Greene to show that the style can be a perfect vehicle for explaining highly complex and abstract topics. Greene makes the abstractions concrete without oversimplifying them.

Incidentally, classic style is close to the style that technical writers aspire to, as exemplified in Jean-luc Doumont's Trees, Maps, and Theorems.  But the styles differ in that technical writers and readers are not engaged in conversations between equals. Readers seek specific information, and technical writers, as experts, provide it. They often use standard, predictable structures to enable readers to find information quickly, while classic style does not dictate specific formats. Also, most technical writers are taught to avoid passive voice, but the classic style freely uses the passive when it improves clarity.

So what is the disease for which classic style is the cure? Pinker calls it the curse of knowledge, a term he borrows from economics. All writing guides tell you to “consider your audience,” but audiences are made of different people with different levels of knowledge. The set of things we can safely assume they know is far smaller than most writers think. As Pinker puts it, "The main cause of incomprehensible prose is the difficulty of imagining what it's like for someone else not to know something that you know." There are other causes, of course, but Pinker argues that the best known suspects – in the words of a Calvin and Hobbes cartoon, “to inflate weak ideas, obscure poor reasoning, and inhibit clarity”  -- are minor contributors, as are stodgy academic style guides.

The curse of knowledge puts specific pitfalls in a writer's path: jargon and abbreviations, chunking, and functional fixity. Every field has its own vocabulary, but replacing jargon with a plain term can often improve the clarity of your prose without making you seem less credible to your peers. Some acronyms and abbreviations can be replaced with their fully spelled out forms -- wasting a little space but helping many readers grasp the material more quickly. Your peers know less than you think they do, and even those who have seen a technical term or abbreviation may not recognize it instantly.

Chunking is gathering simpler concepts into more abstract ones with their own names and properties (for example, the Federal Reserve Bank buys risky mortgages to make bankers’ lives easier, and we refer to that action as "quantitative easing"). Chunking is essential to thinking clearly about complex subjects, but it often leads you to substitute nouns for verbs, thus making prose harder to understand. And if you mention a chunk that a reader doesn't recognize, that reader may be unnecessarily derailed.

Functional fixity is focusing on how you use something, rather than seeing it as the kind of tangible object that classic style calls for. Pinker gives the example of a researcher who showed people sentences followed by the words TRUE or FALSE. In the paper that described this research, the researcher called that action "the subsequent presentation of an assessment word." But research shows that people remember facts presented in concrete terms better than they do the same facts presented abstractly. Pinker suggests, for example, changing a functional phrase like "participants were tested under conditions of good to excellent acoustic isolation" to a concrete phrase like "we tested the students in a quiet room."

One easy antidote to the curse of knowledge is to ask someone else to read what you've written (or, as you should not put it, conduct informal usability studies on your composed output). You don’t have to accept every suggestion -- your friends have blind spots and hobbyhorses too -- but you may be surprised at how hard your prose is for them to understand.

As you strive to overcome the curse of knowledge, your next challenge is to put together comprehensible text. A style of syntax diagramming created in the 1870s was taught in American schools recently enough that many people still remember it and bemoan its loss. Pinker, however, celebrates its loss, because it is unintuitive, ambiguous, and based on an outmoded view of grammar. The Cambridge Grammar syntax diagrams, which Pinker uses, are based on psycholinguistic studies of how people process language. These syntax trees, the first of the two kinds of trees Pinker uses to map the words and concepts in our heads into text understandable by others, show how to turn the interconnected words in our minds into syntactically correct English sentences. They give Pinker a way to show graphically why some sentences are incorrect or hard to understand and to explain how to correct those problems. They also help him illustrate how poorly some writers of style guides understand English grammar.

One problem made evident by considering syntax diagrams is what Pinker calls garden paths. Here, the same sequence of words might result from two different diagrams. For example, “fat people eat accumulates” has two readings, one of which can be eliminated by inserting the word “that” before “people.” Pinker advocates inserting such “needless words” into sentences to make them clearer. He also advocates reordering techniques to support what he calls monumental principles of composition:

 * Save the heaviest or most difficult information for last.
 * Introduce the topic before commenting on it.
 * If the sentence contains both old and new information, put the old information first.

Chief among these reordering techniques is the passive voice. Pinker recognizes the problems that have given passive voice a bad name, but he also provides examples in which the passive-voice version is clearer and more graceful than active-voice alternatives.

The second kind of tree describes a document and helps us organize our thoughts into coherent arguments. A weak understanding of modern English grammar may give rise to lots of nonsensical stylistic advice, but a bigger cause of bad writing is fuzzy thinking. The document-level trees are outlines of coherent themes, deductions, and generalizations. Even if you don’t commit either kind of tree to paper, keeping them in mind can help you construct texts that readers can easily understand and follow. Incidentally, these trees are essentially the ones Doumont talks about in Trees, Maps, and Theorems.

Document-level trees help solve a problem that Pinker describes as follows: “Even if every sentence in a text is crisp, lucid, and well formed, a succession of them can feel choppy, disjointed, unfocused -- in a word, incoherent.” An outline, which Pinker calls a tree lying on its side, shows the hierarchical structure of your ideas, but while English grammar limits word order in sentences, no syntax rules control the order of ideas in a document. Nor must all documents be hierarchical. Sometimes you want to develop several themes in parallel, and even if you have only one theme, the sentences you produce are related to the sentences around them in various ways. You have a complex network of ideas in your head, and you hope that by writing sentences you enable readers to integrate parts of that network into their own mental networks. Pinker uses the term “arcs of coherence” to describe the parts of a document that don’t follow the tree structure but, as he puts it, drape themselves from the limbs of one tree branch to the limbs of another.

To help explain how to construct coherent texts, Pinker focuses on the idea of a topic. The point of a sequence of ideas is the topic. If readers don’t know the topic of the sentence they are reading, they are no longer on the same page as the writer. Pinker picks apart an incoherent introduction to a highly regarded book to make this point with excruciating clarity.

Pinker refers to Joseph Williams' Style: Toward clarity and grace as a source of practical advice on how to manage the complexity of multiple themes running through a document. One important technique is to call the same thing by the same name. Another is to explain how each theme relates to the topic, so readers understand why you’re talking about it. For example, if you think Jamaica is like Cuba because it is a Caribbean island and that China is like Cuba because it has a communist government, you can’t just write “countries like Jamaica and China” without saying that you’re lumping them together because each shares a characteristic with Cuba.

The style guide

The final third of Pinker’s book is devoted to the topics that arise in traditional style guides: rules of correct grammar, word choice, and punctuation. It gives Pinker a chance to express some of his own pet peeves and to add a little prescriptivist seasoning to the descriptivist underpinnings of the book. This section is not meant to replace the Chicago Manual of Style, but rather to provide data and principles to help you make choices.

Pinker ridicules the supposed war between descriptivists and prescriptivists, in which the prescriptivists fight to stave off the obvious decline of our language, while the descriptivists accelerate the decline by endorsing abominations like ain’t, brang, and can’t get no. According to Pinker, the purpose of prescriptive rules is not to tell people how to speak or write but to codify the tacit conventions of a specialized form of the language, namely, standard written English. While explaining the importance of prescriptive rules, he rejects the idea that “every pet peeve, bit of grammatical folklore, or dimly remembered lesson from Miss Thistlebottom’s classroom is worth keeping.” He calls these bubbe meises, Yiddish for grandmother tales, and he cites their principal sources:

  * English should be like Latin
  * Greek and Latin must not mix
  * Backformations are bad
  * Meanings can’t change (the etymological fallacy)
  * English must be logical

I don’t have room to go into his debunking of these “rules.” Read the book for that.

Pinker provides “a judicious guide to a hundred of the most common issues . . . in style guides, pet peeve lists, . . . .” He groups the issues into grammar, expressions of quantity and quality, word choice, and punctuation, and he brings his expertise to bear on them. For example, he talks about problems that arise from the fact that coordination is headless in the syntax tree. Thus Bill Clinton said “Give Al Gore and I a chance to bring America back,” and few people registered it as unusual; if he had said “Give I and Al Gore a chance,” everyone would have been startled. I found all 100 issue discussions fascinating, and I hope you’ll get the book and read them.

This book is not a traditional style guide. You can’t go to it for definitive rules or cite it to defend your stylistic choices. But it does provide a framework and basis for thinking about stylistic issues. It gave me a lot to think about, and if you want to write English prose, it will probably give you plenty to think about too. I recommend it.

Books referred to

[Doumont] Trees, Maps, and Theorems: Effective communication for rational minds by Jean-luc Doumont (Principiae, 2009). I reviewed this book in the Sep/Oct 2011 Micro Review. It is still the book to read if you can only read one book about technical communication. Doumont focuses on how to organize and present technical information. He has almost nothing to say about grammar or word choices.

[Huddleston & Pullum] The Cambridge Grammar of the English Language by Rodney Huddleston and Geoffrey Pullum (Cambridge, 2002). The authors describe it as "a synchronic, descriptive grammar of general-purpose, present-day, international Standard English." This would be a good example of the curse of knowledge, but the authors mercifully explain all of those terms.

[Thomas & Turner] Clear and Simple as the Truth: Writing Classic Prose by Francis-Noël Thomas and Mark Turner (Princeton, 1994). Thomas and Turner describe the classic style in terms of the choices it makes about certain basic elements -- like the relationship between reader and writer and whether truth can be known. They provide many examples of classic style and contrast it with styles that differ from it in varying degrees.

[Williams] Style: Toward clarity and grace by Joseph Williams (Chicago, 1990). The author's stated goals are to help writers move from a first draft to a version crafted for readers, diagnose the causes of bad writing and overcome them, and handle complexity. Williams began the work as a textbook and was approached by the University of Chicago Press to make it available to a wider audience. While most popular guides are aimed at beginners, Williams addresses the issues that seasoned writers must master to move to the next level.

This article appears in slightly different form in the May/Jun 2015 issue of IEEE Micro © 2015 IEEE.

Thursday, January 1, 2015

The Future of Work

This time I look at a book that describes the author's experience working for a company with essentially no physical offices and with workers all over the globe. He draws some conclusions about the future of work.

The Year Without Pants: and the Future of Work by Scott Berkun (Jossey-Bass/Wiley, San Francisco, 2013, 266pp, ISBN 978-1-118-66063-8, $26.95)

In the July/August 2010 Micro Review, I briefly discussed Scott Berkun's Confessions of a Public Speaker, a book he wrote while trying to make a living as a talking head. But in the 1990s Scott distinguished himself as a development manager at Microsoft, where he was instrumental in making Microsoft's belated embrace of the web and browsers successful. His other books qualify him to be called a management guru, so it was with trepidation that he stepped back into a management job.

The back story

About the time my review of his last book came out, Berkun was a WordPress blogger and a consultant to Matt Mullenweg, the creator of the WordPress blogging software and founder of Automattic (note the extra "t" so the company name includes Mullenweg's given name). Automattic runs, one of the most popular sites in the world. Approximately half of all WordPress-based blogs are hosted there for free. Mullenweg wanted to try a new organizational approach within Automattic. Partly as a result of Berkun's advice, he split the company into ten teams, and he invited Berkun to lead one of them. Berkun agreed to join the company as an employee. Going in, he made it clear that he would leave to write this book in approximately a year. He wound up staying for a year and a half, spending the last few months as a team member after recommending that one of his team members be promoted to succeed him.

The book tells a fascinating story -- fascinating because of both the personal details and the company's unique organization. In the early 1980s I read Tracy Kidder's _The Soul of a New Machine_, and the personal side of Berkun's book reminds me of Kidder's story. Kidder was a reporter and not a participant, but he did see some of the same dynamics at work as the ones Berkun describes. The workers who were passionate about the goal made the project succeed by working behind the backs of the hard-driving project managers. At Automattic, there are no hard-driving managers, and everything is out in the open -- almost painfully so -- but passion and commitment are the prime motivators.

As a development project leader in the 1960s, I read John Kenneth Galbraith's _The New Industrial State_. Galbraith said many things in that book, but the one I remember nearly 50 years later is that in order to succeed, companies must abandon top-down decision making and recognize that management will increasingly lack the knowledge needed to make day-to-day operational decisions. In this era of agile organizations, that seems like a quaint insight, but getting from there to here was a long, bumpy ride. Automattic, as described in Berkun's book, seems like the culmination of that journey.

A virtual company

In January 2003 Matt Mullenweg established the WordPress open source community by forking code from b2/cafelog, a GPL-licensed open source project  whose founder had stopped supporting it. Mullenweg's founding principles were transparency, meritocracy, and longevity. In August 2005, distressed about the existing options for deploying WordPress-based blogs, he founded Automattic with three community volunteers and no venture backing. They designed an anti-spam plugin called Akismet -- still one of the first things a new WordPress blogger installs -- and used income from that to keep Automattic afloat until they could obtain more substantial financing. Toni Schneider joined the company as CEO in November 2005, and he and Mullenweg jointly managed a totally flat organization until they created teams in 2010, when the company had 60 employees.

Automattic has a simple business model. They sell upgrades to bloggers who want more than the many features they can get for free. They sell advertising on a few popular blogs, and they work special deals with premier clients like CNN, _Time_ magazine, CBS, and NBC Sports, which host their websites on

Because of the way the company started, it was completely natural for everyone to work where they pleased. While the company eventually acquired highly desirable premises on Pier 38 in San Francisco, employees rarely used them, though Mullenweg occasionally called on locally based employees to come in as props when media representatives or premier clients came calling.

Mullenweg regards remote working as ideal. It flattens everything, producing higher lows and lower highs -- a generally more mellow experience. Automattic can afford to be a low-friction company because it supports the WordPress community and relies on satisfied customers. It feels little competitive pressure. It doesn't need schedules because it doesn't do marketing. It has minimal hierarchy, so decisions can be made with little fuss.

Most of the time employees communicated on IRC and their team blogs (known as P2s). While email was by no means prohibited, few Automattic employees used it, because it is closed. If you do not receive a copy of an email message, you have no way to find out about it. Every word ever typed on IRC or a P2 is archived and available to every employee.

The whole company held occasional all-hands get-togethers face to face in exotic places, and teams did the same somewhat more frequently. A tradition for these events, which usually lasted several days, was to decide on team projects to develop and publish before going home.

Berkun's role

Berkun's team was called Team Social. Their job was to invent things that made blogging and reading blogs easier. In his year leading that team, they developed Jetpack, a WordPress plugin designed to make features available to WordPress-based blogs hosted on other sites. It's the other first thing a new WordPress blogger installs. They also unified the commenting facilities of all WordPress blogs in order to integrate IntenseDebate, a popular commenting product that Automattic had acquired because it worked on other blogging systems as well.

The integration was called Project Highlander, a science fiction allusion suggesting that it was a fight to the death between IntenseDebate and the other WordPress commenting facilities until only one survived. With 120 blog themes, WordPress had a large variety of ways of making and presenting comments, and those had to be unified before Project Highlander could succeed.

Project Highlander called on project management skills that Berkun brought to Automattic from his days at Microsoft -- skills that pushed Automattic in the direction of a more mature development process. This was a recurring theme of Berkun's time there. In terms of Eric Raymond's classic book _The Cathedral and the Bazaar_, Automattic had grown up at the Bazaar end of the spectrum. Berkun, based primarily on his time at Microsoft, brought in aspects of the Cathedral approach whenever that was a more effective way to approach a problem. Automattic had 60 employees when Berkun joined and 170 by the time he finished writing this book, so some evolution in the Cathedral direction was inevitable, but Berkun's expertise made it easier.

While embracing the Automattic way of working, Berkun also struggled against it. He had mastered the techniques of face-to-face interaction -- maintaining eye contact, reading body language, detecting emotional nuance, and so forth. He had to learn to compensate for the fact that virtual interactions made most of those techniques nearly impossible to use. In fact, one of his accomplishments was to move team meetings out of IRC and into Skype video.

Another problem Berkun identified, but really had no answer for, is the dynamics of online threads. You might make a thoughtful post about an important issue and see no responses. You have no idea whether anyone has read it or continues to think about it. Or someone might react to one small point in your post, and the thread mutates to focus on that point rather than on the one you set out to direct attention to. Berkun raised this issue by posting about it, and the responses frustratingly exhibited the very problems he hoped to highlight.

Berkun liked the company culture of fixing things immediately, but he noted that people respond to the most recent problem, and if something doesn't get fixed right away, it tends to be forgotten, regardless of its importance. Berkun tried to introduce a system of priorities that would make it more likely that tricky but important issues would not be swept under the rug. He hoped to engender more strategic thinking to go along with the company's tactical mindset.

Berkun also tried to institute some sort of usability testing. The programmers who worked on WordPress features generally came from the WordPress community, so they had reason to feel that they understood their target audience, but Berkun was able to identify many areas where users had difficulties that simple design changes would alleviate.


A major part of the story Berkun tells is about the people he worked with, how they worked together, and how they coalesced into an efficient team. Many of Berkun's anecdotes concern his team's meetings in places like New York, Seattle, and other more exotic places.

Seen from the outside, the team seemed like a bunch of hard-drinking young men, a few years out of college (more than a few in Berkun's case), who enjoyed playing around the edges of trouble. For example, on the way to a bar in Athens after the one they'd been drinking in closed at 2:00 am, one team member miraculously escaped serious injury. On a dare he jumped between 3-foot-high traffic bollards spaced 4 feet apart and missed his second jump, crashing toward the sidewalk. As Berkun describes it, "either through Australian training for drunk jumping or a special Krav Maga technique he'd learned, midfall he realized his predicament and managed to tuck and roll . . . The silver-dollar sized patch of skin missing from his elbow seemed a fair price to pay, and he was glad."

Despite this sort of incident, their meetings in exotic places were highly productive. Their time together seemed to fill a need that their usual distributed virtual interactions did not. Oddly, though, when working side by side, they often continued to communicate through IRC and their P2, as if they were continents apart.


The first lesson learned from Automattic is that a virtual company can exist and be productive. It's not the only such company; GitHub has a similar distributed structure. But Google, the dominant force in Silicon Valley, believes in co-location and with few exceptions requires employees to work in the office, not remotely. With Marissa Mayer's move from Google to the helm of Yahoo, that meme has taken root at Yahoo as well. Many other Silicon Valley companies have also held that belief for years. Partly, they believe it's a more efficient way to develop software, and partly they don't trust their employees.

Trust is the key. Automattic believes in hiring great people, setting good priorities, granting authority, removing distractions, and staying out of the way. The way Automattic works makes it no harder to detect slackers than if you were looking over their shoulders every minute of the day. But most Automattic employees come from a tradition of working remotely on open-source projects. They are self-sufficient and highly motivated, passionate about what they hope to achieve. Their way of working might not work for everybody, but it works for them.

Berkun believes that Automattic has answered many questions that the working world is afraid to ask. Results trump traditions, and the most dangerous tradition is that work is both serious and meaningless, as exemplified by _Dilbert_. A short definition of work is "something I'd rather not be doing." Automattic's management -- with its vision, mission, and long-term thinking -- may be atypical, but they have given work meaning. Automattic's workers have great freedom and take great pride in their work. And as Berkun's anecdotes show, they have a lot of fun.

This short and seemingly lightweight book actually contains a lot of meat, and I haven't covered all of it here. If you're interested in the future of work, you should read it.

This article appears in slightly different form in the Jan/Feb 2015 issue of IEEE Micro © 2015 IEEE.

Tuesday, July 1, 2014


This article appears in slightly different form in the Jul/Aug 2014 issue of IEEE Micro © 2014 IEEE.

This time I look at books that talk about how to do something. All of the somethings are related to words and writing.

Making Word 2010 Work for You: An Editor's Intro to the Tool of the Trade by Hilary Powers (Editorial Freelancers Association, NY, 2014, 140pp, ISBN 978-1-880407-35-6, $20)

The 2007 edition of this book, without a version number in the title, focused on Word 2003 but attempted to address all varieties of Word in general use at the time. I reviewed the 2009 update in the July/Aug 2009 Micro Review. The book was highly acclaimed among its target audience, and they have been clamoring since then for an updated version.

Now in her fourth career, Hilary Powers has been a freelance editor since 1994. She says she chose editing to enable her to emulate Nero Wolfe, that is, never to have to leave home on business. Before her first year as an editor was done, she had abandoned paper. She works only online, a fact that necessitated her mastery of Microsoft Word.

The 2009 edition was 80 pages, but the target audience of this larger volume is still anyone who edits for a living. If you use Word in any capacity, however, you will find useful information here. For example, editors must master Word's change-tracking facilities, but many non-editors use that feature too, and practically everybody finds it maddening. The display can be a garbled mess, hiding important information while revealing what you'd rather keep private. Sometimes the same change can appear clear or confusing, depending on how you make it. Powers knows all the tricks. In 11 pages, she tames the feature's wildest aspects and brings out its good points. She can't remove all of Word 2010's quirks, but she shows ways to work around the worst of them.

Macro programming is another powerful feature of Word that will quickly repay your learning how to use it well. Powers shows you where and how to use macros and provides free downloads of her own macros to help you learn the details.

Powers says, "Macros and templates are at the absolute heart of making Word 2010 your own." This has been true for many versions of Word. More than 20 years ago, I had to maintain a 1000-page manual in Word for Windows 2.0 and publish both the printed Word version (before PDF came on the scene) and a text version to be read online (before HTML and browsers). Without writing 8 pages of macros, I could never have done it. Nowadays, things are easier, but well-designed macros can still save lots of time. Templates provide the modularity necessary to have different configurations and sets of macros for different kinds of jobs. Powers devotes 27 pages, nearly 20% of the book, to the chapter on macros and templates. The Microsoft documentation of these subjects is arcane, but Powers explains the features clearly.

One huge frustration for most experienced Word users is the Word 2010 ribbon. At first glance, it looks like a huge change in the user interface. It takes all those menu commands, whose locations you finally memorized, and rearranges them in ways that may make sense to at least one person at Microsoft. Unfortunately, the rearrangement makes no sense, especially at first view, to most of the rest of us. Some people refuse to use the ribbon and seek out aftermarket programs to emulate the old menus -- forgetting how much they hated that interface before they saw the new one. Let Powers lead you through the desert to the promised land. When she gets through showing you how to customize the interface, you'll never want to go back. And, to keep you sane while you're learning, she gives you a few tricks for finding what you know must be there -- because over the last 30 years, no feature of Word has ever gone away.

This book removes your excuse for avoiding a range of troublesome tasks -- from mastering macros to simply selecting configuration options. If you're like most Word users, you could do a lot of what this book recommends without reading it, though you'd have to figure out a lot of things that Powers has already figured out for you. And once you read about the huge increases in efficiency that she achieves through those techniques, you won't be able to resist the urge to tinker.

I have known Hilary Powers for many years, so I can testify that she writes the way she speaks. Her style is clear and colorful, never boring. The topics she covers are practical and directly useful. If you use Word, you should read this book.

DITA Best Practices: A Roadmap for Writing, Editing, and Architecting in DITA by Laura Bellamy et al. (IBM Press/Pearson, Upper Saddle River NJ, 2012, 296pp, ISBN 978-0-13-248052-9, $42.99)

In the Mar/Apr 2012 Micro Review, I reviewed the IBM Style Guide. This book is a companion to that style guide. It focuses on the Darwin Information Typing Architecture (DITA), an XML-based system for authoring and publishing technical information. Originally an IBM project, DITA is now an open standard with an associated open-source toolkit, managed by the independent global consortium Organization for the Advancement of Structured Information Standards (OASIS).

DITA provides a technical infrastructure for topic-based writing and publishing. Following a model that evolved from the online-help systems of the 1990s, DITA starts with the idea that most technical documentation can be broken into chunks, and that each chunk falls into one of the following basic categories: procedures, concepts, and reference. The Darwin part of the name comes from DITA's use of inheritance to allow different projects to extend and specialize the basic categories. Unlike DocBook, another popular XML schema for technical publishing, DITA has an associated set of tools for building an automated process that supports publishing multiple documents to multiple output media from a single database of content.

This book provides a clear treatment of metadata and DITA maps. Metadata -- data about the content -- provides one key to achieving DITA's benefits. Properly designing and using metadata makes your content easy for you to manage and for your users to find, and it helps you provide different output to different audiences. While much of the DITA metadata stays with the content, a significant portion of it can reside in DITA maps -- structures that define documents in terms of the topics that comprise them and the relationships among those topics. Becoming thoroughly familiar with DITA maps is an important step on the road to feeling comfortable with DITA.
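The relationship among maps, topics, and metadata is easier to see in a small example. Here is a minimal sketch of what a DITA map might look like; the file names, title, and audience value are hypothetical illustrations, not examples from the book:

```xml
<!-- A hypothetical map defining one small document. Each topicref points
     to a topic file; nesting expresses the document hierarchy, and
     metadata such as audience can drive conditional publishing. -->
<map>
  <title>Backing Up Your Data</title>
  <topicref href="backup_concept.dita" type="concept">
    <topicref href="backup_task.dita" type="task"/>
    <topicref href="backup_reference.dita" type="reference">
      <topicmeta>
        <audience type="administrator"/>
      </topicmeta>
    </topicref>
  </topicref>
</map>
```

A DITA processor reads a map like this, pulls in the referenced topics, and can publish the assembled document to multiple output formats; a different map can reuse the same topics in another document.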

Understanding DITA, especially its architectural aspects, entails digging into the details. DITA is conceptually simple but the details are hard for most people to wrap their minds around. This book helps you understand the details and, by laying out best practices, makes a lot of choices for you, simplifying your task of getting up to speed. The authors are aware of the learning difficulties DITA presents for many users. For example, they begin the chapter on metadata by saying, "If your writing team is just learning about DITA elements, don't scare them by using fancy words such as metadata at team meetings. Otherwise the guy who brings the donuts might not come anymore."

Like the IBM Style Guide, this book is sure to become a standard. If you want to work in DITA, you need this book. If you're not sure you want to work in DITA -- it's overkill for many applications, though new tools and techniques keep lowering the bar -- the detailed information in this book will give you a basis for deciding.  

Word Up!: How to Write Powerful Sentences and Paragraphs (And Everything You Build From Them) by Marcia Riefer Johnston (Northwest Brainstorms, Portland OR, 2014, 268pp, ISBN 978-0-9858203-0-5, $19.99)

Marcia Riefer Johnston is a popular blogger about writing. This book is essentially a collection of blog posts, arranged to support the theme that the subtitle of the book suggests. Yet as pedestrian as that sounds, the book builds momentum and ends powerfully. Johnston succeeds where most blog rehashes fall flat, because she digs new and interesting material out of veins that have been mined again and again. Besides, she may be the only person I know of who quotes from S. I. Hayakawa's Choose the Right Word.

Johnston starts by taking a position in the pointless but heated debate over whether language mavens should prescribe rules for others to follow or just describe the way others speak. Her position, sensibly, is squarely on the fence. On the prescriptivist side, she says, alluding to a formulation by Bryan Garner, "After a quarter-century of professional writing, I still yearn for linguistic guidance, and I still struggle with editorial predicaments." After my half century of professional writing, I can tell her that the next quarter century probably won't be any easier. There will always be plenty of work for the prescriptivists. On the other hand, she also bows to the descriptivists. In her essay "To Each Their Own," she pragmatically acknowledges a place for a singular "they," though she tries to avoid it. Even so, her prescriptivist side inveighs against computer-generated abominations like "Mary updated their profile."

Johnston is more engaging when, rather than taking a position on some tired old question, she shines fresh light on an unexpected topic. By examining the question rather than answering it, she takes us somewhere new and interesting. Should you end a sentence with a preposition? Who cares?! The interesting question is, "What is a preposition anyway?" Is "from" a preposition? Are prepositions really parts of speech, or should they be demoted, like Pluto, to something less grand?

Book publishers nowadays use Google as an excuse to skimp on indexes, but Johnston knows the value of a good hand-crafted index. She created her own and proudly introduces it by calling attention to entries she's especially proud of. Indexing mavens have long maintained that indexing helps to identify structural problems in a book. However, professional indexers usually receive books when the content is frozen, so nobody corrects the structural problems. Johnston points out ways that she was able to strengthen her book while indexing it.

In the final chapter of the book, Johnston tries an exercise that many writers would find terrifying. Emulating William Zinsser, she presents a piece of her own writing about a meaningful personal experience, then goes through it step by step, explaining the options she considered and the decisions she made to arrive at the final version. I found it enlightening.

When it comes to details, I disagree with some of the things Johnston says, but I like her basic message of loving the language and using it precisely. Not everybody wants to work at learning to write well, but if you do, you'll enjoy reading this book, and you may decide to follow Johnston's blog.  

Be the Captain of Your Career: A New Approach to Career Planning and Advancement by Jack Molisani (Precision Wordage Press, Pasadena CA, 2014, 148pp, ISBN 978-0-9627090-2-9, $15.95)

Jack Molisani is founder and principal of Prospring Staffing and executive director of the Lavacon conference. He is a Fellow of the Society for Technical Communication (STC) and a frequent speaker on career-related issues. I have heard him speak many times, and this thin book perfectly captures his basic message: face the truth and deal with it, but no matter how bad things get, stay optimistic and keep working toward your goals. Jack suffered a big setback in the 2008 downturn, and he used his own methods to come back strong. This is an inspiring book, but it is filled with simple, practical advice. I recommend it.