Interview with James Coplien

It is a great honour to interview James Coplien, one of the founders of the Software Pattern movement. He is the author of Advanced C++: Programming Styles and Idioms and co-author of books including Organizational Patterns of Agile Software Development and Lean Architecture for Agile Software Development.

James Coplien’s work on organisational patterns inspired the inclusion of the daily stand-up in Scrum and influenced Extreme Programming as well. Let us hear his thoughts, beginning with what makes Lean/Agile Architecture special.

Q.1 What is the most special thing about Lean/Agile Architecture?

James: I feel its strong point is that it is rooted in a very few simple principles of how the world works and in some basics of human psychology. One of the greatest compliments I ever received was to be interviewed by Joe Dager, who read the book and liked it. Mind you, he’s a lean guy from the manufacturing world who has little or nothing to do with programming, and he found that the book rang true to his world and the ideas they value. I think the book brings that sector’s simple but incredibly powerful ideas to software.

In terms of how the world works, we know that there are time-honoured forms in every domain and that it pays to understand them intimately. So, unlike the practice of DDD, we invest in understanding the domain as worthy work in its own right. We build the architecture around that understanding. It is not a collection of guesses, but is grounded in history, breadth, and experience. We can express the forms of these domains in code, as in abstract base classes.

At this point the “what-the-system-is” part of the architecture is in some sense complete, but compressed — compression is a key concept in Lean Architecture. If you and I share the abstract base class declaration for class ComplexNumber, and tap into our shared cultural understanding of what ComplexNumbers are, then the abstract base class is adequate as an architectural specification. Though it is abstract, we can concretely agree exactly what answer we will get if we invoke ComplexNumber.+ on a specific object with another specific object as a parameter. The abstract base classes compile and link. As compressed descriptions they are general (to cover a broad market) and robust (they are likely to survive cosmetic changes in market needs).
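As a minimal sketch of this idea of a compressed architectural form (in Python rather than C++, and with a hypothetical ComplexNumber interface invented for illustration), an abstract base class can capture the agreed-upon shape of the domain while deferring every implementation decision:

```python
from abc import ABC, abstractmethod

class ComplexNumber(ABC):
    """Compressed architectural form: the what-the-system-is part.

    The abstract interface records our shared cultural understanding
    of complex numbers; no representation (Cartesian or polar) has
    been chosen yet.
    """

    @abstractmethod
    def __add__(self, other: "ComplexNumber") -> "ComplexNumber": ...

    @abstractmethod
    def real(self) -> float: ...

    @abstractmethod
    def imaginary(self) -> float: ...
```

The class compiles (and, in a linked language, links), yet trying to instantiate it fails: the specification is complete enough to agree on, without a single line of method code.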

A second insight into how the world works is that it’s waste to guess in advance, in detail, how these parts will do their work. That requires the contextualization of a concrete use case from real end users. (For example, for ComplexNumber — will we use Cartesian or polar computation?) We wait to fill in the method code and the implementation of roles and classes until we have a concrete use case. The Product Owner can use that use case to develop system tests (yes, POs write tests). That was Dan North’s original vision of BDD before the tools people got ahold of it. So we deliver the system incrementally. We don’t deliver classes: we deliver use cases.
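To illustrate with the ComplexNumber example above (a hypothetical Python sketch, not code from the book): once a concrete use case tells us that, say, Cartesian computation is what the users need, we fill in the method bodies, and not before:

```python
# Hypothetical: a Cartesian implementation of the ComplexNumber form,
# written only after a concrete use case settled the representation.
# In a real codebase it would derive from the abstract base class
# that serves as the architectural specification.
class CartesianComplex:
    def __init__(self, re: float, im: float):
        self._re, self._im = re, im

    def __add__(self, other: "CartesianComplex") -> "CartesianComplex":
        return CartesianComplex(self._re + other.real(),
                                self._im + other.imaginary())

    def real(self) -> float:
        return self._re

    def imaginary(self) -> float:
        return self._im
```

Had the use case instead demanded heavy multiplication, a polar representation might have been the right fill-in; the architectural form survives either choice.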

As to human psychology, the “what-the-system-does” part of Lean Architecture (the agile part, reflected mainly in DCI — see below) is rooted in the developmental psychology principles of Piaget. That’s where object orientation started back with Alan Kay and Seymour Papert. Over these past 35 years the OO community has devolved its focus from supporting human mental processes to a lot of technical rules and practices and so-called “computer science” and “software engineering.”

In Lean Architecture we don’t structure the system in terms of the computer science formalisms of coupling and cohesion: modern CS people reduce coupling at every chance, and they’re just as happy to throw away essential coupling as accidental coupling. Again, the obvious thing is to keep the system as simple as possible, but not more so. That means keeping essential coupling — for example, relationships between objects — but finding strategies that can slice the system cleanly along lines that are made clear by thinking about time instead of just data. Most modern computer science has lost the notion that time has form. Much of Lean Architecture is just about seeing clearly and discarding all the bad practices that academia, methods, and consultants have loaded on us, and restoring a few central principles that your grandmother knew (if she was a programmer).

Q.2 How can we employ Use Cases in a lightweight, incremental and Agile way?

James: Use cases got a bad rap in the beginning because software people used them in such a rigorous way. To people from a waterfall heritage they became just another way to capture and structure all requirements up front. Of course that doesn’t work — and people blamed use cases when things fell apart. In fact, use cases were designed to be used iteratively and incrementally to support complex development: that’s what the refinement relationships between scenarios are all about. At their heart they are a lightweight, incremental approach: ignorance has led people to use them otherwise. That they fell into disrepute owes to a large and collective misunderstanding about how to use them. My wife and business partner (and co-author of the Lean Architecture book) Gertrud Bjørnvig has worked in requirements most of her career, and we have worked together to help people adopt what is described here.

So an ignorant agile world took another tack and proposed user stories instead. The idea sounded good: going to the user and getting a story “from the horse’s mouth.” And, most importantly, a user story isn’t about writing but just about “a promise for a future conversation between an end user and a developer.” But the concept was quickly lost. If I look in Extreme Programming Installed (2001) where user stories are first elaborated in publication, we find this as an example of a user story:

For each account, compute the balance by adding up all the deposits and subtracting all the deductions.

Where is the user, and where is the story? And this one is lacking the final clause, which is the user’s motivation. In fact, if you look at most contemporary user stories, they’re just pseudo-code! I thought we had gotten beyond pseudo-code 30 years ago. Like many things in agile, they’re the nerds’ revenge. Even if the story comes later, here we’re deep inside the code for audit trails — about which end users don’t care at all.

Alistair Cockburn rescued use cases from the mythology that has come to surround them in his timeless book Writing Effective Use Cases. And they work. As one example, consider a large project at Systematic here in Denmark to redo the federal gambling tax system. They came in at 40% under budget and substantially ahead of schedule (ComputerWorld Denmark, 29 February, 2012). They explicitly credited two things: Scrum, of course, but even more so use cases. We repeatedly see this in our clients, and can relate many more stories like this. Those using user stories for complex development tend to drown in a sea of unstructured cards and often find themselves doing rework (politely renamed refactoring) to eventually meet the market need. Jeff Sutherland has always advocated use cases in Scrum but more recently resorts to using the phrase “user stories,” maybe to avoid alienating his audience too quickly.

We won’t throw the baby out with the bathwater, and we’ll be smarter than to depend on just a single technique like user stories. Scrum requires that the Product Owner instill an enabling specification in the minds of the developers (see Jeff Sutherland, ENABLING SPECIFICATION). The journey of understanding a requirement may indeed start with a user story written on a napkin over lunch. To give it more texture and context, it’s a good idea to create imaginary users called personas whom we envision using the feature. We give those people names, addresses, ages, and lifestyles to get our minds rooted in reality. And we don’t refer to them as “the user” but as “the account holder,” “the approval manager,” “the eye doctor patient” — which we can borrow from the first clause of a well-written user story. Then we make a little story (a real story, finally), called a user narrative, around the feature. (In systems that are central to the functioning of a very large population, or particularly in complex products, we may use user profiling instead of personas. Personas and user narratives are a cheap alternative to user profiles, and are suitable to relatively simple products.) After understanding several narratives from one or more user communities we can structure them into a use case.

A use case organises multiple scenarios into a set of flows, and wraps them in important business considerations such as preconditions and postconditions, together with other informal considerations. Each use case tells what other use cases it depends on — and of course, the use case makes it clear which of its scenarios depend on the others. One very important part of a use case is the user motivation or goal: developers are more likely to build something that meets the customer need (as opposed to just doing what the requirements say to do) if they understand that. That, too, we can glean from the user story — the last clause in a well-written user story.

If you look at user stories these days, Mike Cohn and others have added dependency cards, test case cards, and a bunch of other accoutrements that ironically strive to bring user stories back closer to use cases. So we’ve come full circle. Story mapping is another valiant attempt but it tries to present three dimensions of concern in two dimensions: dependency information, and therefore time ordering of delivery, gets lost. But this is not to fault Jeff Patton, who pioneered them: I think he understands use cases but, like me, has become blue in the face trying to get people to appreciate their benefits. We can pacify them with anything that uses cards on a wall.

Yet, for Scrum, user story strategies are fundamentally flawed (except for the dialogue and the motivation clause) because a user story is just about properties of the deliverable, and does not specify the deliverable itself. The backlog in Scrum is a backlog of product increments. It’s called a Product Backlog, not a requirements backlog. A user story is one small perspective on some piece of a requirement. Maybe three user stories combine to characterise some product increment. I don’t put the three user stories on the backlog and estimate each independently: the user story is just a dispensable tool that gets me to the point of making an enabling specification of the product increment. It’s that spec that goes on the backlog. User stories should never appear on a Scrum product backlog unless they are trivial.

Use cases aren’t so much about writing requirements as structuring requirements. Requirements can be complex, and they are just as much in need of structuring as your code is. The focus is still on that conversation with the end user. You can keep use cases lightweight by avoiding commercial tools with their administration, learning curves and bells and whistles. Keep the amount of writing minimal, and try to avoid making things “pretty.” Don’t guess about the future; focus on what you’ll be delivering in the next 6 weeks. Write them knowing you will throw them away after delivering. Tailor the form mercilessly to meet your needs — there is no “right” format. And socialise them with your users, combining them with the other techniques and approaches I mentioned above that complement use cases’ strengths and weaknesses. Beyond those, reduce your ideas to prototypes, storyboards, anything that engages your end user (end user — who may not be your customer).

Q3. How does DCI (Data, Context and Interaction) succeed where object-oriented programming languages alone have failed to integrate software design?

James: I started as a BASIC programmer back in about 1970. (Well, I did some machine coding before that, but we won’t go there.) Back then one could read one’s code, hand-trace it, and understand it. The same was true for FORTRAN, which I learned in 1973. My ability to do that was taken away with the advent of object-oriented programming. I can understand only one method at a time. If that method calls a method on another object (or even on itself!) I must stop manually tracing the execution. I can’t tell where the program counter will end up. That’s by design, and it goes by the name nerds love to use: polymorphism.

So Java, C++ and Objective-C programs were good when all the processing for a given business feature stayed within one object. They were very good for simple operations in the code of graphical editing programs. But a good deal of complex business processing comes from use cases, and that means understanding a network of cooperating objects, working together to solve some problem. That was the original vision of objects.

The problem is that today we code classes, and we can’t understand object relationships from the classes. It’s class-oriented programming. (Good JavaScript programmers will recognise that they don’t necessarily have this problem, because they do real object-oriented programming.) There is no construct in a Java or C++ program from which we can understand the business flow! So we can understand (and modify) our code only locally at the resolution of a method or class. We can’t understand our programs at the level of a use case. We can only guess. And, as Stevie Wonder sang, “When you believe in things that you don’t understand, then you suffer.” If we can’t understand them they are very unlikely to be right.

And that indeed was the historical experience. So, guess what — along with the rise in popularity in objects came a rise in popularity of testing. Humans are pretty good about generalizing from code, and it’s often easy to argue the correctness of code for a broad set of cases from inspection alone. (If not, the code should probably be re-written.) But to assess correctness through testing precludes treating a “broad set” of cases: we test one scenario and one precise collection of bits at a time. Testing is expensive, and it doesn’t conclude anything about correctness: it can show only that code doesn’t work, not that it does. It’s a sampling technique — and it tests a very small sample relative to the full range and domain of interest.

What’s more, the rise in personal computing made development more and more an individual task rather than a team task (and we still have only “individuals and interactions” with no real recognition of teamwork), so programmers gravitated to unit testing. It was a great breakthrough to move from one programmer to two, but most of the benefit came from thinking and observing rather than testing — and it was still usually limited to one class at a time. It was institutionalised even to the degree that the unit test was written first, before the unit was coded — a practice called TDD, which thankfully is on its way out after two decades of waste.

DCI packages each use case as a self-contained, readable code module called a Context. We can again understand what our code does! The use case logic lives in the Context’s Roles, which describe the interaction not of classes, but of the objects playing the Roles. We still have polymorphism in that different kinds (classes) of objects can each play a Role, as long as the object meets the requirement (contract) of the Role it is fulfilling. But the polymorphism is in play only for trivial instance methods on classes: they are primitives, return immediately, and are trivial to reason about. So the Context becomes the locus of understanding a use case. First, there is no uncertainty about the progression of Role methods, since their invocation is statically bound and, second, the programmer does not have to context-switch (e.g., between classes) to understand the execution of a use case. These ideas emerged from more than a decade of focused research between Trygve Reenskaug (who pioneered the idea) and myself (who, in his words, did “the hard work needed to advance DCI from early theory to practice”). DCI promises that its programs are easier to reason about than traditional OO programs, so they are more likely to be correct.
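A minimal sketch of the shape of a DCI Context, in Python rather than the trygve language, with hypothetical names (Account, MoneyTransfer) invented for illustration:

```python
class Account:
    """Dumb data: part of what the system *is*."""
    def __init__(self, balance: int):
        self.balance = balance


class MoneyTransfer:
    """Context: what the system *does*. One use case, one module.

    The whole business flow is readable in one place; there is no
    hunting through class hierarchies to find where control goes next.
    """
    def __init__(self, source, sink):
        # Role binding: any object with a balance can play either Role.
        self.source, self.sink = source, sink

    def execute(self, amount: int) -> None:
        # The use-case algorithm, step by step, statically bound.
        if self.source.balance < amount:
            raise ValueError("insufficient funds")
        self.source.balance -= amount
        self.sink.balance += amount
```

With savings = Account(100) and checking = Account(10), running MoneyTransfer(savings, checking).execute(30) leaves balances of 70 and 40. The point is not the banking logic but that the entire use case can be traced by reading a single Context.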

We have recently published research that demonstrates that DCI delivers as promised (An Empirical Study on Code Comprehension: Data Context Interaction Compared to Classical Object Oriented, by Héctor Valdecantos et al., ICPC 2017). Subjects were more often able to analyse correctly what DCI code does, or to understand it well enough to correctly update it to some new requirement, than they were for Java code. The results are stunningly significant. People can play with a Java-like DCI language called trygve (GitHub: jcoplien/trygve) if they want to learn DCI, but one can use DCI in many modern programming languages. It’s impossible only in one such language, and that’s Java.

Q4. Give us one tip for writing software that can be directly verified against behavioural requirements.

James: This indeed is the holy grail of software, and meeting it perfectly has always been and will continue to be elusive. The question seeks a simple answer to what is a complex problem. We can only improve our ability to verify more properties; we will never be able to verify any implementation’s ability to meet the requirements of anything other than another machine. Testing explores only a minute collection of samples that together don’t prove anything.

So how do we make things better? Again, we return to human behaviour and basic properties of systems. Code that you can’t understand is unlikely to be correct. Software engineering has long taught us about good coding style, short methods, and good naming. We did some back-of-the-envelope research in Bell Labs that suggests that there is strong correlation between good indentation style and low bug density. A good team has good discipline in these areas.

But these tricks go only so far in an object-oriented world where one can understand only one use case step at a time. New design paradigms like DCI allow us to again read our code with a business eye, and to understand the business function that the code delivers. We can understand our code again in business terms, and that makes it more likely that the code meets the requirements. It also tends to structure it around what the end user cares about, instead of what the programmer may otherwise care about — which has the side-effect of lower cost in program evolution.

So my one tip broadly is: Write readable code. Treat your code as literature. Name variables with the same care as you name a first-born child. Take Clean Code to heart. And “readability” should imply comprehension at the level of the business value generated by the code. More narrowly: learn about DCI.

Q5. Luke Hohmann says “This carefully researched, artfully described, and extraordinarily useful handbook of deep wisdom on creating teams that generate terrific software should be on every software development manager’s bookshelf.” about your book ‘Organizational Patterns of Agile Software Development’. What was your motivation behind the research while writing this book?

James: People. I love working with people — people who are focused, passionate about what they are doing, and eager to learn. It started when a colleague, Moody Ahmad, turned me on to the perspective that the interesting problems are on the human side rather than the technical side. My career satisfaction has rarely been about me alone — that’s a hollow academic posture — but about working with and helping others see better ways of doing things. I seem to be good at working with people and jointly finding new perspectives, while effectively mixing the human and technical components of work. The large levers there are people and process — “process” not in the ISO sense, but in the sense of celebrating the rituals of harmonised human endeavour. And that itself is a new perspective on organizational maturity and “process.”

In the early 1990s in Bell Labs we were trying to understand what makes work groups effective. We were in the midst of Theory Z times and management believed the power was in the process, as the Japanese had told us. And management’s weapon to implement process was ISO 9000. I did a bit of research and found no correlation between process compliance (in the Western sense) and, well, anything. This led to a secondary question: what was it that made a difference?

We followed a hunch that it was all about relationships between people, and launched some research using social network theory — which I invented in 1993, only to discover that Moreno would steal my idea 20 years before I was born. We just gathered all the data we could about how organisations really worked, and sought out recurring themes — dozens of organisations, all over the world. We made graphical models of the social networks and did a lot of data reduction on them.

A bunch of us got together and launched the pattern discipline at about the same time, so I started writing up the structures we found in the networks of powerful organisations as organisational forms to which one should aspire: “Engage Quality Assurance,” “Work Flows Inward,” and “Distribute Work Evenly” are intuitive examples. We found that these patterns proved powerful at capturing organisational practices that worked where process descriptions failed. Patterns also captured why these forms worked, and were inspirational enough to encourage teams to try them out. Neil Harrison and Brendan Cain joined me and it turned into a research program that lasted a decade and culminated in the book.

All of our work was empirical. One famous result came from my analysis of the Borland Quattro Pro for Windows project which, to this day, is the most awesome software development project ever studied. One of their noteworthy practices was to assemble the architects every morning to review yesterday’s progress, to evaluate blockers, and to plan work for the next 24 hours. I wrote this up as a pattern and, as it was circulating on the web, Jeff Sutherland saw it and decided to include it in his framework. The idea survives (and thrives) today as The Daily Scrum.

These days we’re working on the sequel: A Scrum Book. I lead a group of about 20 direct contributors (including Jeff Sutherland again, but also Mike Beedle, Gabrielle Benefield, and many other notables), drawing on the work of thousands of teams, and we’ve been at it about seven years. We’re getting close. We have about 100 patterns. You can read the patterns at:

Q6. Have you ever been to India? What is the one thing you like most about how Agile Processes are implemented in India?

James: I have indeed been to India several times. Some of the architecture is stunning and the people are great. I have a long-time interest in Vastu Shastra, which resonates strongly with pattern foundations.

In terms of processes, people are people everywhere. They want to do their best. But they are limited by the mores of their culture and workplaces. India is a strongly hierarchical culture and that cripples the dialog that fuels agile development. A ScrumMaster is viewed as a titled position and, in that culture, the right thing for a developer to do is to please him or her when challenged about the progress of a task. So the team often misrepresents the actual state of the product, and that makes transparency difficult.

But, again, it’s the culture and not the individuals. I have a friend, Rune Funch Søltoft, who was working on a complex financial product here in Denmark. One of the Scrum Teams (mainly Swedes) had deemed a set of features impossible to implement and left them on the backlog for two years. The organisation brought in a new team, all from India (from Tieto in Pune), that had never worked together previously, with members of varying levels of experience. They initially exhibited all the Indian dysfunctions that fit Western stereotypes: they tended to be “yes men”; they often worked harder instead of smarter, and they were very failure-averse. Rune worked with them as their ScrumMaster to make it safe for them to fail, to get them to own that feeling, and to get them into a work style that built on learning from failure instead of avoiding it. In two weeks, that team delivered what the Swedish team had been unable to deliver for the past two years. It was a real victory for the Indian team, which became the most productive team at the client.

Q.7 Any message you have for our audience?

James: Always be ready to give a reason for what you do — a reason that comes from your heart and soul rather than some book. Most interviews I do explore that aspect of my outlook and approach rather than, as this one has, the technical and professional stuff. The technical stuff, you can get from a book. That’s the easy part. The hard part is the answer to the question: What, and who, are you becoming? And: What is the path from here to there? I do not stand so much on my accomplishments as on possibility, and it’s my mission to lead others to do the same. Laurels just fuel the fire that keeps the house warm. These questions are key to a Buddhist worldview — which is less foreign to you there in India than to me in Denmark. Reach into your heritage and find keys to doing great things in the world of work.

Agile has become a religion, and is very bad in this regard. Object-orientation has devolved to the same place, and there are many other religions ranging from DDD to Six Sigma and microservices. Most of these are fads that, like TDD and the on-site customer, research and experience eventually discount. Some even put their faith in practices such as kanban, whose inventor says it is only a stopgap measure and a sign of immaturity that should be completely eliminated from one’s systems. The Toyota Production System (on which Scrum is based — not Lean!) tends to come with good arguments as to why its approaches work, so it may be a good starting point. But you need to add a free spirit of innovation. A hard message for Indian managers is: don’t be afraid to fail. A failure-averse culture will never have the courage to realise world-changing innovations. As Martin Luther said: Sin boldly — but believe more boldly still.



James Coplien is the father of Organizational Patterns, one of the founders of the Software Pattern discipline, a pioneer in practical object-oriented design in the early 1990s, and a widely consulted authority, author, and trainer in the areas of software design and organisational improvement.

He has authored books including Pattern Languages of Program Design (with Douglas C. Schmidt), Pattern Languages of Program Design, Volume 2 (with John M. Vlissides and Norman L. Kerth), and Advanced C++: Programming Styles and Idioms.
