- Why a top-down approach is better than a bottom-up one
- Looking at user experience from an architectural perspective
- Why UXDD is beneficial to nearly everybody
> Talk is cheap. Show me the code.
> —Linus Torvalds
It’s common to hear developers complain that customers change their mind too often. The narrative goes something like this: “We discussed requirements for weeks and signed off on specifications. Then we started coding, and when we delivered the first sprint two weeks later, we found out that the program was only vaguely close to what they really wanted.” This experience is summarized in a popular cartoon where the waiter gets a complaint about the coffee he just served. “We use top-quality coffee and the best machine available. What’s wrong with the coffee, sir?” And the customer’s answer is kind of shocking: “I actually want some tea.”
The process of elicitation has always been difficult, and the Ubiquitous Language pattern I discussed in Chapter 1, “Conducting a thorough domain analysis,” as the foundation of Domain-Driven Design (DDD) addresses the topic of communication between the various stakeholders involved in a software project. However, agreeing on abstract requirements and specifications is often not enough. When customers actually see the artifact, they might not like it. Despite all the conversations you had with them, the idea they formed might turn out to be quite different from yours.
In other words, to reduce the cost of software development, and to reduce the number of iterations needed to figure out exactly what users want, an additional level of agreement must be found beyond the Ubiquitous Language. Creating a common language shared by all stakeholders and widely used in all spoken and written communication is of immense help in ensuring that each word is understood correctly and, subsequently, that software specifications are correct.
You know, however, that talk is cheap, and to give customers a realistic perspective of what you are going to build, you should show some code. But code is also expensive to produce, and nobody likes the idea of writing code that might be thrown away if assumptions that were never clearly resolved by the specifications turn out to be wrong.
In this chapter, I’ll present UX-Driven Design (UXDD). UX stands for user experience, and UXDD is a top-down approach to implementing whatever supporting architecture you selected for the system. UXDD differs from most commonly used approaches in that it emphasizes the presentation layer and the actual screens the user will end up working with. The main trait of UXDD is that, before you get into coding mode, you have customers sign off on wireframes and storyboards for each task offered through the presentation.
Why a top-down approach is better than a bottom-up one
In the course of history, many great ideas have first been sketched out on paper napkins in cafeterias. This is because hand-drawing is still an excellent way to jot down ideas, whether they concern the top-level architecture of a system or the user interface the actors will use for their interactions. More often than not, customers have a hard time explaining what they want; on the other hand, they are not expected to explain in full detail the experience they want. It is the development team that should grasp the key points and learn from real processes to mirror them in software.
If you agree with this vision of the software world, you also agree that the role of presentation is far more important than it has been in past decades. The term top-down is nothing new in software, and it is often used in the context of code. Professor Niklaus Wirth, the inventor of Pascal, was among the first to coin and use the term extensively.
The point I want to make here, though, is architectural. Architecturally speaking, I dare say that in past decades we never applied any top-down design approach. Everything we did was done to build the system from the bottom up. It’s about time we consider a different approach to reduce development costs.
Foundation of the bottom-up approach
As I see things, we keep on designing and building software the way we have for at least the past 15 years. However, a lot has changed over that time, both in client and server software and, more than anything else, in actual users’ expectations.
Assets of the 1990s
The onion diagram in Figure 3-1 shows the key architectural assets of the 1990s. Most systems were designed in a way that took the most advantage of the facts depicted in the figure.
FIGURE 3-1 Assets of the software architecture in the 1990s
In the 1990s, the IT department in most companies was built around a hugely powerful server that cost a lot of money and had to be used as much as possible. The server ran all the business logic and took care of all persistence tasks. In front of that server, you typically had a few far slower personal computers acting as dumb terminals with just a nice Microsoft Visual Basic user interface. More than anything else, though, in the 1990s there was a mass of users passively accepting whatever UI constraints software engineers imposed on them.
The presentation layer was simply disregarded, and all design efforts focused on getting the most out of the powerful server in which the company had invested all that money.
What’s different today?
Today we live and write software in a totally different world. Take a look at the same onion diagram for modern times in Figure 3-2.
FIGURE 3-2 Assets of the software architecture today
First and foremost, today we have an amazing number of fancy technologies and myriad client devices. This poses new challenges for software architects and also results in users actively dictating user-interface features instead of passively accepting whatever is offered. Today, and even more so in the future, a poor user experience might become a serious issue and undermine the reputation of the software. What you see happening for mobile apps—with many downloaded and soon dismissed—might become the norm for all applications.
What DDD has changed
DDD was the first serious attempt to change things and adapt mainstream software architecture to the changing times. Before DDD, the mainstream architecture was essentially built from the ground up: a solid relational model served as the foundation, with business-logic components placed on top of it. These were mostly vertical components that organized behavior on a per-table basis. Data-transfer objects (DTOs) or ad hoc data structures such as recordsets were used to move data across layers and tiers, up to the presentation layer.
DDD changed a few things, but mostly it contributed to rethinking the overall architecture layout. (See Figure 3-3.)
FIGURE 3-3 How DDD changed the core software architecture
DDD led to splitting the monolithic business logic into two smaller and logically distinct pieces: application logic and domain logic. Application logic is the part of the business logic that implements the workflows behind use cases. Domain logic, conversely, is the part of the business logic that implements business rules that don’t vary with the use case. In leading this change of approach, DDD introduced the notion of the domain layer, the segment of the architecture where you provide a software model for the business domain. Such a software model doesn’t have to be an object-oriented model. It can be whatever you reckon works best, including an anemic model, a functional model, or even an event-based model.
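The split can be captured in a few lines of code. What follows is a deliberately minimal sketch with hypothetical names (Order, CheckoutService): the domain class enforces a rule that holds in every use case, while the application service scripts the workflow behind one specific use case.

```python
class Order:
    """Domain logic: rules that hold regardless of the use case."""

    def __init__(self, total: float):
        self.total = total
        self.paid = False
        self.shipped = False

    def pay(self, amount: float) -> None:
        if amount < self.total:
            raise ValueError("Payment does not cover the order total")
        self.paid = True

    def ship(self) -> None:
        # Invariant: an order can never ship before it is paid,
        # no matter which use case triggers the shipment.
        if not self.paid:
            raise RuntimeError("Cannot ship an unpaid order")
        self.shipped = True


class CheckoutService:
    """Application logic: the workflow behind the 'checkout' use case."""

    def checkout(self, order: Order, payment: float) -> str:
        order.pay(payment)      # step 1 of this use case's workflow
        order.ship()            # step 2 of this use case's workflow
        return "order shipped"  # outcome reported back to the presentation
```

Note how the invariant lives in the domain class: a different use case, say a gift workflow, would get its own application service but would reuse the very same Order rules.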
Ultimately, what years of DDD really changed in software architecture is the perception that the data model is the foundation on which to build software. With DDD, this vision started shifting toward using a domain model to serve as the foundation of software. Today the trend is shifting even more toward using events as the data source and event-based data stores on top of canonical data stores such as relational or document NoSQL data stores.
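To give a feel for the event-based trend, here is a minimal, illustrative sketch (names invented for the purpose) of treating events as the data source: instead of persisting the current state, you append records of what happened and rebuild the state by replaying them.

```python
events = []  # the "event store": an append-only log of what happened


def record(event_type: str, amount: int) -> None:
    """Persist the fact, not the resulting state."""
    events.append({"type": event_type, "amount": amount})


def current_balance() -> int:
    """Project the current state by replaying every recorded event."""
    balance = 0
    for e in events:
        if e["type"] == "deposited":
            balance += e["amount"]
        elif e["type"] == "withdrawn":
            balance -= e["amount"]
    return balance


record("deposited", 100)
record("withdrawn", 30)
print(current_balance())  # → 70
```

The log never loses information: a canonical store, relational or NoSQL, can still be derived from it at any time as just another projection.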
Planning with a top-down approach
In spite of all the changes we have faced in recent years, I believe we keep on designing code as we used to do back in the 1990s. We develop a good understanding of the system and build a data model that probably works. On top of that, we then build what we consider a good enough user interface. Then we go to the customer and find out we’ve got something wrong. The more we iterate, the more the software project ends up costing.
To improve things, we have to recognize that, to users, the system is the user interface they work with. When we can ensure that the UI, and the resulting UX, is really close to what users expect, the chances of redoing work because we got something wrong drop significantly.
To get there, though, we must start planning the system in a top-down way, putting the UX and the presentation at the top of our concerns.
Avoiding a design with square pegs and round holes
When it comes to software, most user expectations are met, or missed, in the screens people use to do their actual job. If you have a mass of passive users, you can afford to build the foundation of the system from the bottom. Whatever model you end up with works for passive users, but it doesn’t work as well when users expect a specific UI and UX to work with. (See Figure 3-4.)
FIGURE 3-4 Role of passive and active users in overall architecture design
If users are willing to accept any UI you offer them, building a system from the bottom up works nicely enough. However, if users expect a specific UI and are not very forgiving on that point, the endpoints developed out of a model built from the bottom up might not fit the connection points the presentation layer calls for. This is precisely the conflict that takes a lot of iterative work to fix and that produces the highest costs and the greatest amount of annoyance and misunderstanding. It all looks like trying to fit a square peg into a round hole.
Going the other way, from top to bottom, instead ensures that the firm points are those that users want to have. Whatever back end you then build to support those firm UX points won’t diminish the users’ level of satisfaction. Put another way, the entire back end of the system becomes a huge black box underneath the agreed-upon presentation screens and forms.
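The black-box idea can be expressed in code by fixing the presentation-facing contract first. In the following sketch, with hypothetical names throughout, the screen model describes exactly what the agreed-upon screen shows, the abstract class is the contract the UX imposes on the back end, and a stand-in implementation is enough to sign off on the screens before any real back end exists.

```python
from abc import ABC, abstractmethod


class InvoiceScreenModel:
    """Exactly the data the agreed-upon screen displays, nothing more."""

    def __init__(self, customer: str, total: str):
        self.customer = customer
        self.total = total  # already formatted for display


class InvoiceBackEnd(ABC):
    """The contract the UX imposes on whatever back end gets built."""

    @abstractmethod
    def load_invoice(self, invoice_id: int) -> InvoiceScreenModel: ...


class FakeInvoiceBackEnd(InvoiceBackEnd):
    """Stand-in good enough to validate screens with the customer."""

    def load_invoice(self, invoice_id: int) -> InvoiceScreenModel:
        return InvoiceScreenModel(customer="ACME", total="$120.00")


screen = FakeInvoiceBackEnd().load_invoice(42)
print(screen.customer, screen.total)  # → ACME $120.00
```

Once the screens are signed off, any implementation of InvoiceBackEnd, relational, event-based, or otherwise, can replace the fake without touching the presentation.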
Establishing two architect roles
UXDD pushes a top-down design of the software architecture. In this scenario, you might find it useful to employ two distinct architect roles on a project. By architect role, I’m not suggesting you hire two distinct professionals; I’m suggesting you need two distinct sets of skills, which might well be found in the same individual. One role is the classic software architect role. The other is the UX architect role.
The software architect conducts interviews to collect requirements and business information, with the declared purpose of building the domain layer of the system. The UX architect, on the other hand, conducts interviews to collect usability requirements, with the declared purpose of building the ideal user experience and the ideal presentation layer.
Understanding the responsibilities of a UX architect
The pillars of a good user experience are summarized in the following list. Note that the order is not coincidental:
- Organization of the information
- Interaction model
- Review of the actual usability
For a UX architect, the first point to look at is the organization of the information presented to the users, including identifying the personas—namely, the types of users working on the application. Next comes the way in which users are allowed to interact with the displayed information and the graphical tools you provide for that to happen.
All this work is nothing without the last point: usability reviews. A UX expert can interview customers a million times, and actually should, but that will bring about only an understanding of the customers’ basic needs. This leads to some sketches that can be discussed and tweaked. A high rate of UX satisfaction is achieved only when the interface and the interaction form a smooth mechanism in which neither introduces usability bottlenecks or roadblocks into the process.
For a UX expert, talking to users is fundamental, but it’s not nearly as important as validating the user interface live in the field, observing users in action and, if possible, even filming them.