How to Evaluate Application Architects

About three years into my career, I was promoted to the coveted title of “Application Architect” at a manufacturing software subsidiary of a Fortune 500. At the time I was a force of youth and passion – but I hadn’t developed the skills necessary to really shine in the position. I was successful in many ways, but I also missed many opportunities that are crystal clear looking back at that early period of my career.

I served as an Architect in several positions after that, but it wasn’t until I started working for Catapult as a consultant that I learned the soft skills necessary to be truly effective. In my nine years of consulting I’ve had the benefit of working alongside client architects who were both great and in need of some help. Though the industry still has no standard definition of what an Architect should do (don’t believe what those certifications try to tell you), there are a few considerations that lend themselves to great outcomes in the position.

Catching Process Breakdowns

Any architect who has worked with a team to deliver a product or service will have stories to tell about what makes a great process for delivering software, and what doesn’t. When talking to a potential Architect, I look for how they have used technology to prevent low-quality deliverables from getting further down the release pipeline.

We’ve probably all worked on a project where the team established a process for testing, code reviews, and managing configuration changes in each of the environments the product gets deployed to before customer eyes see it. It all sounded dandy until someone realized team members weren’t following the process – but by then it was too late. How does the individual use technology to make deliverables fail fast if they aren’t acceptable?

A Realistic Estimation Mindset

One of the hardest things about being a developer is estimating. The more experience we have with a technology, and the better we understand what constitutes acceptable deliverables, the more accurate the estimate. Most projects allow some user stories to be given to the team with too much uncertainty in the technology or holes in acceptance criteria – how does the individual deal with this? A common response to this situation is to get clarification from the product owner – but what happens when there is still significant risk? I like to hear how individuals communicate timeboxes for research or prototyping of an aspect of the unfamiliar technology and how they would determine when too much time is spent on it.

Sharing Pattern Decisions

We’ve probably all worked with a technical “leader” of some sort who has to be the person who sets the pattern that will be used for error handling, validation, messages on a service bus, or a host of other common needs in modern applications. Architects are called to lead – otherwise you end up with the “rockstar” personality who can’t scale, becomes a bottleneck, and poorly utilizes the whole team. While a good Architect will show the team patterns during decisions and help them think through the problems they might hit with them, a great Architect inspires team members to propose patterns themselves and then helps them champion those patterns, giving full credit to the team members.

Mastery of CRUD

The vast majority of modern applications are heavily backed by data, and an application with a poorly designed data model can exhibit poor performance, a high cost to introduce new features, and excessive business logic. Great Architects understand the requirements of products as they emerge and help the team make sound decisions about how to model relationships between data and modify it. They also understand REST not as a technology, but as a principle. I run across a surprising number of Architects who have not developed this skill.

Making Time for the Team

An experienced and effective Architect will not allow themselves to be assigned so much work that they cannot support their team. They know that quality deliverables require support and that this has a cost – and will argue it to the folks approving budget with appropriate vigor. As a leader of one or more teams, an Architect cannot be effective if they are annoyed by questions from more junior team members. Rather, some of the best Architects I’ve worked with love to teach and know that by building up the value of their subordinates they can improve the velocity of the team more than by focusing on themselves.

Expert Communicator and Facilitator

Almost any Architect can create a PowerPoint presentation or write a blog post, but is what they convey at the appropriate level of detail for their audience? Ask an Architect questions about how they explained an aspect of their implementation of a product to customers and other non-technical personnel. How did they get consensus when stakeholders disagreed?

Evaluate their face-to-face communication style – many architects have a hard time listening because they are already designing their response while the other person is still talking. A great Architect takes notes during conversations and strikes a balance between too much detail and not enough. Evaluating this during an interview or conversation is difficult, but most Architects who know how to do this will mention it when the topic is approached.

Patterns as Necessary

The ivory castles Architects build can become wonders of intellectual masturbation at times and lead the rest of the team to ruin through excessive abuse of the single responsibility principle. I look for Architects who know how to select the simplest possible way to meet the requirements, introducing new patterns only when absolutely necessary. I’ve run across one too many projects where the Architect is the only one following a pattern they set. When I talk to other team members, they just don’t understand the pattern and don’t have time to follow it. If an Architect cannot write a simple wiki topic communicating the value of an aspect of the architecture, chances are it may be unnecessary.

Courage to be Honest

The pressure on most Architects to agree to deadlines, estimates, and functionality from the business is incredibly high. While the best Architects I’ve worked with have an affinity for the business and immense respect for the company’s business model, they aren’t “yes men”. Saying yes to an unreasonable ask simply to avoid confrontation is not only irresponsible but also earns you a reputation for being out of touch with the true effort to complete work. The best Architects set appropriate expectations for sustainable paces of work – and spend the time necessary to argue why their methods matter for the good of the products. They help explain why they cannot make progress on a task in terms the business understands.

Hopefully some of these topics will help you when evaluating your next hire or partner who will be architecting one or more of your applications. What other key traits make individuals successful in this position? Share your comments below! Thanks for reading.

Pay off your technical debt by preferring API clarity to generation efficiency

I’ve built the technical aspects of my career on combining technologies from Microsoft, which are easy to sell into enterprises that require the confidence that comes from extensive support contracts and a huge market footprint, with open source technologies that steer the direction of technology ahead of the enterprise curve – eventually to be embraced by it.

Microsoft has always provided powerful tools for developers in its Visual Studio product line. They focus on providing more features than any other vendor, and on having the flexibility to allow developers to design their software with the patterns that make the most sense to them. Because of this, the community is full of discussion, and there are always new ways to combine their technologies to do similar things – but with quite a bit of variance in the architecture or patterns used to get them done. It can be daunting, as a new developer or a new member of a team, to comprehend some of the architectural works of art created by well-intentioned astronauts.

After I learned my first handful of programming languages, I began to notice the things that were different between each of them. These differences were not logic constructs, but rather how easy or difficult it could be to express the business problem at hand. Few will argue that a well-designed domain model is easier to code against from a higher-level layer of your application architecture than a direct API on top of the database – where persistence bleeds into the programming interface and durability concerns color the intent of the business logic.

In recent years, domain-specific languages have risen in popularity; they are employed to great effect in open source projects and are just starting to be embraced in Microsoft’s technology stack. A domain-specific language is simply a programming interface (or API) whose syntax is optimized for expressing the problem it’s meant to solve. The result is not always pretty – sometimes the problem you’re trying to solve shouldn’t be a problem at all due to bad design. That aside, here are a few examples:

  • CSS – the syntax of CSS is optimized to express the assignment of styling to markup languages.
  • Rake/PSake – the syntax of these two DSLs is optimized for expressing dependencies between buildable items and for creating deployment scripts that invoke operating system processes – typically command-line applications.
  • LINQ – The syntax of Language Integrated Query from Microsoft makes it easier to express relationship traversal and filtering operations from a .NET language such as C# or VB. Ironically, I’m of the opinion that LINQ syntax is a syntactically cumbersome way to express joining relationships and filtering appropriate for returning optimized sets of persisted data (where T-SQL shines). That’s not to say T-SQL is the best syntax – but that using an OO programming language to do so feels worse to me. However, I’d still consider its design intent that of a DSL.
  • Ruby – the ruby language itself has language constructs that make it dead simple to build DSLs on top of it, leading to its popularity and success in building niche APIs.
  • YAML – originally “Yet Another Markup Language” (the backronym is now “YAML Ain’t Markup Language”), it is optimized for expressing nested sets of data, their attributes, and values. It doesn’t look much different from JSON at first glance, but you’ll notice the efficiency once you use it often on a real project.

Using a DSL leads to higher cognitive retention of the syntax, which tends to lead to increased productivity and a reduced need for tools. IntelliSense, code generation, and wizards can all take orders of magnitude longer to use than simply expressing the intended action in a DSL’s syntax once you have the most commonly expressed statements memorized, because the keyword and operator set is small and optimized within the context of one problem. This is especially apparent when you have to choose a code generator or wizard from a list of many other generators unrelated to the problem you’re trying to solve.
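To make that concrete, here is a toy dependency DSL in plain JavaScript (the "task" and "run" names, and the whole example, are invented for this sketch rather than taken from Rake or PSake), showing how a tiny, problem-focused vocabulary keeps build statements close to the problem being expressed:

```javascript
// A toy build DSL in the spirit of Rake/PSake. The whole vocabulary is two
// functions, so the statements at the bottom read close to the problem.
var tasks = {};

// Declare a named task with its dependencies and an action.
function task(name, deps, action) {
  tasks[name] = { deps: deps, action: action };
}

// Run a task after its dependencies, skipping anything already run.
// Returns the order in which tasks executed.
function run(name, seen) {
  seen = seen || {};
  if (seen[name]) return [];
  seen[name] = true;
  var order = [];
  tasks[name].deps.forEach(function (dep) {
    order = order.concat(run(dep, seen));
  });
  tasks[name].action();
  order.push(name);
  return order;
}

// The "program" itself stays declarative and easy to memorize:
task('compile', [], function () { /* invoke the compiler here */ });
task('test', ['compile'], function () { /* run the test suite here */ });
task('package', ['compile', 'test'], function () { /* zip the artifacts */ });
```

Calling run('package') walks the dependency graph once, running compile, then test, then package – no wizard or generator needed to remember how.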

Because of this, it will reduce your cycle time to evaluate tools, APIs, and source code generation technologies based not on how much code your chosen IDE or command-line generator spits out, but rather on the clarity, comprehensibility, and flexibility of that code once written. I am all for code generation (“rails g” is still the biggest game-changing productivity enhancement for architectural consistency in any software tool I’ve used), but there is still the cost of maintaining that code once generated.

Here are a few things to keep in mind when considering the technical cost and efficiency of an API in helping you deliver value to customers:

  • Is the number of keywords, operators, and constructs optimized for expressing the problem at hand?
  • Are the words used, the way they relate to each other when typed, and even the way they sound when read aloud easy to comprehend by someone trying to solve the problem the API is focused on? Relatedly, consider how easy it will be for someone else to comprehend code they didn’t write or generate.
  • Is there minimal bleed-over between this API and others focused on solving different problems? Is the syntax really the best way to express the problem, or just an attempt at doing so with an existing language? You can usually tell the latter when you find yourself using language constructs meant to solve a different problem just to make the code easier to read. A good example is “Fluent” APIs in C# or VB.NET that use lambda expressions for property assignment, where the intent of a lambda is to enable a pipeline of code to modify a variable via separate functions. You can see the mismatch in the funky syntax, and in the low comprehension of someone new to the concept without an explanation.
  • Are there technologies available that make the API easy to test but have a small to (highly preferred) nonexistent impact on the syntax itself? This is a big one for me; I hate using interfaces just to enable testability when dependency injection or convention-based mocking can do much better.
  • If generation is used to create the code, is it easy to reuse the generated code once it has been modified?

You’ll notice one consideration I didn’t include: how well the API integrates with existing libraries. This is because a DSL shouldn’t need to – it should be designed from the ground up either to leverage that integration under the covers, or to leave that concern to another DSL.

When you begin to include these considerations in evaluating a particular coding technology, it becomes obvious that the clarity and focus of an API is many times more important than the number of lines of code a wizard or generator can create to help you use it.

For a powerful example of this, create an ADO.NET DataSet and look at the code it generates. I’ve seen teams spend hours trying to find ways to backdoor the generated code, or to figure out why it’s behaving strangely, until they discover that someone created a partial class to do so and placed it somewhere non-intuitive in the project. The arrival of Entity Framework Code First is also a nod toward the importance of comprehension and a focused syntax over generation.

Refactoring to the realities of your delivery process


If you are a developer who writes code (yes, some don’t), you’ve inevitably been boxed into the “refactoring justification corner”. At some point you realize that a task you’ve been assigned affects more than just the code you thought it did, and that you’ve got a deeper design change to deal with.

Earlier in my career, when this happened I was at product companies, and we would just work overtime, get help from another resource, or be late. When it started happening more often, we’d include “refactoring time” in our estimates. Both approaches were insufficient, and they led to management viewing refactoring as “you didn’t do it right the first time”, and to us feeling like we were doing something wrong. This is a manufacturing-economy mindset, with fixed effort and materials, that doesn’t account for the reality of software projects. But we also had things to learn.

I see refactoring as falling into two distinct categories, and which type you are encountering has a big impact on your options when it pops up.

Functional refactoring

The first of these, functional refactoring, occurs when code you originally thought didn’t need to be touched creeps into the picture to complete functional requirements. Basically, if you don’t do this refactoring, you can’t make the feature work. The tension here is stronger when you work directly for the company making the product, because as a dedicated resource you are usually thought of as an expert and held directly accountable for the effect of your actions on the company as a whole. You made a rough estimate, got into the work, and found the effect on the design was bigger than you originally envisioned. Leaders who are uneducated in the realities of the trade see this as you not doing your job correctly.

Since I started consulting 5 years ago, I’ve been lucky to work for an employer (and still do) who understands that this is simply the nature of the beast, and we have processes in place to deal with it. When you aren’t intimately familiar with a codebase – or even with a set of classes in a codebase you deal with all of the time – the rough estimate is just that: rough. As a development or project manager, part of your job is to instill into your culture the understanding that building software is not like building a house: we use materials that are unproven, try to meet requirements that conflict with each other’s goals, work with personnel whose skills are evaluated subjectively, and encounter architectural “works of art” at times. We include this opportunity for changes in complexity during the engagement as something clients must acknowledge as a possibility in our statements of work.

When this happens on a consulting engagement, we ask ourselves: can I do the extra work without disrupting the estimates for my next tasks? If so, we just do it. If not, we schedule time to meet with the client and explain the situation. At that point we offer an estimate for the additional work and give them the chance either to pay for the change or to opt not to do it. On large projects we will occasionally give clients this work without additional fees, but only once or twice, regardless of the size – otherwise we get into a situation where many small changes add up to one big chunk of unpaid work.

At a product company, your process needs to be in line with the realities of the trade in much the same way. Personnel should know that software development is one of the most unpredictable jobs in the world, and that they must be prepared to allow extra time to complete tasks that turn out to have a greater cost. To do this properly, the organization must embed this into its culture, and developers have to feel safe that they can communicate discovered extra effort without being reprimanded. If development leads say it’s OK to communicate this, but then ridicule their developers every time they do, they lose their developers’ respect and will have a hard time keeping their trust through future mandates or cultural changes.

The bottom line is that the business should be able to make factual decisions about what they want to pursue without expecting heroics to save them when unplanned complexities occur. If a task that was estimated at 1 week blows up into something that takes 4, divide the new work into smaller units and throw what can’t get done this iteration back onto the backlog. If the business can’t afford the overall effort, assign the developer a new task and move the remainder to the bottom of the backlog. Agile and Scrum processes allow businesses to react quickly to market and technical changes – they do not predict the future or prevent development teams from encountering unknown complexity.

Cross-cutting refactoring

The second type of refactoring we encounter relates to nonfunctional requirements or patterns.

Before you start iteration one of your project, your business analysts or customer stakeholders should have requirements for nonfunctional aspects of the system. These include things like max response time (pages should load in under 2 seconds), throughput (the system should support 1000 requests per second to page x without causing other performance requirements to be exceeded), auditing (all changes to data should include who made the change, when, and what was changed), and archiving strategies (when do we purge old data). Testers should be able to help developers create automated acceptance criteria at the beginning of the project that run during later phases of your build process to ensure these are being met. You’ll need to create a separate environment that is a clone of production to measure these accurately.
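As a sketch of what one of those automated acceptance checks might look like (the function name, percentile method, and sample numbers are illustrative, not from a real project), a response-time budget like "95% of page loads under 2 seconds" can be asserted over measured samples:

```javascript
// Check a nonfunctional requirement such as "95% of page loads complete in
// under 2000 ms" against response times measured in the production-clone
// environment. Uses the simple nearest-rank percentile method.
function meetsResponseBudget(samplesMs, budgetMs, percentile) {
  var sorted = samplesMs.slice().sort(function (a, b) { return a - b; });
  // Index of the sample sitting at the requested percentile
  var idx = Math.min(sorted.length - 1,
                     Math.ceil(percentile * sorted.length) - 1);
  return sorted[idx] <= budgetMs;
}
```

A later phase of the build process could run this against captured timings and fail the build the moment the budget is exceeded, rather than discovering it after 50 pages are built.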

Cross-cutting refactoring can also occur when patterns are not established prior to starting the project (or as part of the first few iterations). Patterns as crucial as your validation approach, error handling, data access strategy, dependency injection integration points, and security model should be established before any other features start getting built.

The reason for this early priority on patterns and nonfunctional requirements is that refactoring to meet cross-cutting requirements is among the most expensive to encounter, because it typically impacts most code assets in one or more layers or silos of your system’s architecture. If you’ve already built 50 forms and only now change (or come up with) your validation approach, you’ve got 50 existing assets that have to be massaged into following the pattern. The forms may have been implemented in a way that was simple enough to meet the initial requirements, but is not sufficient given the new cross-cutting ones. If you establish these cross-cutting requirements up front, the pattern is available to follow at the start of any task that encounters it, reducing the opportunity for waste from existing incompatible implementations.

As a development lead or manager, it is your responsibility to ensure that time is spent identifying cross-cutting patterns as early as possible in the project. Leverage your business analysts for the nonfunctional requirements, and leverage your developers to identify the patterns. If you don’t do this, it is no fault of your developers that, as you introduce these into the backlog, “visible progress” on the project may slow to a standstill while they are implemented.

It is for precisely this reason that nonfunctional requirements, and the establishment of patterns, need to be backlog items that can be prioritized in their own right. This gives the business the power to decide whether it is more important to accept credit card payments (functional) or to allow 1,000 simultaneous requests (nonfunctional). Refactoring is an important tool to use as necessary when you know what kind you’re dealing with and why it has occurred. The better you get at understanding its causes, the more comprehensive planning you can do to ensure a smooth delivery cycle as the iterations of your project progress.

Re-trusting check constraints in SQL doesn’t help for NULLABLE columns

I’ve been going through a large database for a client of mine, finding foreign key and check constraints that are marked “untrusted”. This happens when a relationship between two tables has rows whose foreign key column values have no match in the related table. When this happens, Microsoft SQL Server’s query optimizer cannot rely on the constraint when matching rows between the two tables during queries. This results in sub-optimal performance.

Unfortunately, as I discovered today, if the foreign key column accepts NULL, you can still run a query to re-enable the check constraint without error – but it will still be marked as “untrusted” in the catalog metadata and will not benefit from the query optimization available to trusted keys!

Hopefully this helps someone out there to reduce the work you need to do when determining a data optimization strategy around dealing with existing untrusted checks.

Razor is sharp, but NHaml is still haiku for HTML 5 and JQuery

A colleague of mine recently told me about Razor, the view engine for ASP.NET MVC 3. Upon researching it and using it in a test project, I almost instantly compared it to NHaml, since I’ve been using HAML for several years doing Rails on the side. What I found is that though Razor is the best view engine I’ve seen from Microsoft (on top of a great version 3 of ASP.NET MVC – nice job, guys), I still believe NHaml’s syntax is significantly better suited to HTML applications, and even more so if they use jQuery.

Though Razor does a great job of requiring minimal characters to insert executable logic between the markup it generates (and is basically equivalent to HAML in that respect), it does nothing to minimize the amount of code you have to write to express the HTML outside of those logic statements. NHaml is simply superior here when you are generating HTML, for this reason: it reduces markup to the minimal information needed to identify the jQuery or CSS selectors applied to elements.

It does this because with NHaml you normally specify the name of an element without the angle brackets, but if the tag you want is a DIV element with an ID attribute, you can just specify the ID prefixed by a hash symbol and drop the DIV altogether, so this HAML:

#blah

renders as:

<div id="blah"></div>



This also works for CSS classes. This dramatically increases code readability because lines of code begin with the JQuery selector or CSS style name used to access them. When writing Javascript or CSS, locating these elements in markup is much easier. This is already on top of the fact that NHaml drops the requirement for closing tags.

Here’s an example that I think illustrates the point. Let’s say I have a CSS style sheet with the following styles. Don’t worry about the attributes in the styles themselves; just look over the list of CSS selector names (container, banner, etc.) and imagine looking at this file day to day:

#container { width: 100%; }
#banner { background-color: blue; } 
.topic { font-size: 14pt; }

Now here’s some HTML markup styled with selectors in the above sheet, in HAML:

#container
  #banner
    .topic Hello

Here’s how you would generate the exact same markup using Razor:

<div id="container">
  <div id="banner">
    <div class="topic">Hello</div>
  </div>
</div>

Building on this, let’s say we wanted to override the .topic style inline with some other styles, and throw in some inline JavaScript. Here’s HAML again:

%style(type="text/css")
  .topic { font-weight: bold }
%script(type="text/javascript")
#container
  #banner
    .topic Hello

and here’s Razor:

<style type="text/css">
  .topic { font-weight: bold }
</style>
<script type="text/javascript">
</script>
<div id="container">
  <div id="banner">
    <div class="topic">Hello</div>
  </div>
</div>

Hopefully you can see the HAML is much easier to read, and noticeably shorter than the equivalent Razor markup in this example.

Here’s another great post from late last year that shows some comparisons of Razor and NHaml.

Using LABjs to get AJAX-loaded ready callbacks

In my previous post I discussed a function I’d come upon through searching that allows HTML appended to the DOM through AJAX, including its SCRIPT tags, to wait until those scripts have loaded before firing ready events. Upon testing it in Firefox, I found that it wasn’t working in some scenarios, so I began searching again and found LABjs. LABjs stands for “Loading And Blocking JavaScript” and allows you to chain the loading of scripts together and end that chain with a function that gets called back. The nice thing is you can choose to use the word “wait” between load methods and it will block before loading the next; if you omit the wait, the scripts will load in parallel, speeding up the load time of your web application.

Here’s an example. Let’s say you have a site that loads a partial section of HTML from an AJAX request and appends it to the page, but that section includes some scripts. Your script depends on two other publicly hosted scripts (Twitter and Google Maps, for example) and can’t execute until those have finished loading. Rather than loading them in your main page and increasing startup time, LABjs handles it perfectly:

Old AJAX response HTML:

<script src=""></script>
<script src=""></script>
<!-- This can't execute until the first two scripts have loaded, but not all browsers load these in order in a dynamic update to the DOM! -->
<script src="myscript.js"></script>

Old myscript.js contents:

$(document).ready(function() {
    // This will fail if the first two scripts haven't loaded yet
    var script1Object = new PublicWebService1.APIObject();
    var script2Object = new PublicWebService2.APIObject();
});

Replacing this with LABjs, the AJAX response HTML becomes:

<script type="text/javascript">
$LAB.script("").script("")
    .script("myscript.js").wait(function() {
        var script1Object = new PublicWebService1.APIObject();
        var script2Object = new PublicWebService2.APIObject();
    });
</script>

Accessing dynamically loaded scripts with JQuery via AJAX

I’ve been working on a side project in Rails that uses AJAX for all posts to the server, with jQuery BBQ for state management (using the hashchange event to provide bookmarkable views), and ran into a problem that can happen to anyone using jQuery. AJAX calls often retrieve HTML that contains script tags and then place that HTML into the calling page’s DOM. Usually one can use jQuery’s ready event to wait for the document to finish loading before accessing objects in those scripts. However, when a WebKit-based browser (such as Google Chrome) loads this content, it doesn’t wait for the scripts to load before firing the ready event. In my case I was implementing reCAPTCHA for image verification of users signing up for my site, and the JavaScript they provide was getting inserted into my registration page via AJAX.

After massaging Google keywords to find the right query and paging through several results, I found a post with a great solution. Basically, the code parses your HTML on the client for script tags and loads them separately before returning from the function. I’m now using it anywhere I inject HTML containing SCRIPT tags into the page via AJAX.
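I won’t reproduce that post’s code here, but the idea can be sketched roughly as follows; the function name and regex are my own simplification, not the original author’s:

```javascript
// Split an AJAX-returned HTML fragment into its external script URLs and the
// remaining markup, so the scripts can be loaded first (e.g. one at a time
// via $.getScript callbacks) before the markup is injected and any
// ready-dependent code runs.
function extractScripts(html) {
  var scripts = [];
  var markup = html.replace(
    /<script[^>]*src="([^"]*)"[^>]*>\s*<\/script>/gi,
    function (wholeTag, src) {
      scripts.push(src);   // remember the URL for sequential loading
      return "";           // strip the tag from the markup we insert
    });
  return { scripts: scripts, markup: markup };
}
```

With jQuery, you would then load each entry in scripts with $.getScript, and only inject markup and run your initialization once the last one has called back.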

The Minimalist Development Movement

Over the past 11 years, .NET technology, fueled by Microsoft’s ability to deliver sophisticated development tools, has arguably ruled the enterprise business software landscape. IntelliSense, drag-and-drop UI design, XML configuration, dependency injection, unit testing, IDE extensibility APIs for third-party controls, continuous integration, and more attempt to ease the use of Agile and Scrum processes as the Visual Studio IDE supports more and more of these features through wizards and templates. .NET started as a better version of Java, and as such inherited many of its powerful capabilities, but also the limitations of that development and deployment approach.

However, in the past several years a move toward a minimal development tool mindset has begun. This is made possible by the creation of more sophisticated frameworks that establish conventions and set constraints on how to go about implementing things. These constraints reduce the number of API calls to remember and the number of front-end technologies a typical developer needs to be fluent in. As a byproduct, the required capabilities of development tools are also reduced. Rails (which inspired ASP.NET MVC), JavaScript MVC frameworks built on CoffeeScript, and mobile frameworks like Appcelerator Titanium all take this approach. They provide the framework and API, and you use the development tool of your choice. Because the framework limits what you can do, but elegantly provides 80% of what you need in most applications, you don’t need an IDE that does so much for you. Extreme minimalists use Vim, Emacs, or TextMate; Aptana is a popular editor with first-class support for CSS, HTML, HAML, Rails, JavaScript, and many other minimalist technologies, and it might be a little more approachable to a seasoned .NET developer.

Visual Designers for 20%

However, taking part in this new shift requires a different mindset. What if you had to do all of your user interface development without graphically previewing it first? A dirty little secret in many Microsoft shops is that we rarely use the UI designers anyway. Clients and customers are always asking for features that negate the productivity enhancements touted by RAD design and force us into the code to do more sophisticated things. I’ll argue that this is due to an inferior separation of concerns in ASP.NET, and not simply because you’re doing something more complicated. If your framework requires you to break patterns to do something complex, how good a framework is it? When a development tool only really shines for the minority of projects, you’re on the losing end of the 80/20 rule. When you design tools that focus on letting developers visually design things, you are continuing to treat UI assets as single units that encapsulate their behavior, presentation, and data. Modern frameworks that separate these concerns make it difficult (if not impossible) to visually represent things as they appear at runtime, but the tradeoff is an increase in productivity due to patterns that decouple responsibilities.

Interpretation vs. Compilation

What if you didn’t compile your project to test it out? Some applications legitimately require compilation due to performance constraints. But if quality is the concern, the efficiency gains these frameworks afford, coupled with a disciplined automated testing approach, negate that concern.
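
To make the idea concrete, here’s a minimal Ruby sketch (the `Invoice` class and its figures are invented for illustration): a fast automated test exercises the code path a compiler would otherwise have checked at build time.

```ruby
# In a compiled language, a misspelled method or bad call fails at build
# time; in an interpreted language it fails at runtime. A fast automated
# check that actually runs the code provides the equivalent safety net.
class Invoice
  def initialize(amount)
    @amount = amount
  end

  def total_with_tax(rate)
    (@amount * (1 + rate)).round(2)
  end
end

# The "compile step" is simply running this check before each deploy:
invoice = Invoice.new(100.0)
invoice.total_with_tax(0.05)  # => 105.0
```

Run as part of a pre-deploy test suite, a check like this fails fast on typos and broken calls, which is the feedback the compiler used to give you.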

Document what you’ve built instead of what you want

What if you don’t create requirements documents, but rather rapidly implement and write tests to serve as the documentation for what the system currently does? We already know from years of Scrum and Agile debates that documenting a system up front more often than not results in bad designs, slipped deadlines, and stale documentation. Most customers and clients are not system analysts and so can’t be expected to communicate all of their needs on the first try. A picture is worth 1000 words, and we’ve all been in the meeting where the customer advocate is shown what was built based on their design and realizes things are missing that they didn’t communicate. Doesn’t it make sense to use a development process that encourages and adapts to this situation instead of fighting it?
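
As a hedged sketch of what tests-as-documentation can look like (the `Order` rule here is invented for illustration), a Minitest case reads as a statement of what the system currently does:

```ruby
require "minitest/autorun"

# A hypothetical domain rule, documented by an executable test instead of
# an up-front requirements document. The test name and assertion together
# describe current behavior, and they can never go stale without failing.
class Order
  def initialize(line_items)
    @line_items = line_items  # array of [unit_price, quantity] pairs
  end

  def total
    @line_items.sum { |price, qty| price * qty }
  end
end

class OrderTest < Minitest::Test
  def test_total_is_the_sum_of_price_times_quantity
    assert_equal 25, Order.new([[10, 2], [5, 1]]).total
  end
end
```

When the customer advocate spots something missing, you add a failing test that documents the new expectation and implement until it passes.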

Contrary to popular belief, pulling this off requires being a *better* developer, one who follows patterns even more rigorously than before. Developers also need to communicate with stakeholders more and incrementally deliver *tested* features. Improving a developer’s ability to communicate has all kinds of other benefits as well: the ability to clarify requests, think outside the box, and generally be more pleasant to work with.

If the thought of letting your tests provide your documentation sounds crazy, tell that to Sara Ford.

Get better at learning your framework instead of fighting it

We’ve all been in the code review where someone implemented an API call that already exists in the .NET Framework. If we’re honest with ourselves as developers, we really don’t keep much of the .NET technology stack in our heads; we just know how to use Google well. If we reduced the number of patterns and APIs used in our solutions, we could retain that knowledge and know the best way to leverage the framework to do what we need instead of fighting it. ASP.NET MVC and Rails both exhibit this, and I’ll argue Rails does a better job. ASP.NET MVC won’t complain if you make the mistake of throwing a bunch of logic in your view, whereas in Rails you really have to fight the framework to instantiate classes in a view. As DHH says, “constraints are liberating” (start at 3:50 in).
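
The constraint in practice looks something like this sketch (the `PostsHelper` module and the post hash are invented for illustration, not actual Rails API): the view template calls one helper and makes no decisions itself, so the formatting logic lives in a single, testable place.

```ruby
# Push logic out of the view: the template only calls the helper method,
# and the conditional formatting decision lives here where it can be
# unit tested. (Names are illustrative, not a real Rails helper.)
module PostsHelper
  def formatted_publish_date(post)
    return "Draft" unless post[:published_at]
    post[:published_at].strftime("%B %-d, %Y")
  end
end

include PostsHelper

formatted_publish_date({ published_at: nil })  # => "Draft"
formatted_publish_date({ published_at: Time.new(2010, 1, 5) })
```

The framework that makes it painful to write this logic inline in the template is doing you a favor: the constraint pushes you toward the testable design.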

The challenge

If you could challenge yourself with one technical hurdle this year, would you rather learn another API? Perhaps a way to interface with a new business system? Or would you rather experience a shift in your approach to development that has long lasting, measurable effects on your ability to deliver rapid value and makes you more attractive to established companies and startups? To do so requires several things.

  1. Approach these new patterns and capabilities without attempting to compare them to existing methods. As humans we love to do this, but often we get caught up in analysis paralysis or think we’ve “got it” when we grasp just one of the many innovations in these newer frameworks.
  2. Do not declare competence with these frameworks until you’ve actually grasped them in their entirety. Learning Rails without understanding caching, convention-based mocking in tests, or the plugin architecture is like learning C# but ignoring generics and lambda expressions.
  3. Don’t try to figure out how to shim legacy .NET patterns into these frameworks. You wouldn’t expose a low-level communication protocol through a web service or REST API where clients are expected to allocate byte arrays, so why would you try to host a third-party ASP.NET control in MVC or access a database using T-SQL from an MVC view? Sure, you can do it, but you’re missing the point, which is to embrace new patterns and learn to abstract the old way of doing things. We’ve been doing it with .NET for years; now let’s see if we can do it when legacy .NET patterns are what we’re abstracting.

The state of Inductive User Interface (IUI): here to stay

Back in 2004 I left Rockwell Software (a subsidiary of Rockwell Automation) to join a small company that needed technical and design leadership for a potential touchscreen application for the pharmaceutical industry. As part of my design, I had to do research on usability of touchscreen interfaces. This led me to study panel designs in Japan and the manufacturing industry and eventually Inductive User Interface (IUI). IUI is a user interface design approach that emerged with Microsoft Money 2000 and has infiltrated many of the software applications we use today, most notably Windows itself.

IUI emerged from the realization that there are two different classes of software use. The first is deductive use, common to power users such as myself, who are typically programmers, IT personnel, or other technically sophisticated roles in an organization. When a new application is installed on our computer, we flip through the menus at the top of the window or hover over the toolbar buttons’ tooltips to find out what’s available. Sure, there is online help, but we can usually deduce what to do simply by exploring these common controls. Deductive user interfaces are better for people who use the same application over and over again. Once a graphic artist has used Photoshop or Illustrator for a while, they want many actions available in the same place on the same screen. An IT administrator likewise needs a single page they can use to administer a user account and modify anything about it.

Microsoft realized that the other way most people use software is infrequently. The most common example of this is the Control Panel in Windows. Many home users of Windows rarely use the Control Panel, so when they go in to configure settings they don’t remember anything at all about the screens. IUI makes the user interface’s purpose on each screen explicit – there is no exploration or guesswork involved. It also forces UI designers to break the software up into more steps or screens, and as a result it will slow the process down some for power users. However, Microsoft (correctly) determined that the 80/20 rule applies here – only 20% of their OS’s users are power users.

This screenshot from Microsoft’s IUI guidelines whitepaper (published 2001 but still very relevant as I’ll get to) illustrates the paradigm shift well:


Microsoft Money 99 “Account Manager” screen

The first thing a user thinks when reaching the page is “what can I do here?” The title says “Account Manager” but the primary purpose of the screen isn’t clear. There are buttons on the bottom of the screen that allow the user to do something, but they are disabled until an item from the list is selected. Additionally, there’s no easy way to get back to the previous page or a “home” screen of sorts.

Here’s Money 2000’s version of the same screen:


Microsoft Money 2000 “Pick an account to use” screen

Here the purpose of the screen is clear – “Pick an account to use”. Once the user selects an account from the hyperlinks on the right (which still show the rollup dollar information) they are then presented with a new page with links that allow them to do things with that account. Links are available allowing them to navigate back to the previous page. Additionally, IUI allows for a “task panel” of actions on the left or right side of the screen that are related to account management but not a single account itself.

Read the full whitepaper for more information on the background of IUI. Interestingly, Microsoft’s Windows Vista and, more recently, Windows 7 User Experience Interaction Guidelines don’t mention IUI explicitly, but the operating system’s common controls are designed for exactly these types of user interfaces. Specifically, check out Silverlight and WPF’s navigation framework, which lets you create separate pages easily with built-in browser-like “back” and “forward” buttons. A look at any Windows control panel applet, many screens and dialogs in Visual Studio 2010, and the task panes in Office shows that IUI is a powerful tool throughout Microsoft’s suite. Other vendors like Intuit, Apple, and Google regularly use IUI in store checkouts, configuration screens, and other infrequently used parts of their applications.

I’d encourage you to learn the IUI design principles and apply them to the way you design software for your clients to give them more usable interfaces. Here are a few of the ones I use the most (plus some I’ve come up with on my own).

  • Select a title that asks the user to do something.

    • Don’t use the words “and” or “or”. If this happens – you need to design two screens!
    • For screens that allow the user to review something, make the title “Review the details of this <object>”.
  • Include an instructions line below the title that gives additional information about what’s presented. For example in the Money 2000 screen above this line might state “Account totals are listed to the right of each account. To perform more actions on the account, click on its name.”.
  • Avoid horizontal scrolling (and vertical if on a mobile device or tablet/touchscreen!).

    • If you have a grid, show only the most important columns of info about an object being viewed.
    • Allow the user to select the items in the grid to get more information about them.
    • Alternatively, don’t use a grid and use custom XAML or HTML markup that creates rows of detail/header information.
  • Place “global” actions (like “Home” or “Logout”) in a dedicated panel at the top or bottom of the screen where they are always available.
  • Make mockups of the main screens in Balsamiq, SketchFlow, or another low-fi mockup application.

    • Once you have the mockups go through the flow of the application and look for opportunities to connect related screens through related actions on the task panel.
    • Identify user roles that need access to each page and/or link and design your permission system around this.
  • Design screens after domain objects in the business’s natural language, e.g. Orders, User Accounts, Patients, Subscriptions, etc. Then design the related screens for each domain object.
  • When you have more than 5 actions that can be taken on an object selected in a previous screen, break the actions up into categories and make the user select a category first. These are “category navigation pages” and I’ve found numerous cases where they make an application highly usable.
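
The “no ‘and’ or ‘or’ in a title” guideline above is mechanical enough that it can even be checked in code. Here’s a hypothetical helper (invented for illustration, not part of any framework) that flags titles which suggest a screen should be split in two:

```ruby
# An IUI screen title should ask the user to do exactly one thing, so
# "and"/"or" in the title signal that the screen needs to be split.
# This is an illustrative lint-style check, not a real library function.
def valid_iui_title?(title)
  words = title.downcase.split(/\W+/)
  !(words.include?("and") || words.include?("or"))
end

valid_iui_title?("Pick an account to use")      # => true
valid_iui_title?("Create and manage accounts")  # => false
```

A check like this could run against a mockup inventory or a screen registry so that violations are caught during design review rather than after implementation.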

IUI is not a perfect fit for every application, but many product managers and power users at companies designing new versions of their applications, or new products altogether, have no background in modern UI design and are unaware of this approach. They can also be tainted by previous experience and knowledge, leading them to believe their user base doesn’t need things broken up into “so many screens.” Use the research Microsoft has provided, along with examples of modern applications, as leverage to open up the discussion and enable them to take part in designing the software in a more usable fashion. Whenever I’ve had the opportunity to use this approach, the results have been eye-opening to everyone involved, with a dramatic effect on users’ understanding of the big picture of their flow of work.

I’m glad Microsoft is embracing convention over configuration

I read Agile Web Development with Rails while visiting San Diego a couple of years ago and was blown away by what a well-put-together framework Rails is. What the book helps you realize is that if you follow certain naming conventions for your code artifacts (in this case Ruby source files), the framework automatically wires up communication between the different architectural layers of your application.
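
A toy sketch of the idea (this is an illustration of convention over configuration, not Rails internals): a single resource name determines both the controller class and the view path, so no explicit wiring between layers is ever written.

```ruby
# Convention over configuration in miniature: from one resource name,
# the "framework" derives where everything else lives. The developer
# never registers these mappings anywhere.
def controller_class_for(resource)
  "#{resource.capitalize}Controller"   # "posts" => "PostsController"
end

def view_path_for(resource, action)
  "app/views/#{resource}/#{action}.html.erb"
end

controller_class_for("posts")    # => "PostsController"
view_path_for("posts", "index")  # => "app/views/posts/index.html.erb"
```

Because the names are derived rather than configured, there is no XML or registration code to keep in sync, which is exactly the wiring Rails (and later ASP.NET MVC) eliminates.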

With the recent release of ASP.NET MVC 1.0, Microsoft’s answer to Ruby on Rails, Microsoft has provided what seems to me a simpler approach to web applications, one that lends itself to testability far better than the oft-complicated event model of existing ASP.NET web applications.

I also downloaded Silverlight 3 Beta, Expression Blend 3 Beta, and Microsoft’s Rich Internet Application (RIA) toolkit preview. The new version of Silverlight has a ton of controls, and I love that editable forms with built in wiring up to validation are included out of the box!

With the RIA toolkit installed, you can create a data model with Entity Framework in your web application, create a special link to it in your Silverlight “client” project, and wire up similarly named domain objects for databinding in your Silverlight project; the databinding hits the server over REST transparently. It’s very slick.
