Thursday, December 14, 2006

Pragmatic Software Design

Is there such a thing as minimal design? In my experience there is good design, bad design or no design at all. The easy throw-away line that minimal design and coding is the way to go simply does not stand up to any rational test. The problem is how to identify and eliminate poorly designed pieces of code while recognising and keeping the well-designed ones.

How do we identify bad code? The easiest way is to follow a set of rules and match the code against them. Design antipatterns are a good place to start. Martin Fowler and colleagues, in Refactoring, speak of code smells: they give a taxonomic breakdown of the smells that emanate from bad code and the steps to take to rectify each problem.
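
As a sketch of the idea, here is one smell from Fowler's catalogue, Long Method, and its standard cure, Extract Method. The function names and figures are hypothetical:

```python
# A hypothetical Long Method smell: banner printing, calculation and
# output are all tangled into one routine.
def print_owing_smelly(orders):
    print("*" * 20)
    print("Customer Owes")
    print("*" * 20)
    outstanding = 0.0
    for order in orders:
        outstanding += order
    print(f"amount: {outstanding}")

# After Extract Method, each intention has its own name.
def print_banner():
    print("*" * 20)
    print("Customer Owes")
    print("*" * 20)

def calculate_outstanding(orders):
    return sum(orders)

def print_owing(orders):
    print_banner()
    print(f"amount: {calculate_outstanding(orders)}")
```

The behaviour is unchanged; only the structure improves, which is the whole point of a refactoring step.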

Can we recognise good code? If we start with design patterns, enterprise patterns, language-oriented idioms and set the goal of explicitly naming the patterns and idioms in our code then we have a metric of pattern density, or as I like to call it, the emergence of pattern poetry.
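
By way of illustration, explicitly naming a pattern might look like the following, where the Strategy pattern's roles appear directly in the class names (all names here are hypothetical):

```python
# A sketch of naming a pattern explicitly in code: the Strategy pattern
# for pricing, with the pattern role visible in the class names.
from abc import ABC, abstractmethod

class PricingStrategy(ABC):
    """The Strategy interface: interchangeable pricing algorithms."""
    @abstractmethod
    def price(self, base: float) -> float: ...

class RegularPricingStrategy(PricingStrategy):
    def price(self, base: float) -> float:
        return base

class DiscountPricingStrategy(PricingStrategy):
    def price(self, base: float) -> float:
        return base * 0.9  # a flat 10% discount

def checkout(base: float, strategy: PricingStrategy) -> float:
    # The context depends only on the named abstraction.
    return strategy.price(base)
```

A reader who knows the pattern vocabulary can recognise the design intent from the names alone, which is what a pattern-density metric would reward.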

The emergence of structure from goal-seeking behaviour among autonomous agents is a characteristic of all distributed systems including software developers. Some might call it, with apologies to John Vlissides, pattern hatching.

What can be done to rectify poor code? The baby steps of refactoring are a good starting point for improving code structure in the small. The details of implementation can be improved directly by making incremental changes to the code. To ensure that the behaviour of the code remains correct, it is necessary to have unit tests that exercise each unit of code and confirm that its invariants are satisfied as expected.
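
A minimal sketch of a unit test guarding an invariant, with hypothetical names, might look like this: the invariant is that an account balance never goes negative, and the test exercises both the normal path and the boundary.

```python
import unittest

class Account:
    """Invariant: balance is never negative."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class AccountTest(unittest.TestCase):
    def test_balance_never_goes_negative(self):
        account = Account()
        account.deposit(10)
        account.withdraw(5)
        self.assertEqual(account.balance, 5)
        # The invariant holds even when a withdrawal is refused.
        with self.assertRaises(ValueError):
            account.withdraw(100)
        self.assertEqual(account.balance, 5)

if __name__ == "__main__":
    unittest.main()
```

With such a test in place, each refactoring baby step can be verified immediately against the unit's stated invariants.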

Test-Driven Development, in concert with Mock Objects for unit testing, helps to ensure that the expected behaviour of each unit can be modelled, tested and implemented. One corollary condition is that the contract for each unit is expressed as an interface or abstract base class, so that both production and mock implementations can be provided for units to test against.
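
As a sketch of that corollary, with hypothetical names throughout: the contract is an abstract base class, the production implementation talks to the real world, and a mock records interactions so the unit under test can be verified in isolation.

```python
from abc import ABC, abstractmethod

class MailSender(ABC):
    """The contract that both implementations must honour."""
    @abstractmethod
    def send(self, to: str, body: str) -> None: ...

class SmtpMailSender(MailSender):
    """Production implementation (stubbed here for illustration)."""
    def send(self, to: str, body: str) -> None:
        raise NotImplementedError("would talk to a real SMTP server")

class MockMailSender(MailSender):
    """Mock implementation: records calls instead of sending mail."""
    def __init__(self):
        self.sent = []
    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))

def notify(sender: MailSender, user: str) -> None:
    # The unit under test depends only on the contract.
    sender.send(user, "your build is ready")

# The unit test exercises notify() against the mock.
mock = MockMailSender()
notify(mock, "alice@example.com")
assert mock.sent == [("alice@example.com", "your build is ready")]
```

Because `notify` depends only on the abstract contract, the mock can be swapped in without touching production code.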

Unit testing, also known as glass-box or white-box testing, is concerned with assuring that the internal behaviour of each component is correct. Integration testing, or black-box testing, is by contrast concerned with validating the external behaviour of various integrated sets of components against the expectations of the customer. In this sense, acceptance tests and integration tests are synonyms for tests that are specified by the customer or derived from the user requirements, otherwise known as the system definition.

It is perfectly clear to most developers that unit testing is sufficient to ensure that components will successfully integrate and interoperate to fulfil the purpose for which they were designed. After all, each component is usually assigned a number of specific functional requirements, use cases or user stories during requirements analysis and negotiation, iteration planning or the planning game. The problem is that this crystal-clear conclusion is completely wrong.

A number of units of code cannot simply be stitched together and expected to work as anticipated. Each component or unit has its own invariants, constraints and boundary conditions. Some of these match globally across all of the other components, while others are distinct local constraints. The picture is one of trying to stitch together a large area of fabric from pieces that each have their own size, shape, material and thickness.
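
A toy sketch of the stitching problem, with hypothetical names: each unit below is correct against its own local contract, yet their contracts disagree at the seam, so integration fails even though every unit test would pass.

```python
def trim_to_page(items, page_size=10):
    """This unit's contract: returns AT MOST page_size items."""
    return items[:page_size]

def render_page(items):
    """This unit's invariant: requires EXACTLY 10 items per page."""
    assert len(items) == 10, "renderer requires a full page"
    return "\n".join(items)

# Each unit honours its own contract, but the contracts differ at
# the boundary: a short list satisfies trim_to_page yet violates
# render_page's invariant.
short = [f"row {i}" for i in range(3)]
page = trim_to_page(short)          # fine by trim_to_page's contract
try:
    render_page(page)               # fails at the component boundary
except AssertionError as boundary_mismatch:
    print("integration failure:", boundary_mismatch)
```

Only an integration test that drives both units together would expose the mismatch; the units' own tests cannot.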

For the mathematically minded, the situation is analogous to an analytic function in complex space, in the sense of satisfying the Cauchy-Riemann equations, which guarantee the existence of derivatives of all orders and hence smooth continuity.
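
For reference, for a complex function written in terms of its real and imaginary parts, the Cauchy-Riemann equations are:

```latex
% For f(z) = u(x, y) + i\,v(x, y), the Cauchy-Riemann equations:
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},
\qquad
\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}
```

Where these hold (with continuous partial derivatives) the function is holomorphic and infinitely differentiable, which is the smoothness the analogy appeals to.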

Most partitionings of a system into subsystems and components result in units of code analogous to distinct regions of complex space, in each of which a function may be locally analytic. However, mathematical difficulties arise at the boundaries of these regions when trying to stitch the functions together.

The problem cannot be remedied by small, incremental baby steps to refactor the code; Fowler's large refactorings cannot be done in baby steps at all. Nor are they possible under continuous integration, which requires code to compile against the most recent versions of all other components in the source repository.

Such a backward-looking constraint causes the design of the code to remain in a local minimum in the phase space of design possibilities, with little or no chance of breaking out into a quality design.

Evidence of progress is a continually breaking build: an advance or change in one or more component interfaces causes compilation failures, and a change in one or more component implementations causes unit or integration test failures.

Unit and integration tests are paramount to the possibility of successfully integrating the diverse components, as are the interface contracts that allow those tests to be coded in the first place using mock objects.

The result is confidence in the code: verification through unit tests and validation through acceptance and integration tests. This is a pragmatic approach, and there is nothing minimal about it.
