Most IT professionals are familiar with the disturbing frequency of large project failures. Famous boondoggles in the public sphere, such as the Big Dig, and more commonplace examples, such as a failed ERP implementation, share a hauntingly familiar theme. These “megaprojects” are of a scale so large that they overwhelm our ability to manage them effectively. Classically, the end result is cost and schedule overruns, scaled-back deliverables and customer dissatisfaction.
In their book Megaprojects and Risk: An Anatomy of Ambition, Bent Flyvbjerg, Nils Bruzelius and Werner Rothengatter examine historical megaproject failures involving public infrastructure. The notable efforts they investigate include the Channel Tunnel, the Vasco da Gama bridge in Portugal, the German maglev train and Denver International Airport. In their evaluation of these projects, they identify a number of common themes:
- Significant cost overruns beyond original projections
- Infrastructure utilized at levels far below original projections
- A consortium of interested pro-project parties (politicians, contractors, activists) that overshadows any public input to the project
One thing that can’t be ignored when examining large projects is the role of complexity, which runs along two dimensions. The first is social: any large effort involving many people has issues that are difficult to predict. Political dynamics, personal relationship issues and cultural conflicts can lead to unexpected challenges and outcomes, and this social complexity is exacerbated by the size of the effort. The second is structural: complexity, by its nature, grows geometrically as the number of interacting elements grows.
Let’s look at a sample IT project within a large enterprise. Consider the different elements and relationships that are all part of the underlying structure of the effort. Each of these “items” represents a potential failure point for the project. Some examples of these potential problems are as follows:
- A critical task is not finished on time (simple execution failure)
- A poor relationship between a business analyst and customer results in changed requirements
- Internal issues at a vendor result in late hardware deliveries
As the project grows in size, the number of possible interrelationships grows geometrically: among n elements there are n(n-1)/2 potential pairwise links, so doubling the number of elements roughly quadruples the number of relationships that can go wrong. As the project reaches “mega” scale, the number of interrelationships becomes astronomical, leading to the failures we so regularly observe.
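The arithmetic behind this growth can be sketched in a few lines. This is a hypothetical illustration of the pairwise-link count, not something drawn from the book:

```python
def pairwise_links(n):
    """Number of possible pairwise relationships among n elements: n*(n-1)/2."""
    return n * (n - 1) // 2

# A 10-element project has 45 potential relationships to track;
# scaling up by 100x multiplies the relationships by roughly 10,000x.
for n in (10, 100, 1000):
    print(f"{n:>5} elements -> {pairwise_links(n):>7} possible links")
# 10 -> 45, 100 -> 4,950, 1000 -> 499,500
```

And this counts only pairs; if you include larger subgroups (a vendor, an analyst and a customer interacting at once), the number of possible combinations grows exponentially, which is why management effort scales far faster than headcount.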
There are a number of steps that firms can take to reduce the complexity of IT projects. First, and most obviously, firms should look to limit the size of individual projects. While there is no specific rule here, the bias should be towards smaller, more modular efforts. Where there are large-scale requirements, it is better to “chunk” these efforts into discrete, measurable phases. Utilizing agile development methodologies can help create a more iterative process, with smaller, more manageable deliverables. These rapid development techniques also deliver progress faster to a firm, ensuring that deliverables aren’t “stale” by the time they reach the customer.
While modular development and delivery can create simpler project execution, it is also helpful to change some traditional thinking regarding project approvals. Historically, enterprises have used a top-down approach for planning and scoping efforts. This unfortunately results in larger projects with the scoping done away from the people who will actually implement or benefit from the effort. Distributing budget and approval responsibility to smaller teams that work directly with the customers is a better formula for success. This strategy provides the following benefits:
- Greater likelihood of avoiding “land mines”: those closest to the process have a better understanding of potential issues
- Greater buy-in from the teams implementing the project
- Better alignment with customer needs
In conclusion, to avoid the unforgiving bite of complexity, keep your projects small, fast and close to the customer.