Database Case Study Project Failure

Case Study

The quality of a project's design is a strong indicator of its likelihood of success.

The World Bank's Quality Assurance Group (QAG), DPMG's predecessor, compared two sets of ratings for the same cohort of projects. One set was based on evaluations of the design quality of each project. The other ratings measured whether these projects had achieved their development objectives by the time they closed.

Of the 385 projects evaluated, 74% of projects with satisfactory designs also had satisfactory outcomes years later. By contrast, projects rated less than satisfactory on the basis of their designs were twice as likely as others to fail.
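The two figures above can be combined into a rough back-of-the-envelope estimate. This is an inference from the quoted numbers, not data from the study itself, and it assumes "twice as likely as others" refers to projects with satisfactory designs:

```python
# Rough illustration of the QAG study's relative-risk claim, using only
# the figures quoted in the text. The 74% success rate is from the study;
# the "twice as likely to fail" ratio lets us back out an approximate
# failure rate for poorly designed projects. Inference, not study data.

well_designed_success = 0.74
well_designed_failure = 1 - well_designed_success    # 26% of well-designed projects failed
poorly_designed_failure = 2 * well_designed_failure  # "twice as likely": 52%

print(f"Failure rate, satisfactory design:   {well_designed_failure:.0%}")
print(f"Failure rate, unsatisfactory design: {poorly_designed_failure:.0%}")
```

On these assumptions, a project that enters implementation with an unsatisfactory design would fail roughly half the time.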

The study also showed that design flaws could be corrected if projects were evaluated early during their implementation. More than half of the projects found to have poor designs were turned around during supervision, and those that were evaluated early performed better than those evaluated later.

Four design problems were highly correlated with risk of failure.

Overly complex project designs
Some project objectives are too complex and ambitious for the institution or government to manage. Donors’ (and governments’) enthusiasm tends to expand the scope of a project beyond the capabilities of weaker governments. This complexity takes many forms:

  • Multiple sub-sectors, such as the primary, secondary, and tertiary levels of education
  • Multiple beneficiaries, such as disabled children, street children, migrant children, girls in ethnic minority areas, and illiterate adults
  • Multiple reform objectives, such as access, quality, equity, and efficiency

Complex projects place heavy demands on implementing entities that often have limited capacities. The mismatch between complexity and capacity occurs most frequently in low-income and fragile states.

Poorly formulated causal links between inputs, outputs, and outcomes
When the causal links between inputs, outputs, and outcomes are poorly formulated, projects set targets that are impossible to achieve. For example, the goal of one project was to produce more trained engineers. However, the project focused on the construction of new training facilities that would not have produced any graduates by the end of the project.

Poorly selected indicators of success
Poorly selected indicators of success leave all parties to the project flying blind. For example, one project gave four indicators of its objectives. Three of these were outputs (such as the number of vaccination doses procured), not outcomes (such as the number of children vaccinated). In the same project, one outcome measure was inappropriate: it applied to the entire country, not to the sub-regions addressed by the project.

Premature approval
Sometimes projects are approved before they are ready. Examples include infrastructure projects that begin before the bidding documents for the first year of work have been prepared or projects that begin before baseline data for indicators of intended outcomes have been collected.

When projects enter the portfolio too soon, project teams may spend the first year or longer addressing issues that should have been handled before the project started. In these cases, the project is unlikely to meet its objectives by the time its resources have been spent.

Overview – Hershey’s ERP Implementation Failure

When Hershey cut over to its $112-million IT systems, its worst-case scenarios became reality. Business process and systems issues caused operational paralysis, leading to a 19-percent drop in quarterly profits and an eight-percent decline in stock price. In the analysis that follows, I use Hershey’s ERP implementation failure as a case study to offer advice on how effective ERP system testing and project scheduling can mitigate a company’s exposure to failure risks and related damages.

Key Facts

Here are the relevant facts: In 1996, Hershey set out to upgrade its patchwork of legacy IT systems into an integrated ERP environment. It chose SAP’s R/3 ERP software, Manugistics’ supply chain management (SCM) software, and Siebel’s customer relationship management (CRM) software. Despite a recommended implementation time of 48 months, Hershey demanded a 30-month turnaround so that it could roll out the systems before Y2K. Based on these scheduling demands, cutover was planned for July of 1999. This go-live schedule coincided with Hershey’s busiest period – the time during which it would receive the bulk of its Halloween and Christmas orders.

To meet the aggressive scheduling demands, Hershey’s implementation team had to cut corners on critical systems-testing phases. When the systems went live in July of 1999, unforeseen issues prevented orders from flowing through the systems. As a result, Hershey was incapable of processing $100 million worth of Kisses and Jolly Rancher orders, even though it had most of the inventory in stock.

This is not one of those “hindsight is 20/20” cases. A reasonably prudent implementer in Hershey’s position would never have permitted cutover under those circumstances. The risks of failure and exposure to damages were simply too great. Unfortunately, too few companies have learned from Hershey’s mistakes. For our firm, it feels like Groundhog Day every time we are retained to rescue a failed or failing ERP project. In an effort to help companies implement ERP correctly – the first time – I have decided to revisit this old Hershey case. The two key lessons I describe below relate to systems testing and project scheduling.

ERP Systems Testing

Hershey’s implementation team made the cardinal mistake of sacrificing systems testing for the sake of expediency. As a result, critical data, process, and systems-integration issues remained undetected until it was too late.

Testing phases are safety nets that should never be compromised. If testing sets back the launch date, so be it. The potential scheduling benefits of skimping on testing never outweigh the costs of a failed cutover. In terms of appropriate testing, our firm advocates methodical simulations of realistic operating conditions. The more realistic the testing scenarios, the more likely it is that critical issues will be discovered before cutover.

For our clients, we generally perform three distinct rounds of testing, each building to a more realistic simulation of the client’s operating environment. Successful test completion is a prerequisite to moving on to the next testing phase.

In the first testing phase – the Conference Room Pilot Phase – the key users test the most frequently used business scenarios, one functional department at a time. The purpose of this phase is to validate the key business processes in the ERP system.

In the second testing phase – the Departmental Pilot Phase – a new team of users tests the ERP system under incrementally more realistic conditions. This testing phase consists of full piloting, which includes testing of both the most frequently used and the least frequently used business scenarios.

The third and final testing phase – the Integrated Pilot Phase – is the most realistic of the tests. In this “day-in-the-life” piloting phase, the users test the system to make sure that all of the various modules work together as intended.
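The gated progression through the three phases can be sketched as follows. The phase names come from the description above; the gating function and its interface are hypothetical, for illustration only:

```python
# A minimal sketch of the gated, three-phase testing approach described
# above. Phase names are from the article; the run_phase callback and
# the messages are hypothetical illustrations, not a real methodology API.

PHASES = [
    "Conference Room Pilot",  # key users; most frequently used scenarios, per department
    "Departmental Pilot",     # new user team; frequent AND infrequent scenarios
    "Integrated Pilot",       # "day-in-the-life"; all modules working together
]

def run_gated_testing(run_phase):
    """run_phase(name) -> bool. A failed phase blocks everything after it."""
    for phase in PHASES:
        if not run_phase(phase):
            return f"Halt: {phase} failed; fix issues before proceeding."
    return "All phases passed: ready to schedule cutover."
```

The point of the gate is that cutover can never be reached while any phase is failing — the schedule slips instead, which is exactly the trade-off the article argues for.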

With respect to the Hershey case, many authors have criticized the company’s decision to roll out all three systems concurrently, using a “big bang” implementation approach. In my view, Hershey’s implementation would have failed regardless of the approach. Failure was rooted in shortcuts relating to systems testing, data migration, and/or training, and not in the implementation approach. Had Hershey put the systems through appropriate testing, it could have mitigated significant failure risks.

ERP Implementation Scheduling

Hershey made another textbook implementation mistake – this time in relation to project timing. First, it tried to squeeze a complex ERP implementation project into an unreasonably short timeline. Sacrificing due diligence for the sake of expediency is a sure-fire way to get caught out.

Hershey made another critical scheduling mistake – it timed its cutover during its busy season. It was unreasonable for Hershey to expect to meet peak demand when its employees had not yet been fully trained on the new systems and workflows. Even in best-case implementation scenarios, companies should expect performance declines because of steep learning curves.

By timing cutover during a slow business period, a company can use the slack time to iron out systems kinks. It also gives employees more time to learn the new business processes and systems. In many cases, we advise our clients to reduce incoming orders during the cutover period.

In closing, any company implementing or planning to implement ERP can take away valuable lessons from the Hershey case. Two of the most important are: test the business processes and systems using a methodology designed to simulate realistic operating scenarios, and pay close attention to ERP scheduling. By following this advice, your company will mitigate failure risks and put itself in a position to drive ERP success.

Our team has been leading successful ERP implementation projects for decades – for Fortune 500 enterprises and for small to mid-sized companies. Our methodology – Milestone Deliverables – is published and sells in more than 40 countries.


This article was originally published by Manufacturing AUTOMATION on July 30, 2010.
