Four reasons why M&E software for large programmes fails

Governments and NGOs don’t always have a great track record with technology projects.  Sadly this affects M&E systems as much as other areas.  However, it’s also clear that, done well, technology can make a big impact.  This may come in the form of cost savings (less printing and transporting of paper), time savings (less time spent transcribing or manually aggregating data) and credibility (being able to quickly access and present data, aggregated at different levels, with access to the underlying evidence that backs it up).

Often it can feel like you are stuck between a rock and a hard place.  There is a risk in not making use of technology in your work.  There is also a risk in trying to use it, spending a lot of money and not seeing the benefits you expected.

This post highlights four reasons why M&E software for large programmes fails.  It's not an exhaustive list, just an attempt to document some lessons learned from our work and experience with partners implementing international development programmes in a range of sectors and countries.  While some of these areas may also apply to smaller organisations and projects, they are not intended as a guide in those contexts, where there is a different set of challenges to keep in mind (and mistakes to learn from).

(1) Software doesn’t include workflow linked to project or site level reporting

Central to most large programmes is the concept of a reporting cycle.  This may be monthly or quarterly and means that monitoring data for that period must be collected and submitted by the end of that cycle.  The programme then moves on to the next cycle.

M&E software used by large programmes must understand this concept of reporting periods.  In practice this means that M&E software must:

  • Be configurable to match your reporting cycles (monthly, quarterly or otherwise)
  • Include workflows that control who can enter data and who can review and approve it
  • Lock data for a reporting period once it has been entered
  • Close a reporting period and open the next one
  • Store data for each reporting period as a separate 'slice' of the same data, coded by the time period

If your programme uses software that lacks these features, anyone with access to enter data can easily change data that has already been entered for a previous period.  This can create enormous data quality issues, and lots of difficult questions from donors asking why indicator data you have already reported has changed from the number reported before...

However, the level at which this happens is also critical.  Imagine that your programme works across 100 different sites.  Staff on each site are responsible for entering data for their own activities.  Your software must:

  • Ensure that people can only report on their own projects, not enter data for other projects
  • Manage the workflow related to entering and reviewing data at the project level
  • Lock data at the project level, not the programme level
  • Close a reporting period at the project level, not the programme level
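
To make both requirements concrete, here is a minimal sketch of how they might be modelled: monitoring data stored as separate 'slices' coded by time period, with workflow status and locking held at the project level rather than the programme level.  All class and field names here are illustrative, not taken from any particular system.

```python
from dataclasses import dataclass, field
from enum import Enum


class PeriodStatus(Enum):
    OPEN = "open"            # data entry allowed
    SUBMITTED = "submitted"  # awaiting review and approval
    LOCKED = "locked"        # closed; data can no longer change


@dataclass
class ProjectPeriod:
    """One 'slice' of monitoring data: a single project in a single reporting period."""
    project_id: str
    period: str              # coded by time period, e.g. "2024-Q1"
    status: PeriodStatus = PeriodStatus.OPEN
    data: dict = field(default_factory=dict)

    def enter_data(self, user_project_id: str, values: dict) -> None:
        # Users may only report on their own project...
        if user_project_id != self.project_id:
            raise PermissionError("Cannot enter data for another project")
        # ...and only while this project's period is still open.
        if self.status is not PeriodStatus.OPEN:
            raise RuntimeError(f"Period {self.period} is {self.status.value}")
        self.data.update(values)

    def approve_and_lock(self) -> None:
        # Locking happens per project-period pair, so one late project
        # never forces the whole programme to stay open (or shut).
        self.status = PeriodStatus.LOCKED
```

The key point is that the workflow status lives on the project-period pair, not on the programme as a whole.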

When we started work on one programme, the system that our partner was already using lacked these options.  This meant that each month, on a specific date, the entire system was closed and locked for that reporting period.  If the manager for one project was even a day late entering their data then they were blocked.  Short of re-opening reporting across the entire programme, there was no way to capture this data after the reporting period had closed.

These measures may seem overly harsh and controlling.  In smaller programmes (say 10 to 20 projects) they are not as important.  However, with programmes operating at larger scales (say 200+ projects) they are essential.  At that scale the work required to untangle problems becomes overwhelming.  The best approach is to prevent them from the outset.

(2) Failure to collect data at the level of the activity

Programme M&E systems often rely on data collection tools that capture data that has already been aggregated.  For example:

  • Weekly or monthly visits to a clinic
  • Details of people that have been trained in the last month
  • Summary of households visited and support provided

This approach is often favoured due to the challenges of collecting data at the activity level.  However, it can result in a number of serious problems.

First, without further support or guidance, field staff are often left to develop their own tools for recording data as they carry out activities.  These tools may differ from team to team.  They may be based on different definitions or understandings of the data that needs to be collected.  The information may be stored and organised in different ways, making it hard to find.

This data represents the primary evidence that your programme has completed the activities it claims to have done.  In the event of a data audit, your programme may need to produce these records to back up the indicator values calculated from them.  Accessing the records to prove this can be very challenging.

To tackle this problem your system should include data collection tools that are directly linked to the activities being implemented.  These could be manual (i.e. paper) tools or they could be implemented via mobile devices that share data as and when an Internet connection is available.  Ideally they should include the option to capture photographic or video evidence that an activity has taken place.
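
As an illustration of the difference, here is a rough sketch of what an activity-level record might look like, with a monthly indicator derived from the primary records rather than entered directly.  The field names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class ActivityRecord:
    """One row per activity carried out, rather than a monthly summary."""
    activity_type: str          # e.g. "clinic_visit", "training_session"
    project_id: str
    site_id: str
    occurred_at: datetime
    recorded_by: str
    participants: int
    photo_evidence: Optional[str] = None  # path or URL to a photo, if captured
    notes: str = ""


# A monthly indicator is then calculated from the primary records, each of
# which remains available as underlying evidence in the event of an audit.
def monthly_participants(records: list[ActivityRecord], year: int, month: int) -> int:
    return sum(
        r.participants
        for r in records
        if r.occurred_at.year == year and r.occurred_at.month == month
    )
```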

Watch out for our forthcoming guide to Process Mapping.  This document aims to help programme managers document their programmes, including tools to collect data at the activity level.

(3) Focus is on M&E needs, not management needs

I’m going to say something controversial here.  I’m open to being challenged if you disagree.  However, this seems intuitive to me.

Programmes should only collect the data needed to make operational decisions at each step of delivery.

Programme managers should consider what information is necessary and available at each stage of implementation.  Data not needed for decision making should not be added in.

What does that mean for M&E data?  To me it makes sense that activity and output level indicators should be derived from data needed for the operational management of programme activities.  Outcome and impact data are typically collected in a different way, either via surveys or as part of a baseline, mid-term or end-line evaluation.

Whether you agree with this point or not, one thing is clear.  The balance of data collected is often driven too much by M&E needs and not enough by management needs.  Here are some examples of management data that we think programmes should be collecting:

  • Which projects or sites are scheduled to start and when?
  • Which projects have failed to reach a particular implementation milestone by the agreed date?
  • What is the average time taken to implement each stage of a project?
  • How does each individual project compare against this average?

In our experience this data is typically collected manually and collated in Excel.  It’s rarely an integral part of the data collected by a programme.
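
To show what 'integral' could look like, here is a sketch of how two of the questions above might be answered directly from structured milestone data rather than a hand-maintained spreadsheet.  The schema and function names are assumptions for illustration only.

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean
from typing import Optional


@dataclass
class Milestone:
    project_id: str
    stage: str                        # e.g. "site_survey", "construction"
    due: date
    started: Optional[date] = None
    completed: Optional[date] = None


def overdue_projects(milestones: list[Milestone], stage: str, today: date) -> list[str]:
    """Which projects have failed to reach a milestone by the agreed date?"""
    return [
        m.project_id
        for m in milestones
        if m.stage == stage and m.completed is None and m.due < today
    ]


def average_stage_duration(milestones: list[Milestone], stage: str) -> float:
    """Average days taken to implement a given stage, across completed projects."""
    durations = [
        (m.completed - m.started).days
        for m in milestones
        if m.stage == stage and m.started is not None and m.completed is not None
    ]
    return mean(durations) if durations else 0.0
```

Comparing an individual project against that average then becomes a one-line calculation rather than a manual exercise.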

(4) Build it and they will use it?

One of the biggest factors contributing to failure is the lack of a strong adoption plan and appropriate support.  With the work that goes into implementing a complex system, it’s easy to set your sights on the launch as the end date for the project.

However, that is when the real work begins.  In our view, planning and preparation for this starts at the beginning of the project.  Following are some things to consider.  Some are obvious, others less so.

Prepare the ground

Before getting started here are some key things to consider:

  • Identify a senior champion - who has the authority to advocate for the system at a high level?  Ideally this should be the programme director or someone at that level.
  • Identify the technical owner - this is someone who will develop a much deeper understanding of the system and how it works.  After launch they'll play the role of administrator.  Before launch they will mediate a range of small questions that come up relating to the detail.
  • Discuss change management at the outset with key staff and listen carefully to their concerns or issues.  It's crucial to explain to everyone what is being planned.  This means discussing concerns and explaining what kind of changes in people’s work this will involve.  What will they need to start doing?  What can they stop doing?
  • Identify supporters and skeptics early on and use the implementation process to work with them.  What ideas or concerns do they have?  What problems do they have that the system could help to address?  What does it really mean to solve this problem?  Does it involve actions or steps beyond what the system itself can do?
  • Engage a broad and representative group in planning from the outset, for example through process mapping.
  • Clarify roles and responsibilities.  In existing programmes these may already be clear.  But how will these relate to the system?  Make sure any new responsibilities are incorporated into people's job descriptions and workplans.  The senior champion has a key role to play in ensuring that this happens.

Proof of concept

As the project gets going it's really important to maintain the momentum.  This keeps people engaged and helps build ownership of the project.  Once the requirements have been documented, we like to deliver an initial proof of concept within a few weeks.  This gives people a clear view of what is coming and enables them to test forms and review the workflow.

Consider setting up a feedback group to engage different stakeholders in a systematic way.  This helps get more structured feedback from relevant people.  Consider also how you will track and manage issues raised during feedback.  We use JIRA to log, track and group issues that are raised.  This makes it easy to prioritise issues and schedule them.

At this point you should also begin preparing for the launch and how things will change as the system goes into use.  We like to focus first on building an understanding of existing management processes.  How will these change to make use of the management data accessible from the system?  What kinds of reports are needed to support these processes?  Milestone reports for project status?

Phased launch

It's important to communicate clearly around the launch schedule.  There may be extensive work needed once the system is 'ready' before all teams can start using it.  

Take historical data, for example.  Does this need to be entered?  How far back?  Who will enter it and how long will it take them?  If you have large volumes of data on paper it may easily take several weeks to enter manually.  While it may seem like a big chore, it is also an excellent way to test the system with real data and a small group of people.

If you don't need to enter historical data then consider a pre-test instead.  This typically includes a smaller group and focuses on entering real data, but not necessarily to keep.  It's another way of testing that the system works as it should.

In our experience both these approaches typically uncover small but critical changes that are needed.  A key field that was forgotten in the documentation.  A question that seemed clear, but confuses people.  These types of changes are important to identify and address early on.

Once pre-test issues are addressed, consider scheduling a phased rollout.  How you phase it and how long each phase lasts depend on the capacity of your teams and the logistics involved.  The aim is to transition people at a pace that makes sense given your circumstances, not to try and make everyone change at once.  This may require running two systems in parallel while people change over.

Follow through

Once your training and rollout are complete, the adoption process should shift its focus to supporting management.  This means regular (at least monthly) meetings to review management reports.  If well designed, these will help track adoption, showing data like:

  • Which projects have been registered?
  • Which stage has each project reached?
  • Has data for each stage been entered?
  • Has data been reviewed and approved?
  • Is project status being updated?
  • Are there any users who have not logged into the system?

These types of reports may need to focus on different levels of the programme, depending on the management structure.  However, they will quickly help managers to identify people or teams that need more support or training to use the system effectively.
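
As a sketch of what such a report could draw on, the system's own records can be summarised so that managers see at a glance where support is needed.  All names here are hypothetical.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional


@dataclass
class ProjectStatus:
    project_id: str
    team: str
    registered: bool
    current_stage: str
    data_entered: bool
    data_approved: bool


@dataclass
class User:
    username: str
    team: str
    last_login: Optional[date] = None


def adoption_report(projects: list[ProjectStatus], users: list[User], today: date) -> dict:
    """Summarise adoption so managers can see which projects and users need support."""
    return {
        "registered": [p.project_id for p in projects if p.registered],
        "awaiting_entry": [p.project_id for p in projects if p.registered and not p.data_entered],
        "awaiting_approval": [p.project_id for p in projects if p.data_entered and not p.data_approved],
        # Users who have never logged in, or not within the last 30 days.
        "inactive_users": [
            u.username for u in users
            if u.last_login is None or u.last_login < today - timedelta(days=30)
        ],
    }
```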

We recommend linking this type of management approach to clear targets and milestones for system adoption.  That way the entire team knows what you are working towards and managers can clearly track progress, focusing their attention on the people who need the most support.