[Audio] Welcome to Module 5 of the TALLHEDA training programme. The title of this module is Monitoring, Control and Performance Tracking. Module 4 was about building the financial plan: the budget, the forecasts, the structures and methods for translating strategy into numbers. This module is about what happens next. It is about staying in control once implementation begins.
[Audio] The opening line on this slide captures the most common financial failure mode in project management: "We only discovered the overspend when we prepared the reports." How many times have you been in that situation, or seen a colleague in it? A budget problem that had been building for months — quietly, steadily — revealed only when someone sat down to fill in the periodic report? By that point, the options are dramatically reduced. The activities that caused the overspend are long completed. The invoices are paid. The partners have moved on. All you can do is explain what happened, not fix it. This is the fundamental difference between reporting and monitoring. Reporting is periodic and backward-looking — it documents where you have been. Monitoring is continuous and forward-looking — it tells you where you are heading, and gives you the opportunity to change course while there is still time. In Digital Agriculture projects specifically, this gap is especially costly. Field trials run on agricultural calendars that bear no relationship to administrative reporting cycles. A sensor deployment that fell behind in April will not appear in a periodic report until month eighteen, by which point the downstream cascade of delays — disrupted data collection, deferred analysis, postponed publications — has become structural rather than manageable. Monitoring changes this dynamic entirely. Instead of the single end-of-period comparison, it asks three questions continuously and in real time. First: are we spending resources at the rate we planned, for the outputs we expected? Second: where we are not, do we understand why? Third: what will we do about it before the next reporting deadline? These three questions define the entire intellectual architecture of this module. Every tool and framework we will cover is designed to help you answer one or more of them. And the goal is not compliance — it is control.
[Audio] This slide gives you the map of the entire module in a single diagram. The monitoring and control framework is a six-step cycle — and I want to emphasise the word cycle, because this is not a linear sequence you complete once. It is a loop you run at every monitoring interval, whether that is weekly, monthly, or quarterly. Step 1: Define Baselines. Before you can detect a deviation, you need to know what you were planning to do. This means setting the budgeted cost for each activity, the planned schedule for each milestone, and the KPI targets for each outcome — before implementation begins. Baselines defined after the fact are not baselines; they are rationalisations. Step 2: Collect Actuals. Record real expenditure, task completion percentages, and operational output data regularly. The monitoring system is only as reliable as the data feeding it. If actuals are collected monthly by some partners and quarterly by others, the consolidated view will always be lagged and unreliable. Step 3: Calculate Key Metrics. Compute the core financial metrics — burn rate, cost variance, schedule variance, and earned value. And — critically — link them to the operational KPIs that tell you what the spending is actually producing. We will cover each metric in detail in the slides ahead. Step 4: Detect Deviations. Compare actuals against the baseline. Identify variances that exceed tolerance thresholds. The threshold matters: not every deviation requires a response. A two percent cost variance in month one is noise. A twelve percent variance sustained over three months is a signal. Step 5: Diagnose Root Causes. Before taking any corrective action, understand why the deviation occurred. Is it a genuine cost pressure? A scope change? A delivery delay? An estimation error in the original plan? The corrective action depends entirely on the diagnosis. Applying the wrong fix to a correctly identified symptom is one of the most common and expensive mistakes in project management. Step 6: Take Corrective Action. Reallocate resources, revise the plan, or escalate to the steering committee or funder. And then return to step 2 — because monitoring never stops. At the bottom of the slide, three pillars anchor the whole framework. Financial discipline — tracking money spent versus money planned. Operational linkage — connecting spend to output, which we will address in detail. And decision rhythm — regular structured review meetings with pre-agreed escalation triggers. Without the decision rhythm, even the best metrics become data that nobody acts on. Let's now go into the metrics themselves. There are four core ones, and understanding them is fundamental to everything else in this module.
[Audio] These four metrics form the irreducible core of financial monitoring in any complex project. Together they answer the fundamental question: are we spending at the right pace, for the expected outputs, efficiently? Let me take each one in turn. Burn Rate (BR) is the rate at which the project is consuming its budget over time. The formula is simple: total spend to date divided by months elapsed. Burn rate answers the most immediate question in any monitoring review — are we spending at the right pace? The critical insight is that the answer is not always obvious. Both significantly above and significantly below the planned rate are warning signals. Above plan may mean scope creep or unplanned procurement. Below plan may mean activities are not happening — which is a delivery risk, not a financial comfort. Earned Value (EV) is the most important and most misunderstood of the four. It is the budgeted cost of the work that has actually been completed — not what it cost, but what it was planned to cost. The formula: percentage of task completion multiplied by the budgeted cost of that task, summed across all tasks. EV is the anchor of the entire Earned Value Management framework — all the variances derive from it. Without calculating EV, you cannot meaningfully compare cost and schedule performance. Cost Variance (CV) tells you whether you are spending efficiently for the work that has been delivered. The formula is EV minus actual cost. A negative cost variance means you have spent more than the planned cost of the work you completed — you are overspending per unit of output. A positive variance means you have spent less than planned for the work delivered, which may be good news or may indicate that completion data is being over-reported. Schedule Variance (SV) tells you whether the project is delivering at the planned pace. The formula is EV minus planned value — where planned value is the budgeted cost of all work that should have been completed by today. A negative schedule variance means less work has been delivered than the plan required by this date. It expresses schedule performance in financial terms, which is what makes it directly comparable to cost performance. What makes these four metrics so powerful is not any single one of them in isolation — it is the pattern they reveal in combination. We will see exactly what that looks like in the next three slides.
[Audio] Burn rate is the starting point for any monitoring review because it is the fastest signal available. You do not need to calculate earned value or analyse work package completion to know whether the project is consuming its budget at an unusual pace — burn rate tells you that immediately, with minimal data. The formula bears repeating: monthly burn rate equals total spend to date divided by months elapsed. The planned rate equals total budget divided by project duration in months. Now, there is a critical nuance for Digital Agriculture that the slide makes explicit and that I want to emphasise: in Digital Agriculture, a flat planned burn rate is almost always wrong. Think about what a typical DA project looks like financially across a year. The early months involve setup, procurement lead times, and consortium formation — relatively low spend. Then the growing season arrives and everything happens at once: sensor deployment, field agronomists in multiple sites simultaneously, trial management, data collection infrastructure going live. Spend spikes dramatically. After the field season, it drops back to analysis and reporting activities. If you compare a February actual spend against a flat planned rate, you will almost always get a false underspend signal. If you compare an April or May actual against the same flat rate, you will get a false overspend signal. Neither is real. Both will cause you to waste time investigating non-problems, and more dangerously, to miss the real problems hidden within the noise. The discipline, therefore, is to build a seasonally-adjusted planned burn rate from the very beginning of the project. Map out, month by month, when you actually expect to spend the money — aligning with field seasons, procurement timelines, and partner activity peaks. That planned profile becomes your monitoring baseline. When the actual rate is significantly above the adjusted plan, your first question should be: is this legitimate acceleration, or is it scope creep or an unplanned procurement? If legitimate, confirm the scope is unchanged and document it. If scope creep, convene a formal review immediately — scope that grows without budget growing to match it is one of the most reliable predictors of project failure. When the actual rate is significantly below the adjusted plan, resist the temptation to read it as good news. Investigate. Is this a seasonal delay that is expected? Or is a partner disengaged, a key procurement stalled, or a field activity not starting as planned? If it is the latter, you have a delivery risk — and that risk compounds over time if left unaddressed.
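To make the seasonal baseline concrete, here is a minimal Python sketch that compares each month's actual spend against a month-by-month planned profile rather than a flat rate. All figures and the fifteen percent tolerance band are illustrative assumptions, not values from the slide.

```python
# Minimal sketch: checking actual spend against a seasonally-adjusted planned
# burn rate. All figures and the 15% tolerance band are illustrative assumptions.
planned_profile = {   # planned spend per month (EUR), peaking in the field season
    "Jan": 8_000, "Feb": 9_000, "Mar": 14_000, "Apr": 30_000,
    "May": 34_000, "Jun": 28_000, "Jul": 18_000, "Aug": 12_000,
}
actual_spend = {"Jan": 7_500, "Feb": 8_200, "Mar": 13_000, "Apr": 41_000}

TOLERANCE = 0.15   # flag deviations beyond +/-15% of the adjusted plan

for month, actual in actual_spend.items():
    planned = planned_profile[month]
    deviation = (actual - planned) / planned
    if deviation > TOLERANCE:
        note = "ABOVE plan: legitimate acceleration, or scope creep?"
    elif deviation < -TOLERANCE:
        note = "BELOW plan: expected seasonal delay, or a stalled activity?"
    else:
        note = "within tolerance"
    print(f"{month}: {actual:>7,} vs {planned:>7,} ({deviation:+.0%}) -> {note}")

# The classic average burn rate: total spend to date divided by months elapsed.
print(f"Average monthly burn rate: {sum(actual_spend.values()) / len(actual_spend):,.0f} EUR")
```

Note that April's spike is flagged against the adjusted plan only because it exceeds the already-elevated seasonal expectation; against a flat rate, every spring month would have triggered a false alarm.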
[Audio] Burn rate tells you the pace of spending. But it cannot answer the question that matters most: are we getting the planned output for the money we are spending? For that, we need Earned Value Management. EVM integrates three data points — Planned Value, Earned Value, and Actual Cost — into a single performance view. Let me define each precisely, because the vocabulary matters here. Planned Value (PV) is the budgeted cost of all work scheduled to be completed by today, according to the original project plan. To calculate it, you sum the budgeted cost of every task that should have been completed by the current date. PV is your baseline — it represents the work the project was supposed to have delivered by now, in financial terms. Earned Value (EV) is the budgeted cost of the work that has actually been completed — and I want to stress this: it is not what the work cost, it is what the work was planned to cost. To calculate it, for each task you take the percentage of completion and multiply it by that task's budgeted cost, then sum across all tasks. EV is the anchor metric. It is what allows you to compare cost and schedule performance on the same scale. Actual Cost (AC) is the real expenditure incurred to date — what was actually spent on all the work that has been done. To calculate it, you sum all recorded costs: payroll charges from the ERP, invoices processed, procurement payments, travel expenses, subcontracting fees. From these three inputs, four performance metrics are derived — and they appear at the bottom of the slide. Cost Variance equals EV minus AC. A negative result means you are overspending relative to the work delivered. Schedule Variance equals EV minus PV. A negative result means you have delivered less work than planned by this date. The Cost Performance Index equals EV divided by AC — a value below 1.0 means you are getting less than one euro of work per euro spent. And the Schedule Performance Index equals EV divided by PV — a value below 1.0 means you are delivering at less than the planned pace.
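Here are those three definitions, and the four derived metrics, as a minimal Python sketch. The task list (budgets, planned finish dates, completion percentages, actual costs) is invented for illustration; the formulas are the ones just stated.

```python
from datetime import date

# Minimal EVM sketch. Task data is invented for illustration; the formulas
# follow the slide:
#   PV = sum of budgeted cost of all tasks scheduled to finish by today
#   EV = sum of (completion fraction x budgeted cost) across all tasks
#   AC = sum of all recorded actual costs
today = date(2025, 6, 30)

tasks = [
    # (budgeted cost EUR, planned finish, completion fraction, actual cost EUR)
    (20_000, date(2025, 3, 31), 1.00, 22_500),
    (15_000, date(2025, 6, 30), 0.60, 11_000),
    (25_000, date(2025, 9, 30), 0.10, 4_000),
]

pv = sum(budget for budget, finish, _, _ in tasks if finish <= today)
ev = sum(budget * pct for budget, _, pct, _ in tasks)
ac = sum(actual for *_, actual in tasks)

cv, sv = ev - ac, ev - pv
cpi, spi = ev / ac, ev / pv
print(f"PV={pv:,} EV={ev:,.0f} AC={ac:,}")
print(f"CV={cv:+,.0f} SV={sv:+,.0f} CPI={cpi:.2f} SPI={spi:.2f}")
```

One table like this per work package, updated monthly, is all the infrastructure a simplified EVM practice needs.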
[Audio] Cost Variance and the Cost Performance Index are the metrics that answer the efficiency question: for the work we have delivered, did we spend what we planned to spend — or more? The formulas are on the slide. CV equals EV minus AC. CPI equals EV divided by AC. A CPI below 1.0 is the critical signal — it means for every euro spent, less than one euro of planned work has been delivered. The work is costing more than budgeted. Let's work through the IoT sensor network example. Imagine a work package with a total budgeted cost of sixty thousand euros. By month nine, the plan was to have completed seventy-five percent of the installation — that is a Planned Value of forty-five thousand euros. In reality, only fifty-eight percent has been installed — an Earned Value of thirty-four thousand eight hundred euros. But the actual cost to reach that fifty-eight percent completion is forty-one thousand two hundred euros. CV equals thirty-four thousand eight hundred minus forty-one thousand two hundred, which gives a negative six thousand four hundred euros. The project is spending six thousand four hundred euros more than the planned cost of the work delivered. CPI equals thirty-four thousand eight hundred divided by forty-one thousand two hundred, which is 0.84. For every euro spent, only eighty-four cents of planned work has been delivered. Now here is where CPI becomes genuinely powerful as a management tool — because it allows you to project forward. If the CPI of 0.84 persists for the rest of the project, what will the final cost be? The Estimate at Completion is simply the total budget divided by the CPI: sixty thousand divided by 0.84 equals approximately seventy-one thousand euros. That is an eleven-thousand-euro overrun projected from a CPI reading at month nine — identified while there is still time to act, not at the final audit.
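Here is the same worked example as a few lines of Python, so you can swap in your own figures; everything below uses the numbers just quoted from the slide.

```python
# The IoT sensor work package from the slide, as a quick calculation.
bac = 60_000                 # Budget at Completion: total budgeted cost
pv = 0.75 * bac              # plan: 75% complete by month nine -> 45,000
ev = 0.58 * bac              # reality: 58% installed -> 34,800
ac = 41_200                  # actual cost to reach that 58%

cv = ev - ac                 # -6,400 EUR: overspend vs the work delivered
cpi = ev / ac                # ~0.84: 84 cents of planned work per euro spent
eac = bac / cpi              # Estimate at Completion: ~71,000 EUR

print(f"CV = {cv:+,.0f} EUR, CPI = {cpi:.2f}")
print(f"EAC = {eac:,.0f} EUR (projected overrun {eac - bac:+,.0f} EUR)")
```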
[Audio] Schedule Variance and SPI answer a different but equally important question: are we delivering work at the pace we planned? Before we go into the formulas, let me reinforce the definitions of PV and EV one more time, because they are the inputs. Planned Value, as we covered, is the budgeted cost of all work scheduled to be completed by today. Earned Value is the budgeted cost of all work actually completed. These are both expressed in financial terms — which is what makes schedule performance directly comparable to cost performance for the first time. SV equals EV minus PV. If Earned Value is less than Planned Value, the project has delivered less work than scheduled — and the variance is negative. SPI equals EV divided by PV. A value below 1.0 means the project is delivering at less than the planned pace. An important nuance: a negative Schedule Variance does not always mean the project is in trouble. In Digital Agriculture, many schedule variances are seasonal. If your field trial is scheduled to begin in April but it is currently February, you will always show a negative SV in the winter months — because the planned work has not yet happened, by design. The monitoring system must account for this by comparing against the seasonally-adjusted plan, not a linear projection. What makes SV genuinely diagnostic is combining it with CV. Four combinations are possible, and each tells a different story. Positive CV and positive SV: you are under budget and ahead of schedule. Excellent — but verify that completion data is accurate before celebrating. Negative CV and positive SV: you are ahead of schedule but over budget. You are going fast but spending too much to get there. Cost risk. Positive CV and negative SV: you are behind schedule but under budget. Delivery risk. The work is delayed, but costs are controlled. The priority is removing whatever blocker is slowing delivery. Negative CV and negative SV: you are behind schedule and over budget. This is the critical combination — both cost and delivery are at risk simultaneously, and it requires immediate escalation. These four quadrants are one of the most practically useful tools in the module. Keep them visible at every monitoring review. Now let's address the dimension that financial metrics alone can never fully capture.
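The four quadrants reduce naturally to a small lookup you can embed in a tracker; a sketch, with the CV and SV inputs invented:

```python
# Sketch: classify a work package into the four CV/SV quadrants from the slide.
def quadrant(cv: float, sv: float) -> str:
    if cv >= 0 and sv >= 0:
        return "Under budget, ahead of schedule: verify completion data, then celebrate"
    if cv < 0 and sv >= 0:
        return "Ahead of schedule but over budget: cost risk"
    if cv >= 0:
        return "Behind schedule but under budget: delivery risk, remove the blocker"
    return "Behind schedule AND over budget: critical combination, escalate immediately"

print(quadrant(cv=-6_400, sv=-10_200))   # illustrative inputs
```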
[Audio] Financial metrics tell you how money is being used. Operational KPIs tell you what that money is achieving. In Digital Agriculture, monitoring both — and understanding the link between them — is what separates compliance tracking from genuine performance management. Here is the critical risk: a project can be perfectly on budget, with a CPI of exactly 1.0 and an SPI of exactly 1.0, and still be failing. If the money is being spent correctly but the activities are not producing the expected outcomes — if the sensors are deployed but generating unreliable data, if the training is delivered but farmers are not adopting the tool — then financial compliance is a false signal of health. The table on this slide gives you five specific linkages to monitor in Digital Agriculture projects. Personnel cost per month linked to tasks completed per FTE per month. The combined signal reveals whether people are generating the planned outputs for their cost. The warning pattern: high spend combined with low task completion points to an efficiency problem or a systematic underestimation of effort. Equipment expenditure linked to sensors deployed and operational. Is the hardware investment translating into working infrastructure? The warning pattern: full equipment budget spent with less than seventy percent of sensors operational is a deployment problem — the money is gone but the capability is not there. Training budget utilised linked to farmers trained and actively using the tool. Is the training investment actually changing farmer behaviour? Notice the word actively — training completed is not the same as adoption achieved. The warning pattern: training complete but low sustained adoption points to a content or design problem, not a delivery problem. Field trial expenditure linked to data records collected and validated. Is trial spending generating usable scientific data? The warning pattern: high trial spend combined with poor data quality or low data volume points to a protocol problem or site-specific issues. Subcontracting costs linked to deliverables received. Is subcontracting generating the contracted outputs? The warning pattern: full subcontract paid with only partial deliverable received is a contract management issue that requires immediate escalation.
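One practical way to keep these linkages in front of the team is to encode each pair with its warning pattern, so the monthly review can iterate over them mechanically. A sketch of two of the five pairs; the warning conditions restate the slide's patterns, while the exact cut-off for "low sustained adoption" (here, under three-quarters of target) is an illustrative assumption:

```python
# Sketch: two of the slide's five financial/operational KPI pairs as warning
# checks. The patterns restate the slide; the exact cut-offs are assumptions.

def equipment_warning(budget_spent_pct: float, sensors_operational_pct: float) -> bool:
    # Slide's pattern: full equipment budget spent, under 70% of sensors operational.
    return budget_spent_pct >= 1.0 and sensors_operational_pct < 0.70

def training_warning(budget_used_pct: float, adoption_pct: float, target_pct: float) -> bool:
    # Slide's pattern: training complete but sustained adoption well below target
    # (here: under three-quarters of target, an assumed tolerance).
    return budget_used_pct >= 1.0 and adoption_pct < 0.75 * target_pct

if equipment_warning(budget_spent_pct=1.0, sensors_operational_pct=0.62):
    print("Equipment: money gone but capability not there -- deployment problem")
if training_warning(budget_used_pct=1.0, adoption_pct=0.22, target_pct=0.40):
    print("Training: delivered but not adopted -- content or design problem")
```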
[Audio] This slide organises the operational KPIs specific to Digital Agriculture into three categories: productivity and agronomic, digital adoption, and energy and environmental. Together, these twelve metrics are the operational scorecard for any DA initiative. Let me walk through each category. Productivity and agronomic KPIs measure whether the digital technology is actually improving agricultural performance. Yield improvement — expressed as a percentage increase versus a control group or a pre-intervention baseline — is the most direct measure of agronomic impact. And I want to emphasise a design implication: you cannot calculate yield improvement retrospectively. You need a control group or a baseline measurement from day one, before any intervention. Projects that neglect this cannot demonstrate impact, regardless of how well their technology worked. Input cost reduction per hectare — savings on fertiliser, water, and pesticide — monetises the efficiency gain directly. This is often the most compelling metric for farmer adoption decisions and for investor and funder communication. Time savings per farm task, measured in hours, captures the labour efficiency gains from digital tools. Data coverage rate — the proportion of target hectares with active, validated monitoring — measures how completely the technology has been deployed. Digital adoption KPIs measure whether the technology is actually being used. Farmer adoption rate at three, six, and twelve months is the central metric — but I want to flag the word sustained. Registration is not adoption. A farmer who signed up once and never returned is not an adopter. Sustained use at twelve months is the true measure of adoption success, and it is the hardest to achieve. Platform engagement — average monthly active sessions per registered user — distinguishes genuine ongoing use from initial curiosity. Support requests per user is a proxy for usability: a high support rate means the tool is difficult to use and the onboarding is insufficient. And Net Promoter Score — the likelihood of farmers recommending the tool to peers — is a leading indicator of organic adoption expansion. An NPS above plus thirty in an agricultural context is strong. An NPS below zero means more farmers are actively discouraging adoption than encouraging it, which is a critical signal for any scaling ambition. Energy and environmental KPIs are increasingly required by funding bodies and investors. Water use efficiency per hectare is critical for irrigation technology. Energy intensity per tonne of agricultural output benchmarks the energy efficiency improvement from digital interventions. Carbon footprint change per hectare is increasingly mandatory in Horizon Europe reporting and in impact investment frameworks. And soil health indicators — organic matter percentage, nitrogen retention, compaction — are the long-term sustainability metrics, typically monitored annually. These KPIs need to be built into the project's data collection infrastructure from the start. Retrofitting measurement protocols after field activities have begun is significantly more expensive and less reliable. Before we turn to the management discipline that ensures all of this monitoring data actually gets used, let me pin down the NPS arithmetic, since it recurs in the case study later.
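A minimal sketch, with invented survey scores: promoters are respondents scoring nine or ten on the zero-to-ten recommendation question, detractors score zero to six, and NPS is the percentage of promoters minus the percentage of detractors.

```python
# Sketch: Net Promoter Score from 0-10 survey responses (scores invented).
scores = [9, 10, 8, 7, 6, 10, 9, 3, 8, 9, 10, 5]

promoters = sum(1 for s in scores if s >= 9)    # scores of 9 or 10
detractors = sum(1 for s in scores if s <= 6)   # scores of 0 to 6
nps = 100 * (promoters - detractors) / len(scores)
print(f"NPS = {nps:+.0f}")   # above +30 is strong in an agricultural context
```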
[Audio] Having the right metrics and the right tools is necessary. Having a defined schedule, defined participants, defined outputs, and pre-agreed escalation triggers is what makes the difference between a monitoring system and a monitoring exercise. The table on the slide defines three monitoring frequencies. Let me be specific about what each one means in practice. Weekly monitoring is operational. WP leads, task leads, and field coordinators review task completion percentages, flag procurement issues, identify field delays, and report data quality problems. This does not need to be a formal meeting — a shared task-tracking tool updated consistently by each lead is sufficient. The key output is an issues log, with any blockers escalated immediately to the coordinator. The purpose is early detection. If a sensor shipment is delayed, the coordinator should know this in week one, not at the month-end financial review. Monthly monitoring is financial. The project coordinator and partner finance contacts review the financial tracker together. Burn rate is calculated against the seasonally-adjusted planned rate. CV, SV, CPI, and SPI are computed for each work package. The rolling forecast is updated. Operational KPIs are reviewed against targets. The cash-flow running total is checked. And if any work package shows an Estimate at Completion significantly above budget, a corrective action is planned. This monthly meeting should be short — forty-five to sixty minutes maximum — structured with the same agenda each month, and should end with a written list of action items assigned to specific people with specific deadlines. The discipline of the fixed agenda and the written action list is what converts the meeting from a status update into a management event. Quarterly monitoring is strategic. The full consortium — all WP leads and finance contacts — conducts a complete EVM analysis. The EVM S-curve is reviewed. Estimates at Completion are calculated across all work packages. All operational KPIs are assessed. The scenario plan is updated based on current evidence. And the risk of needing a formal budget amendment is evaluated before it becomes urgent.
[Audio] Monitoring only adds value when deviations trigger concrete decisions and actions. This slide maps the five most common monitoring signals in Digital Agriculture projects to their root causes and the corrective responses that are most effective. Signal one: burn rate significantly above plan. The most common root causes are accelerated field activity, which is not necessarily bad; unplanned procurement; scope expansion; or subcontractor invoice timing. Notice the first thing the slide asks you to do: identify the cost driver first. If the cause is legitimate acceleration — the team moved faster than planned and the spending reflects genuine progress — that is very different from scope creep. Confirm scope is unchanged, and document the acceleration. If it is scope creep, convene a formal scope review immediately. Deferring lower-priority activities is your first lever. Signal two: burn rate significantly below plan. This is the signal that teams most often misread as good news. The root causes in DA are typically: delayed field activity start, stalled procurement, a partner engagement gap, or seasonal activity not yet begun. The critical distinction is between a seasonal delay — which is expected and poses no delivery risk — and a structural disengagement, which is urgent. If a partner is consistently underspending because they are simply not doing the work, that is a consortium management problem that compounds over time. Signal three: negative cost variance in field activities. This typically points to underestimated field labour, higher-than-budgeted equipment costs — supply chain pressures are a frequent cause in DA — or more site visits required than planned. The corrective path: review remaining task estimates with the WP lead, identify whether there is available budget from underspent areas elsewhere, and consider reducing non-critical field scope if reallocation is insufficient. Signal four: negative schedule variance. In Digital Agriculture, the root causes here are distinctive. Regulatory barriers — drone flight approvals, data privacy consent processes — are among the most common and most underestimated causes of delivery delays. Weather disruption and farmer availability constraints are others. Technology readiness gaps are a third. The corrective approach: remove the blocker first. If it is regulatory, get legal advice and engage the relevant authority. If it is technical, escalate to the technology partner. If it is relational, bring in a facilitator. Only if the blocker cannot be removed within tolerance should you revise the work plan and notify funders of milestone date risk. Signal five: low farmer adoption despite training spend. This is the most emotionally difficult signal to act on, because it requires admitting that an investment that has already been made has not produced the intended outcome. The most common root causes are: the tool was not yet user-ready when training happened; training was too early relative to when farmers actually needed the tool; the materials were in the wrong language or format; or the peer network that drives adoption was not engaged. The corrective path: rapid user feedback sessions with a small group of farmers before investing more in the same approach. Partner with local agricultural advisors and farmer organisations. Simplify onboarding. And consider a demonstration farm approach — five early adopters who can show their peers that the tool works in conditions they recognise.
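If you keep these five signal-to-response mappings in machine-readable form in the monitoring workbook, the monthly review can surface the first-response play automatically. A sketch, abridging the slide's wording:

```python
# Sketch: the five monitoring signals mapped to first-response actions,
# abridged from the slide; extend each entry with your project's detail.
CORRECTIVE_PLAYBOOK = {
    "burn_rate_above_plan": "Identify the cost driver; confirm scope unchanged; "
                            "defer lower-priority activities",
    "burn_rate_below_plan": "Distinguish seasonal delay from structural "
                            "disengagement; investigate stalled items",
    "negative_cv_field":    "Re-estimate remaining tasks; reallocate from "
                            "underspend; reduce non-critical field scope",
    "negative_sv":          "Remove the blocker first (regulatory, technical, or "
                            "relational); revise the plan only if it cannot be removed",
    "low_adoption":         "Rapid user feedback; local advisors; simpler "
                            "onboarding; demonstration farms",
}

signal = "negative_sv"   # whatever this month's review surfaced
print(f"{signal} -> {CORRECTIVE_PLAYBOOK[signal]}")
```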
These five corrective paths are your first-response toolkit. With the analytical framework and methods established, let's turn to the tools that make all of this operationally feasible.
[Audio] We have covered the theory: the monitoring and control framework, the core metrics, the KPI linkages, and the corrective action logic. Now we turn to the tools.
[Audio] ERP stands for Enterprise Resource Planning. These are integrated software platforms that manage financial, human resource, and operational data within an organisation. In the context of Digital Agriculture project monitoring, ERP systems are the institutional backbone of financial data — particularly for universities, national research institutes, and larger consortium partners. Understanding how ERP data flows into project monitoring is not optional for a consortium coordinator. The ERP is the authoritative source — not spreadsheets, not email attachments, not verbal estimates from WP leads at monthly calls. The numbers in the ERP are the numbers that count. What do ERP systems provide for project monitoring? Five things. Real-time personnel cost recording. Payroll integration posts staff costs automatically to the project code each month as salaries are processed. This means the coordinator always has an up-to-date view of personnel spend without waiting for partner reports. Purchase order and invoice tracking. From the moment an order is created through delivery and payment, the ERP maintains a complete, auditable procurement trail. This is exactly the documentation that grant auditors require. Cost centre accounting. Every financial transaction is assigned to a project code at the point of recording — not retrospectively. This means costs cannot be misallocated after the fact, and project-level financial statements can be generated at any time. Automated cost statements. On-demand generation of actual costs by category, partner, and time period — precisely the data needed to calculate burn rate, CV, SV, and EVM metrics. The ERP does not do the EVM calculation for you, but it provides the raw inputs. Budget control alerts. Configurable warnings when expenditure approaches or exceeds category allocations. These are your first automated monitoring signals. The most common ERP platforms are SAP S/4HANA and Business One, Oracle NetSuite, Workday, and Microsoft Dynamics 365. Alternative tools for smaller organisations are Xero and QuickBooks, cloud accounting platforms that include project tracking modules — they are adequate for most SME needs when used consistently. Harvest combined with Forecast is a time-tracking and project financial planning toolset. FreeAgent is designed for freelancers and micro-companies with good invoice and time integration. And a well-structured, disciplined Excel workbook can substitute for ERP when maintained rigorously. An Excel-based financial record that is updated inconsistently, passed between team members without version control, and contains manually entered numbers in formula cells is not a reliable data source. The tool is only as good as the discipline with which it is maintained.
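The budget control alerts just described are also easy to prototype outside an ERP while you wait for the institutional configuration. A minimal sketch; the categories, allocations, spend figures, and the eighty-five percent warning level are all illustrative assumptions:

```python
# Sketch: a budget-control alert like the ERP feature described above.
# Categories, allocations, spend, and the 85% warning level are illustrative.
WARN_AT = 0.85

allocations = {"Personnel": 180_000, "Equipment": 90_000, "Travel": 25_000}
spent       = {"Personnel": 120_500, "Equipment": 84_300, "Travel": 12_100}

for category, allocation in allocations.items():
    used = spent[category] / allocation
    if used >= 1.0:
        print(f"{category}: OVER allocation ({used:.0%} used)")
    elif used >= WARN_AT:
        print(f"{category}: approaching allocation ({used:.0%} used)")
```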
[Audio] Cloud-based monitoring tools are what bridge the gap between institutional ERP systems (cloud or not) and the real-time, multi-partner visibility that Digital Agriculture initiatives require. They make financial and operational performance visible to all stakeholders simultaneously, from wherever they are. There are four categories of cloud-based tools, each serving a distinct function in the monitoring ecosystem. Financial dashboards — Power BI, Google Data Studio, Tableau, Qlik — connect to your Excel files or accounting systems and render the data as live, visual dashboards. Budget utilisation gauges, burn rate trend lines, EVM S-curves, financial heatmaps showing which work packages are on track and which are at risk. These are the management displays that allow a coordinator to scan the project's financial health in under five minutes without opening a single spreadsheet. Project management tools — Asana, Monday.com, Microsoft Project Online — track task completion and milestone status. This is where the operational data needed to calculate Earned Value actually lives. If your project management tool shows that Task 7 in WP3 is sixty percent complete, that completion percentage is what you multiply by the budgeted cost to calculate EV. Integrating project management tools with financial dashboards is what makes the KPI linkage we discussed earlier automatic rather than manual. Document and evidence management — SharePoint, Google Drive, Confluence — provides centralised storage for supporting documentation: timesheets, invoices, field notes, trial protocols, procurement records. Every document needed for audit is searchable and accessible at any time. Audit preparation time drops from weeks to days when documentation is maintained continuously rather than assembled at deadline. Communication and alert tools — MS Teams, Slack, email automation — integrate monitoring alerts into daily workflows. When a budget threshold is reached, or a KPI falls below target, the relevant team member receives a notification immediately. This is how the monitoring system reaches people who would otherwise only see financial information at formal review meetings. The data flow diagram at the bottom of the slide is worth following carefully. The financial data originates in the ERP or accounting software. Each month, a cost extract is uploaded to a shared Excel file in the collaborative workspace. Power BI or another dashboard tool automatically pulls from that Excel file on a daily or weekly refresh cycle. The dashboard is published to a shared workspace accessible to all consortium members and funders with appropriate permissions. Automated alerts are triggered when configured thresholds are breached.
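The consolidation step in that data flow, turning the monthly cost extract into the per-work-package actuals the dashboard refreshes from, can be very small. A sketch, assuming pandas is available; the file name and the column headings ("wp", "category", "amount_eur") are assumptions about your extract format, not a standard:

```python
# Sketch: consolidate a monthly ERP cost extract for the dashboard to pull from.
# The file name and the column names ("wp", "category", "amount_eur") are
# assumptions about your extract format; adapt them to what your ERP exports.
import pandas as pd

costs = pd.read_csv("cost_extract_2025-06.csv")      # one row per transaction
by_wp = costs.groupby("wp")["amount_eur"].sum()      # actual cost per work package
by_category = costs.groupby("category")["amount_eur"].sum()

by_wp.to_csv("actuals_by_wp.csv")                    # the file the dashboard refreshes from
print(by_wp)
print(by_category)
```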
[Audio] Despite the sophistication of ERP systems and cloud dashboards, the Excel financial tracker remains the most universally deployable monitoring tool available to any project team. It does not require specialist software licences, works across all partner organisations regardless of their IT infrastructure, and — when structured and maintained consistently — provides all the data needed to calculate burn rate, cost variance, schedule variance, EVM metrics, and Estimates at Completion. The example that the slide refers to is worth walking through carefully, because it demonstrates exactly what a monitoring review looks like in practice. The tracker shows a project at month nine of a thirty-six-month timeline, with a total budget of five hundred and twenty-eight thousand euros, across four work packages. Management shows a CPI of 0.89 — at risk. The budget is being overspent slightly relative to the work delivered. Not critical, but it needs monitoring and a conversation with the management team about where the excess cost is coming from. Technology Development shows a CPI of 1.06 — on track. This work package is actually delivering slightly more than planned for the money spent. It is ahead on both cost and schedule. This is the bright spot in the portfolio, and the coordinator should understand why — because replicating the efficiency gains here in other work packages may be possible. Field Trials shows a CPI of 0.78 — critical. Both CV and SV are significantly negative. The Estimate at Completion is immediate and alarming: two hundred and ten thousand euros divided by 0.78 equals two hundred and sixty-nine thousand euros — a projected overrun of fifty-nine thousand euros on a single work package. Training presents a different pattern. CPI is 1.06 — cost-efficient. But SV is negative — the training programme is behind plan. This is a delivery risk rather than a financial risk. The money is being spent well, but less training has been delivered than scheduled. The corrective action is operational, not financial: find out why farmers are not being trained at the planned pace and remove the barriers. Different work packages require different corrective responses, even within the same monthly review. WP3 needs a financial intervention. WP4 needs an operational one. WP2 is a model to learn from. The tracker makes all of this visible in a single table.
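To check the tracker's arithmetic, here are the four work packages as just described, with the Estimate at Completion projected from each CPI. Only the 528,000-euro total, WP3's 210,000-euro budget, and the CPIs come from the slide; the other three budgets and the status thresholds are illustrative assumptions chosen to be consistent with it:

```python
# Sketch: EAC projection per work package. The 528,000 EUR total, WP3's
# 210,000 EUR budget, and all CPIs are from the slide; the other three
# budgets and the status thresholds are illustrative assumptions.
work_packages = {
    "WP1 Management":             {"budget": 100_000, "cpi": 0.89},  # budget assumed
    "WP2 Technology Development": {"budget": 140_000, "cpi": 1.06},  # budget assumed
    "WP3 Field Trials":           {"budget": 210_000, "cpi": 0.78},
    "WP4 Training":               {"budget":  78_000, "cpi": 1.06},  # budget assumed
}

def status(cpi: float) -> str:
    # Assumed bands, chosen to match the slide's labels.
    return "critical" if cpi < 0.85 else "at risk" if cpi < 0.95 else "on track"

for name, wp in work_packages.items():
    eac = wp["budget"] / wp["cpi"]
    print(f"{name}: CPI {wp['cpi']:.2f} ({status(wp['cpi'])}), "
          f"EAC {eac:,.0f} EUR (delta {eac - wp['budget']:+,.0f})")
```

Running this reproduces the headline finding: WP3 projects to roughly 269,000 euros, a 59,000-euro overrun on a single work package.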
[Audio] This case study takes everything we have covered — the framework, the metrics, the KPI linkages, the corrective action mapping, the tools — and shows how they work together in a real Digital Agriculture project scenario. The context: a three-partner consortium deploying a precision irrigation and soil monitoring system across forty farms in two Mediterranean regions. Budget of three hundred and eighty thousand euros over twenty-four months. Key deliverables: sensor network operational, agronomic advisory platform deployed, two hundred farmers trained, and a fifteen percent reduction in water use documented. Month 6 Review — a financial problem detected early. The finding: burn rate is at one hundred and forty-two percent of plan. Digging deeper, the sensors are more expensive than budgeted — supply chain pressure drove up unit costs — and installation is taking forty percent more labour days than estimated. Both of these were foreseeable risks that sensitivity analysis at budget stage would have flagged as medium to high probability. CPI and SPI are both below 1.0, and the Estimate at Completion is already tracking significantly above budget. The action: rather than simply absorbing the overrun, the team takes three concrete steps. First, negotiate a fixed-price contract with a second sensor supplier to reduce unit cost risk going forward. Second, reallocate twelve thousand euros from the Q3 outreach budget — deferring, not cancelling, outreach activities — to cover the immediate hardware cost pressure. Third, revise the sensor installation methodology by partnering with a local agronomy service to share installation labour and bring down person-day costs. The outcome: the budget overrun is projected at eighteen thousand euros rather than forty-two thousand euros at the month six trajectory. That is a twenty-four-thousand-euro difference — generated entirely by detecting the problem at month six rather than at the periodic report. Month 12 Review — an operational problem that financial metrics alone would have missed. The finding: all financial metrics are back on track. CPI is 0.94, which is within normal performance range. A financial-only monitoring system would say: the project is fine. But the operational KPI dashboard tells a different story. Farmer adoption rate is at twenty-two percent versus a forty percent target at month twelve. The training programme has been completed. The platform has been deployed. The budget has been spent correctly. And yet sustained engagement is critically low, with an NPS score of plus five — essentially neutral. This is precisely the scenario where financial metrics become a false signal of health. The money was spent correctly. The outputs were delivered. But the impact is not materialising. The action: rapid user research with fifteen farmers — a small investment of time and money — reveals two specific barriers. Platform navigation is too complex for low-connectivity rural areas where internet is intermittent, and training materials were produced entirely in standard Greek, excluding two minority-language communities who are part of the pilot. The team redesigns the mobile interface for low-connectivity environments, translates key training modules, and deploys a demonstration farm approach, identifying five early adopters to serve as peer ambassadors. The outcome: adoption reaches fifty-one percent by month eighteen. NPS climbs to plus thirty-eight. 
And the peer ambassador model is replicated across both regions, generating an organic adoption dynamic that no amount of formal training had achieved. The lesson — stated at the bottom of the slide — is the one I want you to take from this entire module: month six monitoring detected a cost problem early enough to contain it. Month twelve monitoring detected an adoption problem that financial metrics alone would never have revealed — because the money had been spent correctly, but the impact was not materialising. Both required different corrective actions. But both required active monitoring.
[Audio] Let's bring into focus six principles you can carry directly into practice. First: monitor, don't just report. Monitoring is continuous, forward-looking, and action-oriented. Reporting is periodic, backward-looking, and documentary. The projects that manage financial and operational surprises most effectively are the ones that anticipated them — months before any report was due. Monitoring is what enables that anticipation. Second: know your core metrics. Burn rate, Earned Value, Cost Variance, and Schedule Variance are the irreducible core of financial monitoring. Together they answer the three questions this module opened with: are we spending at the right pace, for the expected outputs, efficiently? Without these four metrics, you are flying without instruments. Third: use Earned Value Management — even in simplified form. EVM sounds technically demanding, but in practice, calculating PV, EV, and AC for each work package takes thirty minutes per month. The Estimates at Completion and performance indices it generates are worth far more than the time invested. Start simple: one table per work package, updated monthly. The diagnostic power will speak for itself. Fourth: link financial to operational KPIs. This is the insight that separates compliance monitoring from performance management. Spending on budget does not mean delivering impact. For every major financial KPI in your monitoring framework, there must be a paired operational KPI that measures what the spending is actually producing. Define those pairs at project design — not at year two when adoption is disappointing and you wish you had baseline data. Fifth: build the monitoring rhythm. Weekly operational tracking, monthly financial review, quarterly consolidated EVM analysis. And — crucially — agree the escalation triggers before you need them. A CPI below 0.80 should automatically trigger leadership escalation. An operational KPI twenty-five percent below target should automatically trigger a root-cause review. Pre-agreed triggers make monitoring a system, not an exercise. Sixth: act early — always. A deviation detected in month six and corrected in month seven costs a fraction — financially and reputationally — of the same deviation discovered at the periodic report. The entire value of monitoring comes from the speed of the response it enables. Early detection is only valuable if it is paired with the organisational will to act on what is found. A closing thought: the goal of monitoring is not compliance. It is control: the ability to steer a project toward its intended outcomes while there is still time to act.
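One last practical note before we close: the two pre-agreed triggers from the fifth takeaway translate directly into a check you can run at every monthly review. A minimal sketch; the KPI names and current values are illustrative:

```python
# Sketch: the module's two pre-agreed escalation triggers as a monthly check.
# KPI names and current values are illustrative.
CPI_ESCALATION = 0.80      # CPI below 0.80 -> automatic leadership escalation
KPI_SHORTFALL = 0.25       # KPI 25% below target -> automatic root-cause review

def check_triggers(cpi: float, kpis: dict) -> list:
    alerts = []
    if cpi < CPI_ESCALATION:
        alerts.append(f"CPI {cpi:.2f} is below 0.80: escalate to leadership")
    for name, (actual, target) in kpis.items():
        if actual < (1 - KPI_SHORTFALL) * target:
            alerts.append(f"{name} at {actual} vs target {target}: root-cause review")
    return alerts

for alert in check_triggers(cpi=0.78, kpis={"farmer_adoption_pct": (22, 40)}):
    print(alert)
```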
[Audio] Thank you for your engagement throughout this module.