
16 Where to intervene?

Figure 13. Systemcraft. Where to Intervene. (Joyner, 2025)

Identifying the level and kind of intervention (surface or transformational) that aligns with the goals of systems change.

Intervention in complex systems is a contentious issue in systems thinking, as many theories caution against prescriptive, step-by-step methodologies and the very concept of a ‘solution’. Unlike conventional problem-solving approaches, which assume problems can be defined clearly and solved definitively, systems thinking recognises that many issues are emergent, interdependent, and continuously evolving. Solutionism, the tendency to frame every issue as a problem that can be solved with a technical or managerial fix, is common in organisations (Morozov, 2013). We need to acknowledge that tendency in ourselves and our own organisations if we are to explore more effective ways forward.

In their seminal paper, Dilemmas in a General Theory of Planning, Rittel and Webber (1973) lay the foundation for understanding why traditional approaches to problem-solving, including systems interventions, struggle with wicked problems, a concept they introduced.

Their critique of traditional planning and problem-solving approaches highlights key limitations that directly shape how systems interventions should be designed and executed. Unlike tame problems (e.g., engineering problems), where there is a definitive right or wrong answer, wicked problems have no optimal solution, only better or worse interventions. And unlike scientific experiments, wicked problems have no clear point at which they are solved. Systems interventions must therefore focus on managing rather than solving problems, emphasising ongoing learning and adaptation.

However, we intervene in organisational and other social systems regularly; examples include changing salary and reward systems, attempts at cultural change, performance review mechanisms, restructures, policy reforms and interest rate rises. Given that, we should consider whether our intended intervention matches the ambition of our systems change goal. As Rittel and Webber (1973) suggest, while perfect solutions may not exist, some interventions are more effective than others.

Meadows and the idea of the leverage point

Donella Meadows’ Leverage Points – Places to Intervene in a System is a classic tool of systems thinking. Meadows (2008, 145-147) herself acknowledges the problems of the idea of a leverage point.

The idea of leverage points is not unique to systems analysis – it’s embedded in legend: the silver bullet, the trimtab, the miracle cure, the secret passage, the magic password, the single hero who turns the tide of history, the nearly effortless way to cut through or leap over huge obstacles.

I have come up with no quick or easy formulas for finding leverage points in complex and dynamic systems… I offer this list to you with much humility and wanting to leave room for its evolution. What bubbled up in me that day was distilled from decades of rigorous analysis of many kinds of systems done by many smart people. But complex systems are, well, complex. It’s dangerous to generalize about them. What you read here is still a work in progress; it’s not a recipe for finding leverage points. Rather, it’s an invitation to think more broadly about system change.

You can find a short explainer on Meadows’ Leverage Points model here.
You can also go straight to the source in the Meadows (2008) reading.

Tool – Meadows (2008) Leverage Points. 

PLACES TO INTERVENE IN A SYSTEM

(in increasing order of effectiveness)

  1. Constants, Parameters, and Numbers – Adjusting things like taxes, subsidies, and standards.
  2. The Sizes of Buffers and Stocks – Changing reserves or inventories in a system.
  3. The Structure of Material Stocks and Flows – Altering physical infrastructure and logistics.
  4. The Lengths of Delays – Modifying the time between action and response.
  5. The Strength of Negative Feedback Loops – Enhancing stabilising mechanisms.
  6. The Gain of Positive Feedback Loops – Strengthening reinforcing effects.
  7. The Structure of Information Flows – Improving who gets access to information and how.
  8. The Rules of the System – Changing laws, incentives, or operating procedures.
  9. The Power to Add, Change, or Evolve the System Structure – Shifting who makes the rules.
  10. The Goals of the System – Redefining the fundamental purpose of the system.
  11. The Mindset or Paradigm Out of Which the System Arises – Changing the underlying worldview.
  12. The Power to Transcend Paradigms – Seeing beyond existing belief systems to create new possibilities.
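The contrast between the low and high ends of this list can be made concrete with a toy simulation. The sketch below is an illustration invented for this chapter (the model, parameters and numbers are not from Meadows' text): a single stock is driven toward a goal by a balancing feedback loop, and we compare a parameter change (leverage point 1) with a goal change (leverage point 10).

```python
# A minimal stock-and-flow sketch (illustrative only) contrasting a
# low-leverage change (adjusting a parameter) with a high-leverage
# change (changing the system's goal).

def simulate(goal: float, adjustment_rate: float, steps: int = 50,
             stock: float = 0.0) -> float:
    """Balancing loop: each step, the flow closes a fraction of the
    gap between the stock and the goal."""
    for _ in range(steps):
        gap = goal - stock
        stock += adjustment_rate * gap
    return stock

baseline = simulate(goal=100.0, adjustment_rate=0.1)

# Leverage point 1 (parameters): doubling the adjustment rate changes
# how quickly the system settles, but not where it settles.
faster = simulate(goal=100.0, adjustment_rate=0.2)

# Leverage point 10 (goals): changing the goal changes where the whole
# system ends up, with no change to its mechanics.
new_goal = simulate(goal=200.0, adjustment_rate=0.1)

print(f"baseline ≈ {baseline:.1f}, faster ≈ {faster:.1f}, "
      f"new goal ≈ {new_goal:.1f}")
```

Both parameter settings converge on the same stock level near 100; only changing the goal moves the system to a different equilibrium, which is why Meadows ranks goals so much higher than parameters.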

We will use the Robodebt case to illustrate a modified set of these leverage points and provide an overall critique of the intervention.

Examples

Robodebt: A Systems Intervention Analysis Using Meadows’ Leverage Points

The Robodebt scheme, an automated debt recovery program implemented by the Australian government from 2016 to 2020, serves as a powerful example of a failed systems intervention (Morton, 2024). It aimed to improve efficiency in welfare compliance by using automated data matching to identify and recover overpayments to Centrelink recipients. However, the program resulted in significant human and legal consequences, ultimately leading to a Royal Commission that deemed it unlawful and unjust.

To analyse Robodebt as a system intervention, we apply Donella Meadows’ (2008) framework of leverage points, which identifies places to intervene in a system with increasing levels of systemic impact.

1. Constants, Parameters, and Numbers

(Least impactful leverage point)

Robodebt was framed as a simple efficiency improvement: automating the calculation of debts by averaging annual ATO income data to detect discrepancies in Centrelink payments. The parameters of the system—such as debt thresholds, repayment plans, and penalty structures—were adjusted to increase recovery rates. However, these numerical tweaks failed to address underlying problems, such as the system’s flawed logic of assuming stable income throughout the year, leading to false debts.

System Impact:

The focus on financial targets rather than accuracy led to wrongful debts being issued, damaging public trust.
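The flaw in the averaging logic is easy to demonstrate with arithmetic. The sketch below uses entirely invented figures and a simplified, hypothetical means test (the free area, taper and payment rates are not actual Centrelink parameters) to show how averaging a variable annual income manufactures a "debt" where none exists.

```python
# Hypothetical illustration of how annual income averaging can raise
# a false debt against someone with lumpy income. All dollar figures
# and means-test parameters are invented for illustration.

FORTNIGHTS_PER_YEAR = 26

def entitlement(fortnightly_income: float, free_area: float = 150.0,
                taper: float = 0.5, max_payment: float = 550.0) -> float:
    """Simplified means test: the payment tapers off above an income
    free area (illustrative parameters, not real rates)."""
    excess = max(0.0, fortnightly_income - free_area)
    return max(0.0, max_payment - taper * excess)

# A recipient who earned nothing for half the year (while receiving
# payments), then worked full-time for the other half.
actual_fortnightly = [0.0] * 13 + [2000.0] * 13
annual_income = sum(actual_fortnightly)  # 26,000 over the year

# Correct payments, based on honestly reported fortnightly income:
correct_payments = [entitlement(i) for i in actual_fortnightly]

# Robodebt-style averaging assumes the income was earned evenly:
averaged = annual_income / FORTNIGHTS_PER_YEAR  # 1,000 per fortnight
averaged_payments = [entitlement(averaged)] * FORTNIGHTS_PER_YEAR

false_debt = sum(correct_payments) - sum(averaged_payments)
print(f"Correctly paid: ${sum(correct_payments):,.2f}")
print(f"Averaging says: ${sum(averaged_payments):,.2f}")
print(f"'Debt' raised:  ${false_debt:,.2f}")
```

Every individual payment in this example was correct, yet the averaging step concludes the recipient was overpaid, because a flat 1,000 per fortnight partially disqualifies them in the weeks they actually earned nothing.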

2. Buffer Sizes and Reserves

The program reduced the buffer of human oversight, replacing caseworkers with an automated system that did not account for individual circumstances. Welfare recipients were required to provide evidence to contest debts, reversing the traditional burden of proof. This eroded a critical buffer that had previously allowed for case-by-case assessment, disproportionately harming vulnerable individuals.

System Impact:

The removal of human intervention accelerated errors and amplified harm, reducing the system’s capacity to self-correct.

3. Feedback Loops

The scheme disrupted the normal balancing feedback loops designed to correct errors in government administration. Traditional welfare debt systems relied on review mechanisms where recipients could contest calculations with the support of caseworkers. However, Robodebt weakened this corrective feedback by shifting to automated decision-making, requiring citizens to challenge debts through a bureaucratic process with a high burden of proof.

System Impact:

Negative feedback from affected citizens was slow to reach decision-makers, as individual complaints were dismissed as isolated cases rather than systemic failures.
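The dynamics of a weakened, delayed corrective loop can be sketched in a few lines. This is a stylised model with invented numbers, not a model of the actual scheme: "errors" (wrongful debts) enter the system at a constant rate, and the balancing loop only sees them after a reporting delay and only corrects a fraction of what it sees.

```python
# Stylised sketch (invented numbers) of how weakening and delaying a
# corrective feedback loop lets errors accumulate.

from collections import deque

def accumulated_errors(correction_strength: float, delay: int,
                       error_rate: float = 10.0, steps: int = 30) -> float:
    outstanding = 0.0
    # A queue models the reporting delay: decision-makers see the
    # error level from `delay` steps ago, not the current one.
    pipeline = deque([0.0] * delay, maxlen=delay) if delay else None
    for _ in range(steps):
        outstanding += error_rate          # new wrongful debts each step
        if delay:
            visible = pipeline[0]          # stale view of the problem
            pipeline.append(outstanding)
        else:
            visible = outstanding          # prompt, accurate view
        outstanding -= correction_strength * visible
        outstanding = max(outstanding, 0.0)
    return outstanding

# Strong, prompt review (caseworkers in the loop):
healthy = accumulated_errors(correction_strength=0.5, delay=0)

# Weak, delayed review (automation, complaints treated as isolated):
robodebt_like = accumulated_errors(correction_strength=0.05, delay=5)

print(f"prompt review leaves ~{healthy:.0f} errors outstanding")
print(f"weak delayed review leaves ~{robodebt_like:.0f} errors outstanding")
```

With a strong, immediate loop the backlog settles at a small steady level; weaken the correction and add a delay, and errors pile up many times faster, which is the balancing-loop failure described above.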

4. Information Flows

Meadows highlights that who has access to information can be a major leverage point. In Robodebt, affected individuals received debt notices without transparent explanations of how the debt was calculated. Many did not realise that the debts were generated through income averaging rather than concrete evidence of overpayment.

Additionally, internal government warnings—such as legal advice questioning the scheme’s validity—were ignored or not escalated effectively.

System Impact:

Poor transparency meant that affected individuals and legal advocates struggled to challenge the scheme, while key government actors failed to act on early warnings of its flaws.

5. Rules of the System

A key structural change in Robodebt was the reversal of the burden of proof: citizens had to disprove their debt rather than the government proving its validity. This rule change made it harder for individuals to contest debts and enabled the system to function without human oversight.

System Impact:

The rule shift fundamentally altered power dynamics, favouring automated decisions over human rights protections.

6. Self-Organisation (Ability to Adapt the System)

A resilient system allows for self-organisation, meaning it can evolve in response to problems. In Robodebt, rigid bureaucratic structures meant that once the system was in place, it became difficult to halt. There was institutional resistance to adapting, even when significant public criticism and legal challenges emerged.

System Impact:

Instead of adapting, the government doubled down on the scheme until legal action forced its termination.

7. Goals of the System

The stated goal of Robodebt was to improve debt recovery efficiency. However, the implicit goal was to increase revenue collection from vulnerable populations (Morton, 2024) rather than ensure the integrity of welfare payments. This focus on budget savings over fairness shaped system behaviours, leading to a disregard for ethical and legal considerations.

System Impact:

A goal misaligned with public interest meant that the system prioritised financial outcomes over citizen well-being.

8. Paradigms (Deepest Leverage Point)

The underlying paradigm driving Robodebt was the belief that welfare recipients are likely to be fraudulent and that automated enforcement is superior to human judgment. This neoliberal framing of welfare policy as a cost burden rather than a social safety net justified the aggressive approach.

System Impact:

The system reinforced harmful stereotypes about welfare recipients, leading to punitive policies that eroded trust in government services.

9. Transcending the Paradigm

(Most powerful leverage point, but rarely used)

A transformative shift would involve redefining welfare policy, not as a means of policing citizens but as a system designed to support social and economic participation. An alternative paradigm could prioritise citizen dignity, fair process, and human-centred governance, challenging the adversarial relationship between welfare administration and recipients.

System Impact:

Without shifting the paradigm, future interventions risk repeating similar failures under different guises.

Conclusion: Why Robodebt Failed as a Systems Intervention

By applying Meadows’ (2008) leverage points, we can see that Robodebt focused on low-impact system interventions (automation and efficiency metrics) while ignoring higher-leverage points like feedback loops, goals, and paradigms. The program ultimately collapsed because it disrupted balancing mechanisms, ignored ethical considerations, and was driven by a problematic view of welfare recipients (Holmes, 2023).

A more effective systems approach would have involved:

  • Restoring feedback loops (allowing caseworkers and legal reviews to intervene).
  • Ensuring information transparency (explaining debt calculations clearly).
  • Aligning goals with fairness rather than revenue recovery.
  • Challenging the punitive welfare paradigm.

This case illustrates the risks of solutionism (Morozov, 2013) in complex systems, where technological fixes are applied without understanding deeper systemic dynamics. It serves as a cautionary tale for future government automation projects, demonstrating that interventions must be designed with systemic insight, rather than a narrow focus on efficiency.

 

Key Takeaways

  1. Intervening in Complex Systems Requires More than Technical Fixes
    Systems thinking challenges the assumption that problems have definitive solutions.
  2. Meadows’ Leverage Points Provide a Framework for Systemic Change
    Donella Meadows’ (2008) model highlights different intervention points in a system, from low-leverage changes (e.g., adjusting parameters like taxes or incentives) to high-leverage changes (e.g., shifting system goals or underlying paradigms). Effective system interventions require identifying whether a change operates at the surface level or if it has the potential for deeper, transformational impact.
  3. Successful Systems Change Requires Aligning Goals, Feedback, and Paradigms
    The failure of Robodebt demonstrates that interventions must go beyond technical efficiency and consider ethical and systemic implications. Effective systems interventions should restore feedback loops, improve transparency, align system goals with fairness and public interest, and challenge harmful paradigms that drive policy decisions. Without these elements, interventions risk reinforcing systemic failures rather than addressing them.

References

  1. Holmes, C. (2023). Report of the Royal Commission into the Robodebt Scheme. Royal Commission into the Robodebt Scheme.
  2. Joyner, K. (2025). Systems thinking for leaders: A practical guide to engaging with complex problems. Queensland University of Technology. https://qut.pressbooks.pub/systemcraft-systems-thinking/
  3. Meadows, D. H. (2008). Thinking in systems: A primer (D. Wright, Ed.). Chelsea Green Publishing.
  4. Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. PublicAffairs.
  5. Morton, R. (2024). Mean streak. HarperCollins Australia.
  6. Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4(2), 155–169.

 


License


Systems Thinking for Leaders Copyright © by Queensland University of Technology is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.