

Forests are the best case studies for economic excellence – UPSC CSE PYQ 2022



Forests are intricate ecosystems that have evolved over millions of years, thriving on principles that enable their growth, adaptation, and delivery of benefits to the environment and society.

These virtues can serve as a valuable guide for economic systems to flourish, thrive, and yield positive outcomes. This essay explores how economic systems can embody and operate on the virtues that forests possess, along with a few case studies that demonstrate the excellence an economy can achieve by emulating forests.

Firstly, forests are characterized by their resilience: the ability to recover and bounce back from challenging situations. For example, forests can withstand catastrophic events such as forest fires, droughts, and floods. Similarly, in the economy, we need to diversify supply chains to increase resilience to supply shocks caused by political shifts or natural disasters. The COVID-19 pandemic highlighted the importance of such robustness and resilience for economic adaptation and survival during crises and beyond.

Secondly, diversity is another crucial characteristic of forests, encompassing genetic and species diversity. Forests consist of numerous species that interact with each other in complex ways, creating a dynamic ecosystem capable of withstanding external pressures. Higher diversity fosters a healthier natural ecosystem, making forests more productive, stable, and sustainable. Similarly, in the economy, diversity is essential. In rural areas, the focus should shift beyond agriculture to allied areas like fishery, agroforestry, and apiculture.

In urban and suburban areas, emphasis should be placed on small and medium enterprises in addition to heavy industries. Thus, a well-rounded economy should encompass a balanced mix of primary, secondary, and tertiary sectors to ensure sustainability, stability, and productivity.

Another principle of forests is mutual symbiosis, where all organisms in an ecosystem depend on each other directly or indirectly. For instance, bees depend on nectar from flowers, while flowers rely on bees for pollination, benefiting the entire system. Similarly, creating symbiosis in the economy can be advantageous.

For example, establishing agro-processing industries in agricultural areas benefits both sides: farmers gain higher productivity and profit, while the processing units gain access to raw materials at a lower cost. Connecting ancillary micro, small, and medium enterprises (MSMEs) with larger units is also beneficial: MSMEs depend on the steady demand from large units for their survival, while large units obtain the necessary inputs from them.

Forests demonstrate the principle of adaptation, where species adjust to changing temperatures and rainfall patterns. Similarly, in the economy, agricultural methods should adapt to climate change by employing techniques such as micro-irrigation or dryland farming in regions with low rainfall. Manufacturing industries should also upgrade to modern technologies like artificial intelligence and the Internet of Things to address changing demands and better align with needs.

Self-regulation is another important principle of forests. Forest ecosystems possess built-in mechanisms that maintain balance and stability. Predators, for example, help control prey populations, preventing overgrazing and ecological damage. Similarly, economic systems can learn from this principle by developing self-regulating mechanisms that prevent excesses and imbalances. The Reserve Bank of India (RBI) serves as an example of such a mechanism. The RBI regulates the Indian banking system, ensuring banks operate within ethical and financial standards. Its monetary policy framework aims to maintain price stability while supporting economic growth and preventing harm to society and the economy.

Forests exhibit a long-term perspective, taking decades or even centuries to grow and adapt to changing conditions.

Economic systems can adopt a similar perspective by considering the needs of future generations. Norway’s sovereign wealth fund serves as an example of economic excellence achieved through a long-term perspective. Designed to withstand fluctuations in global financial markets, the fund has provided stable revenue for the country’s social welfare programs, allowing Norway to weather financial crises successfully.

Several case studies illustrate the importance of aligning economic models with local ecosystems. Just as a species cannot be forced to live in different types of forests, an economic model cannot be applied uniformly across all regions. Instead, localized approaches, such as promoting locally grown or unique crops, ensure sustainability and a stable economy.

Furthermore, invasive species can adversely affect domestic forests, just as unregulated foreign companies can impact domestic economies. Recognizing complementary niches is crucial, as species with identical niches cannot coexist. Similarly, businesses must identify their strengths and weaknesses to develop niche strategies effectively.

Mangroves, acting as buffer zones, protect territorial landmasses from disasters like cyclones. Similarly, countries like India, in the face of a changing global scenario and widespread globalization, need buffers to sustain their markets and navigate unforeseen events such as the COVID-19 pandemic.

Lastly, it is essential to avoid exploiting forests and natural resources beyond their regenerative capacities, as it can lead to the collapse of the forest ecosystem or the economy in the long run. In essence, sustainable development promotes economic growth with justice and environmental conservation, which is urgently needed on both national and global scales. Forests and economies are interconnected, and their elements coexist and maintain balance. The goal is to maintain an economically viable and ecologically sustainable society by embracing the virtues of forests, ultimately achieving the highest forms of economic excellence.


Frequently Asked Questions (FAQs)

Question: How can forests serve as case studies for economic excellence?

Answer: Forests contribute significantly to economic excellence through various avenues such as timber production, non-timber forest products, ecotourism, and carbon sequestration. Sustainable forest management practices can ensure long-term economic benefits while preserving ecological balance.

Question: What role do forests play in supporting rural economies and livelihoods?

Answer: Forests play a crucial role in supporting rural economies by providing employment opportunities in activities like forestry, agroforestry, and non-timber forest product collection. Additionally, forests contribute to the livelihoods of local communities through ecosystem services like water regulation, soil fertility, and climate regulation.

Question: How can the conservation of forests contribute to economic sustainability?

Answer: Forest conservation is essential for maintaining biodiversity, regulating climate, and ensuring sustainable resource use. Preserving forests helps safeguard ecosystem services that have direct economic implications, such as clean water supply, pollination of crops, and mitigating climate change impacts. Long-term economic sustainability is linked to the responsible management and conservation of forests.




Edukemy Team





Investing in Forests: The Business Case


Forest destruction and degradation are accelerating the severe climate and nature crises facing the world. Halting business practices that contribute to this degradation is a vital priority, and investment in forest conservation and restoration is urgently needed. Investing in forests fulfils multiple corporate priorities. Beyond contributing to tackling the nature and climate crises, it has the potential to sustain business resilience, embody values-led leadership and boost profitability and growth. The economic value of forests is vast: one estimate puts the total value of intact forests and their ecosystem services at as much as $150 trillion, around double the value of global stock markets.




  • Open access
  • Published: 13 July 2022

The 2019–2020 Australian forest fires are a harbinger of decreased prescribed burning effectiveness under rising extreme conditions

  • Hamish Clarke 1 , 2 , 3 , 4 ,
  • Brett Cirulis 4 ,
  • Trent Penman 4 ,
  • Owen Price 1 , 2 ,
  • Matthias M. Boer 2 , 3 &
  • Ross Bradstock 1 , 2 , 3 , 5  

Scientific Reports volume 12, Article number: 11871 (2022)


  • Environmental sciences
  • Natural hazards

There is an imperative for fire agencies to quantify the potential for prescribed burning to mitigate risk to life, property and environmental values while facing changing climates. The 2019–2020 Black Summer fires in eastern Australia raised questions about the effectiveness of prescribed burning in mitigating risk under unprecedented fire conditions. We performed a simulation experiment to test the effects of different rates of prescribed burning treatment on risks posed by wildfire to life, property and infrastructure. In four forested case study landscapes, we found that the risks posed by wildfire were substantially higher under the fire weather conditions of the 2019–2020 season, compared to the full range of long-term historic weather conditions. For area burnt and house loss, the 2019–2020 conditions resulted in more than a doubling of residual risk across the four landscapes, regardless of treatment rate (mean increase of 230%, range 164–360%). Fire managers must prepare for a higher level of residual risk as climate change increases the likelihood of similar or even more dangerous fire seasons.


Introduction

Intrinsic to the earth system for hundreds of millions of years, wildfires are increasingly interacting with humans and the things we value 1 , 2 , 3 . Mega-fires in recent years have caused loss of life and property and widespread environmental and economic impacts in many countries, challenging society’s ability to respond effectively 4 , 5 , 6 . Climate change has already caused changes in some fire regimes, with greater changes projected throughout this century 7 , 8 , 9 . There is a broad network of anthropogenic influences on fire likelihood, exposure and vulnerability including land-use planning, building construction and design, insurance, household and community actions, Indigenous cultural land management, ecosystem management, and research and development. Within this network, fire management agencies play a critical role in wildfire risk mitigation, although our understanding of the interactions between, and relative contributions of, these varied factors towards risk mitigation remains limited. Addressing these gaps is required to support the development and implementation of cost-effective risk management strategies 10 .

Prescribed burning is commonly used in contemporary fire management to alter fuels, with the intention of mitigating risks posed by wildfires to assets. This involves the controlled application of fire in order to modify fuel properties and increase the likelihood of suppressing any wildfires that subsequently occur in the area of the burn 11 , 12 , 13 . Although the effects and effectiveness of prescribed burning have come under intense scientific scrutiny 14 , major knowledge gaps remain in the design of locally tailored, cost-effective treatment strategies that aim to optimise risk mitigation across a range of management values 15 . Crucially, these values may sometimes be in conflict e.g. smoke health impacts from prescribed fire and wildfire 16 or biodiversity conservation and asset protection 17 , necessitating methods for making trade-offs explicit 18 .

The 2019–2020 fires in south-eastern Australia resulted in 33 direct deaths, over 400 smoke-related premature deaths, the loss of over 3000 houses and new records for high severity fire extent and the proportion of area burnt for any forest biome globally 4 , 19 , 20 , 21 . These fires were an important opportunity to test the risk mitigation effects of prescribed burning. One empirical study found that about half the prescribed fires examined resulted in a significant decrease in fire severity, with effects greater for more recent burns and weaker for older burns 22 . Two other empirical studies 6 , 23 found decreases in the probability of high severity fire and house loss after past fire (either prescribed fire or wildfire), but also that this effect was significantly weakened under extreme fire weather conditions, consistent with prior research 24 , 25 . Large ensemble fire behaviour modelling can complement these empirical studies by exploring far more variation in weather conditions, treatment strategies and ignition location than would be possible from the historical record 26 , 27 , 28 . Simulation modelling facilitates estimates of residual risk: the percentage of maximum bushfire risk remaining, in a given area, following a particular fire management scenario, with maximum typically based on a control scenario with no prescribed burning treatment 29 . Simulation modelling also enables tracking of the trajectory of risk in the aftermath of seasons such as the 2019–2020 one, where very large burned areas might be expected to have reduced landscape fuel loads and hence residual risk.
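The residual-risk metric defined above can be made concrete with a minimal sketch. This is our own illustration, not code from the study; the function name and the per-simulation loss inputs are assumptions. Residual risk is the expected loss under a treatment scenario expressed as a percentage of the expected loss under the no-treatment control.

```python
def residual_risk(scenario_losses, control_losses):
    """Residual risk: mean loss under a treatment scenario as a
    percentage of the mean loss under the no-treatment control.

    Inputs are per-simulation losses from a fire behaviour
    ensemble (e.g. area burnt in ha, or houses lost); the names
    and structure here are illustrative assumptions.
    """
    control_mean = sum(control_losses) / len(control_losses)
    scenario_mean = sum(scenario_losses) / len(scenario_losses)
    return 100.0 * scenario_mean / control_mean

# Hypothetical ensembles: treatment lowers expected loss to ~64% of control
print(round(residual_risk([60, 80, 70], [100, 120, 110]), 1))  # 63.6
```

On this definition, a scenario with residual risk above 100%, as found under the 2019–2020 weather conditions, means expected losses exceed those of the untreated long-term control.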

Here we perform a simulation experiment on the effects of different rates of prescribed burning treatment on area burnt and the risks posed by wildfire to multiple values. We consider life, property and infrastructure across four case study landscapes (Fig. 1 ). In particular, we asked:

How much risk mitigation does prescribed burning provide in the weather conditions of 2019–2020 compared to average fire season weather distributions, based on long-term records?

How much subsequent risk reduction did the Black Summer fires provide?

Over what time period will risk reduction be measurable?

Figure 1

Fire behaviour simulations were carried out for four case study landscapes in south-eastern Australia: Casino, Gloucester, Blue Mountains and Jervis Bay. See Table 1 and Study Area in the Methods section for more information. This figure was generated using ArcGIS version 10.8 ( https://www.esri.com/en-us/home ).

The effect of 2019–2020 fire weather conditions on risk mitigation from prescribed burning

Fire weather conditions during the 2019–2020 season were markedly different to preceding years (Fig. 2 ). In all four case study landscapes there were fewer Low-Moderate days (Forest Fire Danger Index (FFDI): 0–12) and often considerably more High, Very High and Severe days (FFDI: 12–74). Only in the Jervis Bay landscape were there substantially more Extreme days (FFDI: 75–99) during the 2019–2020 season, while there were no Catastrophic days (FFDI ≥ 100) in any of the landscapes during 2019–2020.
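The FFDI bands above can be sketched as a simple lookup. This is an illustrative helper, not code from the study; the subdivision of the 12–74 range into High, Very High and Severe follows the standard Australian fire danger rating thresholds, which the text groups together.

```python
def ffdi_category(ffdi):
    """Map a Forest Fire Danger Index (FFDI) value to a danger
    rating band. Thresholds follow the standard Australian bands;
    the study's text reports 12-74 (High/Very High/Severe)
    collectively."""
    if ffdi < 12:
        return "Low-Moderate"
    if ffdi < 25:
        return "High"
    if ffdi < 50:
        return "Very High"
    if ffdi < 75:
        return "Severe"
    if ffdi < 100:
        return "Extreme"
    return "Catastrophic"

# e.g. a day with FFDI 82 falls in the Extreme band
print(ffdi_category(82))  # Extreme
```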

Figure 2

Relative frequency of FFDI categories from half-hourly weather station data during the long-term record (1995–2014 for Casino, 1991–2014 for Gloucester and Blue Mountains, 2000–2014 for Jervis Bay) and during the 2019–2020 season.

The 2019–2020 weather conditions strongly increased the residual risk of area burnt by wildfire and house loss due to wildfire (Figs. 3 , 4 ). For any given treatment rate, the residual risk under 2019–2020 weather conditions far exceeded control conditions (i.e. conditions based on long-term historic weather). For area burnt there was a mean 220% increase in residual risk (range 170–351%), while for house loss the mean increase in residual risk was 244% (range 164–360%). Only under very high rates of treatment was prescribed burning under 2019–2020 conditions able to achieve a residual risk below that of zero treatment in the control scenario, and only for house loss in the Blue Mountains (Fig. 4 ). Elsewhere even the highest rates of treatment (well above rates achieved historically) resulted in a residual risk above that of zero treatment in the control scenario.

Figure 3

Residual risk trajectory of area burnt by wildfire in Casino, Gloucester, Blue Mountains and Jervis Bay. Risk is relative to a scenario with no prescribed burning and long-term weather (the 100% level on the y-axis). Markers represent different annual rates of treatment, colours represent different weather conditions (blue = control i.e. long-term, orange = 2019–2020 fire season).

Figure 4

Residual risk trajectory of houses lost due to wildfire in Casino, Gloucester, Blue Mountains and Jervis Bay. Risk is relative to a scenario with no prescribed burning and long-term weather (the 100% level on the y-axis). Markers represent different annual rates of treatment, colours represent different weather conditions (blue = control i.e. long-term, orange = 2019–2020 fire season).

Prescribed burning resulted in a reduction in residual risk in all landscapes regardless of weather conditions, even though in almost all cases the risk remained higher than for zero treatment in the control scenario (see gradient of markers in Figs. 3 , 4 ). The effect of increasing treatment was much stronger in the Blue Mountains, with a minimum residual risk of area burnt by wildfire under long-term weather conditions of 35%, and a minimum residual risk of house loss of 22%. In the other three landscapes the minimum residual risk was 89% for area burnt and 77% for house loss. The marginal effect of prescribed burning (i.e. the rate of change in risk mitigation with incremental changes in treatment rate) was greater under the extreme 2019–2020 weather conditions, even though the residual risk was much higher as described above. Results for life loss and infrastructure damage were similar (Supplementary Figures 1 –3).

Risk in the aftermath of 2019–2020 fire season

The estimated fuel load reductions due to the 2019–2020 fire season were predicted to cause widespread short-term reductions in residual risk to area burnt by wildfire and house loss, regardless of treatment level (Figs. 5 , 6 ). The potential area burnt by wildfire in 2021 was predicted to be at 30–80% of control (i.e. pre-2019–2020 levels) depending on landscape (Fig. 5 , circles). The predicted reduction in area burnt was greatest in Jervis Bay and Gloucester, which experienced the greatest and second greatest proportion burnt during the 2019–2020 season respectively (Table 1 ). By 2025, the residual risk of area burnt by wildfire climbed to 50–90% of control levels across the four study areas (Fig. 6 ). Results are similar for house loss (Fig. 6 ) i.e. the reductions in future wildfire risk due to the 2019–2020 season are partial and temporary, with residual risk actually exceeding control levels in the Blue Mountains by 2025. The re-accumulation of fuel over time is predicted to lead to greater risk mitigation from prescribed burning by 2025 than by 2021 (compare the gradients of the crosses and the circles in Figs. 5 , 6 ). As with the previous analysis, results for life loss and infrastructure damage were similar (Supplementary Figures 4 -6).

Figure 5

Future residual risk trajectory of area burnt by wildfire in the Casino, Gloucester, Blue Mountains and Jervis Bay case study areas. Risk is relative to a control scenario with pre-2019–2020 fuel load and no prescribed burning (the 100% level on the y-axis, indicated by line). Markers represent different annual treatment rates, colour indicates time period (blue = 2021 i.e. two years after 2019–2020 fire season, orange = 2025 i.e. six years after 2019–2020 fire season). In Jervis Bay the markers for 2, 3 and 5% p.a. treatment reflect edge treatment rates, with landscape treatment capped at 1% p.a. due to the very large area burnt during the 2019–2020 season (81% of the study area).

Figure 6

Future residual risk trajectory of houses lost due to wildfire in the Casino, Gloucester, Blue Mountains and Jervis Bay case study areas. Risk is relative to a control scenario with pre-2019–2020 fuel load and no prescribed burning (the 100% level on the y-axis, indicated by line). Markers represent different annual treatment rates, colour indicates time period (blue = 2021 i.e. two years after 2019–2020 fire season, orange = 2025 i.e. six years after 2019–2020 fire season). In Jervis Bay the markers for 2, 3 and 5% p.a. treatment reflect edge treatment rates, with landscape treatment capped at 1% p.a. due to the very large area burnt during the 2019–2020 season (81% of the study area).

Weather conditions during the 2019–2020 Australian fire season were a substantial risk multiplier compared to long-term weather conditions. The relative risks due to wildfire, quantified in terms of area burnt or house loss, doubled in three of four forested landscapes and more than tripled in the other. While prescribed burning partially mitigated these risks, the effect size was typically dwarfed by the effect of extreme weather conditions. In most cases zero treatment under long-term historic weather conditions yielded a lower residual risk than even the highest prescribed burning rates when combined with the 2019–2020 fire weather conditions. We also found that wildfire risk was likely to be reduced in the aftermath of the 2019–2020 fires, based on the implied fuel reduction associated with the unprecedented area burnt during the 2019–2020 season. However, the residual risk was still substantial in some areas and was predicted to rise steadily in the coming years, regardless of prescribed burning treatment rates.

Prescribed burning can mitigate a range of risks posed by wildfire, however residual risk can be substantial and is likely to increase strongly during severe fire weather conditions 6 , 24 . We found that the risk mitigation available from prescribed burning varies considerably depending on where it is carried out and which management values are being targeted, consistent with previous modelling studies that suggest there is no ‘one size fits all’ solution to prescribed burning treatment 15 , 16 . Of the factors influencing regional variation in prescribed burning effectiveness, the configuration of assets and the type, amount and condition of native vegetation are likely to be important. The Blue Mountains landscape, where area burnt by wildfire responded most strongly to treatment, has a relatively high proportion of native vegetation compared to the other landscapes, particularly Casino and Gloucester which are mostly cleared. The Blue Mountains also has an unusual combination of a high population concentrated in a linear strip of settlements surrounded by forest, which may contribute to greater returns on treatment (Fig. 1 ). Future research could systematically investigate the relationship between risk mitigation and properties of key variables such as asset distribution, vegetation and burn blocks for an expanded selection of landscapes. Although residual risk was greatly reduced in some areas after the 2019–2020 fire season, it remained substantial in other areas and was generally predicted to rise rapidly with fuel re-accumulation over the following five years. More work is needed to understand potential feedbacks between increasing fire activity, fuel accumulation and subsequent fire activity 8 .

Our conclusions are dependent on a number of assumptions associated with our fire behaviour simulation approach, including the foundational premise that fire spread is a function of fire weather, fuel load and factors such as topography. Fire behaviour simulators built on these assumptions have known biases and perform better when these are addressed, although their tendency to underestimate extreme fire behaviour suggests our results may be conservative 30 , 31 , 32 . The approach also assumes that both wildfires and prescribed burns consume equivalent quantities of fuel and that this fuel starts to re-accumulate after fire as a negative exponential function of time since fire, eventually stabilising at an equilibrium amount. In fact fuel consumption rates vary considerably within a given fire but also between wildfires and prescribed fires, which consume less fuel 33 , 34 . This also points towards our results being conservative due to potentially overestimating the mitigation effect of prescribed burning. Furthermore the accumulation of fuel post fire depends on the vegetation type, soil and climate 35 . Our experiments on the trajectory of risk after the 2019–2020 fire season may be limited by the relatively short amount of time allowed to elapse, which may be insufficient for prescribed burning treatment effects to become apparent. More broadly, our study design involves repeated instances of a single wildfire and thus does not capture the fire regime i.e. the effects of multiple fires in space and time, nor does it factor in future changes in climate, fuel or fuel moisture 36 . We did not model suppression, which is a complex function of fuel type, fuel load, fire behaviour, weather, topography and fire management decision making 37 . Suppression can reduce a range of risks although it is less effective under extreme weather conditions 38 , 39 , 40 .

Fire-prone landscapes around the world have experienced increasingly severe fire weather conditions 20 , 41 . The extreme conditions of the 2019–2020 fire season are projected to occur more frequently in the 21st century 42 . Our results suggest that climate change could seriously undermine the role played by prescribed burning in wildfire risk mitigation, as found in previous studies 43 , 44 . Using landscape simulation modelling in the Blue Mountains and the Woronora Plateau (about 100 km north of our Jervis Bay landscape), Bradstock et al. 43 found that the rate of prescribed burning treatment would need to quintuple or more by 2050 to counteract the effects of climate change on risk mitigation in terms of measures such as area burned and intensity of unplanned fire. Our study assumes that similar or greater treatment rates will be possible in future, which may not be the case depending on the prevalence of suitable prescribed burning weather conditions 45 , 46 . These findings demonstrate that there can be no wildfire risk mitigation without effective climate change mitigation 47 . Our research reinforces the need for comprehensive, transparent and objective evaluation of the effectiveness of existing attempts to mitigate wildfire risk across a range of management objectives, with future work potentially targeting additional management values such as smoke production and associated health impacts, agriculture and tourism impacts, and more nuanced measures of environmental impact. Such an evaluation could inform the trial and implementation of a range of locally tailored risk mitigation measures that address the full complexity of fire across preparation, response and recovery phases, such as prescribed burning, mechanical fuel reduction, anthropogenic ignition management, suppression, planning, construction and community engagement.

We selected four case study landscapes that were extensively impacted during the 2019–2020 fire season: Casino (69,362 ha burnt), Gloucester (132,281 ha), Blue Mountains (119,626 ha) and Jervis Bay (137,049 ha) (Fig. 1; Table 1). All landscapes are forested, have considerable Wildland Urban Interface (WUI), and have a history of both wildfire and prescribed fire. Case study landscapes were approximately 200,000 ha (Table 1), intended to align with the upper limit of the size distribution of wildfires in local ecosystems (during the 2019–2020 fire season, the Gospers Mountain fire, the result of mergers between several large fires in the Blue Mountains World Heritage Area and neighbouring areas, had a final burned area of over 500,000 ha).

The dominant land cover in the Casino landscape is cleared or modified vegetation (58%). The main native vegetation is dry sclerophyll forest with a shrub/grass understorey (17% of the study area) followed by wet sclerophyll forest with a grassy understorey (9%). The Casino area has a population of about 12,000, mostly concentrated in the town of Casino with a small number dispersed on semi-rural properties. Cleared or modified vegetation is also the dominant land cover in the Gloucester landscape (60%). The main native vegetation is wet sclerophyll forest with a grassy understorey (23% of the study area) followed by wet sclerophyll forest with a shrubby understorey (8%). The population is about 30,000, most of which live in the town of Taree on the eastern edge of the landscape with the remainder in smaller towns and semi-rural properties. The main native vegetation in the Blue Mountains landscape is dry sclerophyll forest with a shrubby understorey (63% of the study area) followed by dry sclerophyll forest with a shrub/grass understorey (9%). About 11% of the landscape is cleared or modified vegetation. Around 100,000 people live within the area, mainly living in a string of suburbs along a highway which bisects the region. The main native vegetation in the Jervis Bay landscape is dry sclerophyll forest with a shrubby understorey (40% of the study area) followed by wet sclerophyll forest with a grassy understorey (17%). Around 14% of the landscape is cleared or modified vegetation. About 50,000 people live within the area, mostly in the township of Nowra in the northeast with most of the remainder in coastal suburbs in the southeast.

All four landscapes are examples of the temperate eucalypt forest fire regime niche, characterised by high productivity, with infrequent low-intensity litter fires in spring and medium-intensity shrub fires in spring and summer 48. Fire intensity typically ranges from 1000 to 5000 kW m−1, although extreme weather conditions may support crown fires where fire intensity can reach 10,000–50,000 kW m−1. Fire interval is around 5–20 years, although it can be as long as 20–100 years 48. Contemporary prescribed burning rates average 2.5% p.a. in the Blue Mountains landscape and range from 0.4 to 0.6% p.a. in the Casino, Gloucester and Jervis Bay landscapes.
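
The kW m−1 values quoted above are fireline intensities. The paper does not define them, but in the fire behaviour literature fireline intensity is conventionally Byram's, the product of fuel heat yield, fuel consumed and rate of spread:

```latex
I = H \, w \, r
```

where I is fireline intensity (kW m−1), H the heat yield of the fuel (kJ kg−1), w the fuel consumed per unit area (kg m−2) and r the rate of spread (m s−1); the units multiply out to kJ m−1 s−1 = kW m−1.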

Phoenix fire simulator

Fires were simulated using PHOENIX RapidFire v4.0.0.7 49, which is commonly applied in operations across south-eastern Australian states, including NSW 17. Fire growth and rate of spread are calculated from Huygens’ propagation principle of fire edge 50, a modified McArthur Mk5 forest fire behaviour model 51, 52 and a generalisation of the CSIRO southern grassland fire spread model 53. A 30-m resolution digital elevation model was included to allow PHOENIX to incorporate topographic effects on fire behaviour. Vegetation mapping and fuel accumulation models for major vegetation types of the case study landscapes were supplied by the NSW Rural Fire Service. Simulations were run at 180-m grid resolution, and model output included flame length, ember density, convection and intensity.
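
To illustrate the Huygens propagation principle, the sketch below treats each perimeter vertex as an independent ignition source that expands as a wind-aligned ellipse, a common simplification of the wavelet construction. PHOENIX itself computes the full envelope over gridded fuels, weather and terrain; the spread parameters and the convex, single-ignition geometry here are illustrative assumptions only.

```python
import math

def elliptical_ros(theta, head_ros=10.0, ecc=0.7):
    # Polar equation of an ellipse about its focus: spread is fastest
    # downwind (theta = 0) and slowest upwind (theta = pi).
    # head_ros (m/min) and ecc are illustrative, not PHOENIX parameters.
    return head_ros * (1.0 - ecc) / (1.0 - ecc * math.cos(theta))

def huygens_step(perimeter, ignition, wind_dir, dt):
    """One Huygens step for a convex perimeter around a known ignition
    point: each vertex moves outward along its radial direction at the
    elliptical rate of spread for that direction relative to the wind."""
    cx, cy = ignition
    wx, wy = math.cos(wind_dir), math.sin(wind_dir)
    grown = []
    for x, y in perimeter:
        dx, dy = x - cx, y - cy
        r = math.hypot(dx, dy) or 1.0
        ux, uy = dx / r, dy / r                      # outward unit vector
        theta = math.acos(max(-1.0, min(1.0, ux * wx + uy * wy)))
        step = elliptical_ros(theta) * dt
        grown.append((x + ux * step, y + uy * step))
    return grown

# A circular ignition of radius 10 m, wind blowing toward +x:
start = [(10 * math.cos(a), 10 * math.sin(a))
         for a in [i * 2 * math.pi / 36 for i in range(36)]]
after = huygens_step(start, (0.0, 0.0), wind_dir=0.0, dt=1.0)
```

After one step the head of the fire (the +x vertex) has advanced the full head rate of spread, while the upwind edge has barely moved, reproducing the characteristic elongated fire shape.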

Scenario parameterisation

PHOENIX estimates fuel loads using separate fuel accumulation curves for combined surface and near-surface, elevated and bark fuels 54. These curves are based on a negative exponential growth function and vary among vegetation types 55. The treatable portion of each case study landscape was defined as all fuels except crop, farm and urban landcover, and comprised 38% of the Casino landscape, 52% of the Gloucester landscape, 70% of the Blue Mountains landscape and 83% of the Jervis Bay landscape. Treatable fuels were separated into two types of management-sized ‘burn blocks’. Edge blocks were adjacent to property and settlements, while landscape blocks were more remote and larger. For edge blocks, a minimum burn interval of 5 years was used as it reflects what is feasible for agencies to achieve while allowing fuel recovery after burning. For landscape blocks, the minimum burn interval is the minimum tolerable fire interval for the majority of the vegetation type within each block, as represented by NSW Department of Planning and Environment mapping. In each case study landscape, 1000 ignition locations were selected based on an empirical model developed and tested for similar forest types 56. Individual fires were ignited at 11:00 h local time and propagated for 12 h, unless self-extinguished within this period. This time period provides a standardised approach for risk estimation 15, 57 and was chosen as a compromise between a sufficient amount of time for significant wildfire impacts to be realised 58, while avoiding the factorial multiplication of weather conditions spanning multiple days. We tested seven combinations of equal edge and landscape treatment (0, 1, 2, 3, 5, 10, 15% p.a.), resulting in a range of fuel age classes for each simulation (Supplementary Figs. 7–10).
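
The negative exponential accumulation underlying these curves can be sketched in its Olson form; the asymptote and rate constant below are hypothetical values, not the NSW Rural Fire Service curves used in the study.

```python
import math

def fuel_load(tsf, steady_state, k):
    # Olson-style negative exponential accumulation: fuel load (t/ha)
    # tsf years after fire rises toward the steady_state asymptote at
    # rate k per year. Parameter values here are hypothetical.
    return steady_state * (1.0 - math.exp(-k * tsf))

# e.g. a surface/near-surface curve with a 12 t/ha asymptote:
five_year = fuel_load(5, steady_state=12.0, k=0.25)     # partly recovered
twenty_year = fuel_load(20, steady_state=12.0, k=0.25)  # near asymptote
```

This shape is why the minimum burn intervals matter: with a 5-year edge interval, blocks are re-treated while fuels are still well below their long-unburnt levels.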
Half-hourly weather data was drawn from the full record of observations at the nearest Bureau of Meteorology automatic weather station for each case study landscape (Casino 1995–2014, Gloucester 1991–2014, Blue Mountains 1991–2014, Jervis Bay 2000–2014). Simulations were repeated for each of the fire danger categories that had been recorded during the fire season (spring–summer) in each case study landscape, i.e. Low–Moderate (0–11), High (12–24), Very High (25–49), Severe (50–74), Extreme (75–99) and, in Jervis Bay only, Catastrophic (100+). The results from the simulated fires were used to estimate the impact on five management values (see “Impact estimation” section below) and then adjusted for the frequency of fire weather conditions contributing to ignitions and fire spread to estimate annualised risk (see “Risk estimation” section below).
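
These category ranges are values of the McArthur Forest Fire Danger Index (FFDI). The index can be computed from the Noble, Gill & Bary (1980) expression of the McArthur Mk5 meter (the same formulation cited for the fire behaviour model above); the weather inputs in the example are illustrative, not station data from the study.

```python
import math

def ffdi(temp_c, rh_pct, wind_kmh, drought_factor):
    # McArthur Mk5 Forest Fire Danger Index as expressed by
    # Noble, Gill & Bary (1980): exponential in drought factor,
    # humidity, temperature and wind speed.
    return 2.0 * math.exp(-0.450
                          + 0.987 * math.log(drought_factor)
                          - 0.0345 * rh_pct
                          + 0.0338 * temp_c
                          + 0.0234 * wind_kmh)

def danger_category(index):
    # The category bins used for the simulations (Catastrophic = 100+).
    for name, lower in [("Catastrophic", 100), ("Extreme", 75),
                        ("Severe", 50), ("Very High", 25), ("High", 12)]:
        if index >= lower:
            return name
    return "Low-Moderate"

# A hot, dry, windy spring day (illustrative inputs):
index = ffdi(temp_c=30, rh_pct=20, wind_kmh=30, drought_factor=10)
```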

Two sets of simulations were run to explore the effect of 2019–2020 fire weather conditions on prescribed burning effectiveness: (1) with weather drawn from the long-term historical record of fire season observations, referred to as “control”; (2) with weather drawn only from the 2019–2020 fire season, referred to as “2019–2020”. For Casino the period of active fire in the 2019–2020 fire season was September 2019 to December 2019, for Gloucester and the Blue Mountains it was October 2019 to December 2019, and for Jervis Bay it was December 2019 to January 2020. The relative frequency of fire weather conditions in each scenario was incorporated into risk estimation through a Bayesian decision network (see “Risk estimation” section below).

Three sets of simulations were run to explore the trajectory of risk in the aftermath of 2019–2020 fire season: (1) with a fire history excluding the 2019–2020 fire season and with no prescribed burning, referred to as “control”, (2) with a fire history including the 2019–2020 fire season, and with prescribed burning and fuel accumulation through to 2021 i.e. 2 years after the 2019–2020 season (“2021”), and (3) the same as (2) except through to 2025 (“2025”). Due to the very large area burnt during the 2019–2020 season, prescribed burning treatment rates (edge and landscape) were capped at 5% p.a. for Casino, Gloucester and the Blue Mountains. In Jervis Bay, where 81% of the study area was burned by the 2019–2020 fires, edge treatment was capped at 5% p.a. and landscape treatment rate was capped at 1% p.a.

Impact estimation

Effectiveness of prescribed burning at mitigating wildfire impacts was assessed based on area burnt and four management values: house loss, loss of human life, length of powerline damaged and length of road damaged. Area burnt was a direct output from the fire behaviour simulations. The probability of house loss was calculated as a function of predicted ember density, flame length and convection as presented in 59. House loss was calculated per 180-m model grid cell and then multiplied by the number of houses in that grid cell to estimate the number of houses lost per fire. Statistical loss of human life was based on house loss (using the house loss function), the number of houses exposed (using simulation output) and the number of people exposed to fire 60. House location and population density data were derived from national datasets (61, Australian Bureau of Statistics) and combined to give the total number of people exposed to fire. Road and powerline location data were supplied by the NSW Department of Planning and Environment. In the absence of empirical data, a simple threshold of 10,000 kW m−1 was used to classify roads or powerlines within each 180-m grid cell as damaged by fire or not. Impacts were estimated from simulation output and the datasets described above, resulting in a distribution of area burnt and impacts on the four management values, corresponding to different weather, treatment and ignition scenarios.
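
The per-cell aggregation of impacts can be sketched as follows. The house-loss function of ref. 59 is not reproduced here, so the per-cell probabilities and all example numbers are hypothetical; only the aggregation logic and the 10,000 kW m−1 threshold come from the text.

```python
def expected_house_loss(cells):
    # Each 180-m grid cell contributes its house-loss probability
    # (from the house-loss function of ref. 59, not reproduced here)
    # multiplied by the number of houses in that cell.
    return sum(p_loss * houses for p_loss, houses in cells)

def damaged_length(intensity_kw_m, length_km, threshold=10_000):
    # Roads or powerlines within a cell are classed as damaged when
    # simulated intensity reaches the 10,000 kW/m threshold.
    return length_km if intensity_kw_m >= threshold else 0.0

# Hypothetical cells touched by one simulated fire: (P(loss), houses)
fire_cells = [(0.8, 12), (0.3, 40), (0.05, 100)]
houses_lost = expected_house_loss(fire_cells)
road_hit = damaged_length(12_500, 0.18)  # a 180-m cell crossed by road
```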

Risk estimation

Building on previous studies 15, 57, a Bayesian Decision Network (BDN) approach was used to generate residual risk estimates and hence evaluate the risk mitigation available from prescribed burning. We adopted the recommendations of Marcot et al. 62 and Chen and Pollino 63 in designing our BDN. A conceptual model was adapted from previous studies of fire management 64 and used to create an influence diagram. In this model, fire weather affected ignition probability; fire weather and treatment option (a decision node) affected the distribution of fire sizes; and fire weather, fire size and fire management affected the amount of loss for a given management value. To translate the influence diagram into risk estimates, probability distribution tables were populated for the fire weather node (based on weather station data) and for the fire size and management value impact nodes (based on the impact estimation step described above). The BDN then generated output values for each of the different prescribed burning treatment scenarios, based on the influence diagram.
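
At its core, the BDN output for one treatment option is an expectation over the fire-weather node: simulated losses are weighted by the observed frequency of each weather category. A minimal sketch, with entirely hypothetical probabilities and loss values:

```python
def expected_loss(p_weather, loss_table, treatment):
    # Marginalise over the fire-weather node: the frequency-weighted
    # mean of simulated losses for one treatment option. All numbers
    # in the example below are hypothetical.
    return sum(p_weather[w] * loss_table[(w, treatment)]
               for w in p_weather)

p_weather = {"High": 0.6, "Very High": 0.3, "Extreme": 0.1}
loss_table = {("High", "5% p.a."): 2.0,        # e.g. houses lost
              ("Very High", "5% p.a."): 10.0,
              ("Extreme", "5% p.a."): 60.0}
raw_risk = expected_loss(p_weather, loss_table, "5% p.a.")
```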

Continuous data were discretised on a log scale across the range of values, iteratively, to get a relatively even distribution across non-zero values. For each fire danger category, we calculated the average maximum daily Forest Fire Danger Index (FFDI) during the fire season for each study area, using the same weather station data used to drive PHOENIX. FFDI values were then separated into fire days (fire recorded within 200 km of the weather station) and non-fire days. The relative frequency of fire days was then used to drive ignitions in the BDN. Raw risk values were the expected node likelihoods for area burnt, house loss, life loss, length of powerline damaged and length of road damaged. These raw values were converted into residual risk values by dividing them by the risk value associated with the zero edge, zero landscape treatment scenario. These risks can be validly compared between regions because they reflect the observed distribution of fire weather conditions in each area. Further details of fire behaviour simulations, impact estimation and risk estimation can be found in 15, 57.
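
Residual risk as defined here is simply the treated-to-untreated ratio; a one-line sketch with hypothetical raw risk values:

```python
def residual_risk(raw_risk_treated, raw_risk_untreated):
    # Expected loss under a treatment scenario divided by expected
    # loss under zero edge and zero landscape treatment: 1.0 means
    # no mitigation, 0.5 means risk halved.
    return raw_risk_treated / raw_risk_untreated

# Hypothetical expected annual house losses, treated vs untreated:
rr = residual_risk(raw_risk_treated=10.2, raw_risk_untreated=17.0)
```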

Data availability

The datasets generated from fire simulation and risk estimation for the current study are available from the corresponding author on reasonable request. Weather data is available from the Australian Bureau of Meteorology ( http://www.bom.gov.au ). Vegetation mapping and fuel accumulation models are available from the NSW Rural Fire Service ( https://www.rfs.nsw.gov.au ). Fire-sensitive vegetation, road and powerline location data is available from the NSW Department of Planning and Environment ( https://www.environment.nsw.gov.au ).

Code availability

Code to prepare the plots is available on request.

Bowman, D. M. J. S. et al. Fire in the earth system. Science 324 , 481–484 (2009).

Gill, A. M., Stephens, S. L. & Cary, G. J. The worldwide “wildfire” problem. Ecol. Appl. 23 , 438–454 (2013).

Moritz, M. A. et al. Learning to coexist with wildfire. Nature 515 , 58–66 (2014).

Filkov, A. I., Ngo, T., Matthews, S., Telfer, S. & Penman, T. D. Impact of Australia’s catastrophic 2019/20 bushfire season on communities and environment: Retrospective analysis and current trends. J. Saf. Sci. Res. 1 , 44–56 (2020).

Duane, A., Castellnou, M. & Brotons, L. Towards a comprehensive look at global drivers of novel extreme wildfire events. Clim. Chang. 165 , 43. https://doi.org/10.1007/s10584-021-03066-4 (2021).

Nolan, R. H. et al. What do the Australian Black Summer fires signify for the global fire crisis?. Fire. 4 , 97. https://doi.org/10.3390/fire4040097 (2021).

Williams, A. P. et al. Observed impacts of anthropogenic climate change on wildfire in California. Earth’s Future 7 , 892–910 (2019).

Abatzoglou, J. T. et al. Projected increases in western US forest fire despite growing fuel constraints. Commun. Earth. Environ. 2 , 227. https://doi.org/10.1038/s43247-021-00299-0 (2021).

Canadell, J. G. et al. Multi-decadal increase of forest burned area in Australia is linked to climate change. Nat. Commun. 12 , 6921. https://doi.org/10.1038/s41467-021-27225-4 (2021).

Wunder, S. et al. Resilient landscapes to prevent catastrophic forest fires: Socioeconomic insights towards a new paradigm. For. Policy Econ. 128 , 102458. https://doi.org/10.1016/j.forpol.2021.102458 (2021).

Burrows, N. & McCaw, L. Prescribed burning in southwestern Australian forests. Front. Ecol. Environ. 11 , e25–e34. https://doi.org/10.1890/120356 (2013).

Duff, T. J., Cawson, J. G. & Penman, T. D. Prescribed burning. In Encyclopedia of Wildfires and Wildland-Urban Interface (WUI) Fires (ed. Manzello, S. L.) 1–11 (Springer International Publishing, 2019).

Penman, T. D., Collins, L., Duff, T. J., Price, O. F. & Cary, G. J. Scientific evidence regarding effectiveness of prescribed burning. In Prescribed Burning in Australasia: The Science, Practice and Politics of Burning the Bush (ed. Bushfire and Natural Hazards CRC) 99–111 (AFAC, 2020).

Russell-Smith, J., McCaw, L. & Leavesley, A. Adaptive prescribed burning in Australia for the early 21st Century—Context, status, challenges. Int. J. Wildland Fire. 29 , 305 (2020).

Cirulis, B. et al. Quantification of inter-regional differences in risk mitigation from prescribed burning across multiple management values. Int. J. Wildland Fire. 29 , 414–426 (2019).

Borchers-Arriagada, N. et al. Smoke health costs and the calculus for wildfires fuel management: A modelling study. Lancet Planet. Health 5 , e608–e619. https://doi.org/10.1016/S2542-5196(21)00198-4 (2021).

Bentley, P. D. & Penman, T. D. Is there an inherent conflict in managing fire for people and conservation?. Int. J. Wildland Fire 26 , 455–468 (2017).

Driscoll, D. A. et al. Resolving future fire management conflicts using multicriteria decision making. Conserv. Biol. 30 , 196–205 (2016).

Johnston, F. H. et al. Unprecedented health costs of smoke-related PM2.5 from the 2019–20 Australian megafires. Nat. Sustain. 4 , 42–47. https://doi.org/10.1038/s41893-020-00610-5 (2021).

Collins, L. et al. The 2019/2020 mega-fires exposed Australian ecosystems to an unprecedented extent of high-severity fire. Environ. Res. Lett. 16 , 044029. https://doi.org/10.1088/1748-9326/abeb9e (2021).

Boer, M. M., Resco de Dios, V. & Bradstock, R. A. Unprecedented burn area of Australian mega forest fires. Nat. Clim. Change. 10 , 171–172. https://doi.org/10.1038/s41558-020-0716-1 (2020).

Hislop, S., Stone, C., Haywood, A. & Skidmore, A. The effectiveness of fuel reduction burning for wildfire mitigation in sclerophyll forests. Aust. For. 83 , 255–264 (2020).

Bowman, D. M. J. S., Williamson, G. J., Gibson, R. K., Bradstock, R. A. & Keenan, R. J. The severity and extent of the Australia 2019–20 Eucalyptus forest fires are not the legacy of forest management. Nat. Ecol. Evol. 5 , 1003–1010 (2021).

Price, O. F. & Bradstock, R. A. The efficacy of fuel treatment in mitigating property loss during wildfires: Insights from analysis of the severity of the catastrophic fires in 2009 in Victoria, Australia. J. Environ. Manag. 113 , 146–157 (2012).

Parks, S. A., Holsinger, L. M., Miller, C. & Nelson, C. R. Wildland fire as a self-regulating mechanism: The role of previous burns and weather in limiting fire progression. Ecol. Appl. 25 , 1478–1492 (2015).

Ager, A. A., Houtman, R. M., Day, M. A., Ringo, C. & Palaiologou, P. Tradeoffs between US national forest harvest targets and fuel management to reduce wildfire transmission to the wildland urban interface. For. Ecol. Manag. 434 , 99–109 (2019).

Alcasena, F. J., Ager, A. A., Bailey, J. D., Pineda, N. & Vega-García, C. Towards a comprehensive wildfire management strategy for Mediterranean areas: Framework development and implementation in Catalonia Spain. J. Environ. Manag. 231 , 303–320 (2019).

Ager, A. A. et al. Predicting paradise: Modeling future wildfire disasters in the western US. Sci. Total Environ. 784 , 147057. https://doi.org/10.1016/j.scitotenv.2021.147057 (2021).

Victorian Government. Safer Together: A new approach to reducing the risk of bushfire in Victoria (The State of Victoria, 2015).

Faggian, N. et al. Final Report: An evaluation of fire spread simulators used in Australia (Bureau of Meteorology, 2017).

Penman, T. D. et al. Effect of weather forecast errors on fire growth model projections. Int. J. Wildland Fire. 29 , 983–994 (2020).

Penman, T. D. et al. Improved accuracy of wildfire simulations using fuel hazard estimates based on environmental data. J. Environ. Manag. 301 , 113789. https://doi.org/10.1016/j.jenvman.2021.113789 (2022).

Price, O., Nolan, R. H. & Samson, S. A. Fuel consumption rates in eucalypt forest during hazard reduction burns, cultural burns and wildfires. For. Ecol. Manag. 505 , 119894. https://doi.org/10.1016/j.foreco.2021.119894 (2022).

Nolan, R. H. et al. Framework for assessing live fine fuel loads and biomass consumption during fire. For. Ecol. Manag. 504 , 119830. https://doi.org/10.1016/j.foreco.2021.119830 (2022).

McColl-Gausden, S. C., Bennett, L. T., Duff, T. J., Cawson, J. G. & Penman, T. D. Climatic and edaphic gradients predict variation in wildland fuel hazard in south-eastern Australia. Ecography 43 , 443–455 (2020).

McColl-Gausden, S. C., Bennett, L. T., Ababei, D. A., Clarke, H. G. & Penman, T. D. Future fire regimes increase risks to obligate-seeder forests. Divers. Distrib. https://doi.org/10.1111/ddi.13417 (2021).

Arienti, M. C., Cumming, S. G. & Boutin, S. Empirical models of forest fire initial attack success probabilities: The effects of fuels, anthropogenic linear features, fire weather, and management. Can. J. For. Res. 36 , 3155–3166 (2006).

Plucinski, M. P. Factors affecting containment area and time of Australian forest fires featuring aerial suppression. For. Sci. 58 , 390–398 (2012).

Penman, T. D. et al. Examining the relative effects of fire weather, suppression and fuel treatment on fire behaviour—A simulation study. J. Environ. Manag. 131 , 325–333 (2013).

Cary, G. J., Davies, I. D., Bradstock, R. A., Keane, R. E. & Flannigan, M. D. Importance of fuel treatment for limiting moderate-to-high intensity fire: Findings from comparative fire modelling. Landsc. Ecol. 32 , 1473–1483 (2017).

Jolly, W. M. et al. Climate-induced variations in global wildfire danger from 1979 to 2013. Nat. Commun. 6 , 7537 (2015).

Clarke, H. & Evans, J. P. Exploring the future change space for fire weather in southeast Australia. Theor. Appl. Climatol. 136 , 513–527 (2018).

Bradstock, R. A. et al. Wildfires, fuel treatment and risk mitigation in Australian eucalypt forests: Insights from landscape-scale simulation. J. Environ. Manag. 105 , 66–75 (2012).

King, K. J., Cary, G. J., Bradstock, R. A. & Marsden-Smedley, J. B. Contrasting fire responses to climate and management: Insights from two Australian ecosystems. Glob. Change Biol. 19 , 1223–1235 (2013).

Clarke, H. et al. Climate change effects on the frequency, seasonality and interannual variability of suitable prescribed burning weather conditions in southeastern Australia. Agric. For. Meteorol. 271 , 148–157 (2019).

Kupfer, J. A., Terando, A. J., Gao, P., Teske, C. & Hiers, J. K. Climate change projected to reduce prescribed burning opportunities in the south-eastern United States. Int. J. Wildland Fire 29 , 764–778 (2020).

Abram, N. J. et al. Connections of climate change and variability to large and extreme forest fires in southeast Australia. Commun. Earth Environ. 2 , 8. https://doi.org/10.1038/s43247-020-00065-8 (2021).

Murphy, B. P. et al. Fire regimes of Australia: A pyrogeographic model system. J. Biogeogr. 40 , 1048–1058 (2013).

Tolhurst, K., Shields, B. & Chong, D. PHOENIX: development and application of a bushfire risk-management tool. Aust. J. Emerg. Manag. 23 , 47–54 (2008).

Knight, I. & Coleman, J. A fire perimeter expansion algorithm-based on Huygens wavelet propagation. Int. J. Wildland Fire. 3 , 73–84 (1993).

McArthur, A. G. Fire behaviour in eucalypt forests. In Leaflet 107 (Commonwealth of Australia, 1967).

Noble, I., Gill, A. & Bary, G. McArthur’s fire-danger meters expressed as equations. Aust. J. Ecol. 5 , 201–203 (1980).

Cheney, N., Gould, J. & Catchpole, W. R. Prediction of fire spread in grasslands. Int. J. Wildland Fire. 8 , 1–13 (1998).

Hines, F., Tolhurst, K. G., Wilson, A. A. G. & McCarthy, G. J. Overall Fuel Hazard Assessment Guide , 4th edition (Department of Sustainability and Environment, 2010).

Watson, P. J. Fuel Load Dynamics in NSW Vegetation. Part 1: Forests and Grassy Woodlands. Report to the NSW Rural Fire Service (Centre for Environmental Risk Management of Bushfires, 2011).

Clarke, H., Gibson, R., Cirulis, B., Bradstock, R. A. & Penman, T. D. Developing and testing models of the drivers of anthropogenic and lightning-caused ignition in southeastern Australia. J. Environ. Manag. 235 , 34–41 (2019).

Penman, T. et al. Cost-effective prescribed burning solutions vary between landscapes in eastern Australia. Front. For. Glob. Change. https://doi.org/10.3389/ffgc.2020.00079 (2020).

Cruz, M. G. et al. Anatomy of a catastrophic wildfire: The Black Saturday Kilmore East fire in Victoria, Australia. For. Ecol. Manag. 284 , 269–285 (2012).

Tolhurst, K. G. & Chong, D. M. Assessing potential house losses using PHOENIX RapidFire. In Proceedings of Bushfire CRC & Australasian Fire and Emergency Service Authorities Council (AFAC) 2011 Conference Science Day (ed. Thornton, R. P.) 74-76 (Bushfire CRC, 2011).

Harris, S., Anderson, W., Kilinc, M. & Fogarty, L. The relationship between fire behaviour measures and community loss: An exploratory analysis for developing a bushfire severity scale. Nat. Hazards 63 , 391–415 (2012).

Public Sector Mapping Agencies. Geocoded National Address File Database. https://www.psma.com.au/products/g-naf (2016).

Marcot, B. G., Steventon, J. D., Sutherland, G. D. & McCann, R. K. Guidelines for developing and updating Bayesian belief networks applied to ecological modeling and conservation. Can. J. For. Res. 36 , 3063–3074 (2006).

Chen, S. H. & Pollino, C. A. Good practice in Bayesian network modelling. Environ. Model. Softw. 37 , 134–145 (2012).

Penman, T. D., Cirulis, B. & Marcot, B. G. Bayesian decision network modeling for environmental risk management: A wildfire case study. J. Environ. Manag. 270 , 110735. https://doi.org/10.1016/j.jenvman.2020.110735 (2020).

Acknowledgements

We acknowledge the New South Wales Government's Department of Planning & Environment for providing funds to support this research via the NSW Bushfire Risk Management Research Hub. Thank you to the NSW Rural Fire Service and NSW Department of Planning and Environment for providing data. The authors declare no conflicts of interest.

Author information

Authors and Affiliations

Centre for Environmental Risk Management of Bushfires, Centre for Sustainable Ecosystem Solutions, University of Wollongong, Wollongong, NSW, 2522, Australia

Hamish Clarke, Owen Price & Ross Bradstock

NSW Bushfire Risk Management Research Hub, University of Wollongong, Wollongong, NSW, 2522, Australia

Hamish Clarke, Owen Price, Matthias M. Boer & Ross Bradstock

Hawkesbury Institute for the Environment, Western Sydney University, Locked Bag 1797, Penrith, NSW, 2751, Australia

Hamish Clarke, Matthias M. Boer & Ross Bradstock

FLARE Wildfire Research, School of Ecosystem and Forest Sciences, The University of Melbourne, Melbourne, Victoria, 3363, Australia

Hamish Clarke, Brett Cirulis & Trent Penman

NSW Department of Planning and Environment, Science, Economics and Insights Division, Parramatta, NSW, Australia

Ross Bradstock

Contributions

H.C.: conceptualisation, formal analysis, writing—original draft, visualization. B.C.: conceptualisation, software, investigation. T.P.: conceptualisation, methodology, writing—review and editing. O.P.: methodology, writing—review and editing. M.M.B.: methodology, writing—review and editing. R.B.: conceptualization, methodology, writing—review and editing.

Corresponding author

Correspondence to Hamish Clarke .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article

Clarke, H., Cirulis, B., Penman, T. et al. The 2019–2020 Australian forest fires are a harbinger of decreased prescribed burning effectiveness under rising extreme conditions. Sci Rep 12 , 11871 (2022). https://doi.org/10.1038/s41598-022-15262-y

Received : 02 March 2022

Accepted : 21 June 2022

Published : 13 July 2022

DOI : https://doi.org/10.1038/s41598-022-15262-y

This article is cited by

Global impacts of fire regimes on wildland bird diversity.

  • Fátima Arrogante-Funes
  • Inmaculada Aguado
  • Emilio Chuvieco

Fire Ecology (2024)

Vibrant Cities Lab

Urban forests case studies: challenges, potential and success in a dozen cities

There are many challenges facing cities in the 21st century: aging gray infrastructure, social and economic inequality, maxed-out systems and grids, and extensive urban development. With more than 80 percent of the U.S. population now calling urban areas home, finding solutions to these issues that fit within a city’s budgetary constraints, while also enhancing the city for the better, is of paramount importance.

Case Study: The Amazon Rainforest

The Amazon in context

Tropical rainforests are often considered to be the “cradles of biodiversity.” Though they cover only about 6% of the Earth’s land surface, they are home to over 50% of global biodiversity. Rainforests also take in massive amounts of carbon dioxide and release oxygen through photosynthesis, which has given them the nickname “lungs of the planet.” They also store very large amounts of carbon, so cutting and burning their biomass contributes to global climate change. Many modern medicines are derived from rainforest plants, and several very important food crops originated in the rainforest, including bananas, mangos, chocolate, coffee, and sugar cane.

Aerial view of the Amazon tributary

In order to qualify as a tropical rainforest, an area must receive over 250 centimeters of rainfall each year, have an average temperature above 24 degrees Celsius, and never experience frost. The Amazon rainforest in South America is the largest in the world. The second largest is the Congo in central Africa, and other important rainforests can be found in Central America, the Caribbean, and Southeast Asia. Brazil contains about 40% of the world’s remaining tropical rainforest. Its rainforest covers an area of land about two-thirds the size of the continental United States.

There are countless reasons, both anthropocentric and ecocentric, to value rainforests. But they are one of the most threatened types of ecosystems in the world today. It is difficult to estimate exactly how quickly rainforests are being cut down, but estimates range between 50,000 and 170,000 square kilometers per year. Even the most conservative estimates project that if we keep cutting down rainforests at current rates, within about 100 years there will be none left.

How does a rainforest work?

Rainforests are incredibly complex ecosystems, but understanding a few basics about their ecology will help us understand why clear-cutting and fragmentation are such destructive activities for rainforest biodiversity.

trees in the tropical rain forest

High biodiversity in tropical rainforests means that the interrelationships between organisms are very complex. A single tree may house more than 40 different ant species, each of which has a different ecological function and may alter the habitat in distinct and important ways. Ecologists debate about whether systems that have high biodiversity are stable and resilient, like a spider web composed of many strong individual strands, or fragile, like a house of cards. Both metaphors are likely appropriate in some cases. One thing we can be certain of is that it is very difficult in a rainforest system, as in most other ecosystems, to affect just one type of organism. Also, clear-cutting one small area may damage hundreds or thousands of established species interactions that reach beyond the cleared area.

Pollination is a challenge for rainforest trees because there are so many different species, unlike forests in the temperate regions that are often dominated by less than a dozen tree species. One solution is for individual trees to grow close together, making pollination simpler, but this can make that species vulnerable to extinction if the one area where it lives is clear cut. Another strategy is to develop a mutualistic relationship with a long-distance pollinator, like a specific bee or hummingbird species. These pollinators develop mental maps of where each tree of a particular species is located and then travel between them on a sort of “trap-line” that allows trees to pollinate each other. One problem is that if a forest is fragmented then these trap-line connections can be disrupted, and so trees can fail to be pollinated and reproduce even if they haven’t been cut.

The quality of rainforest soils is perhaps the most surprising aspect of their ecology. We might expect a lush rainforest to grow from incredibly rich, fertile soils, but actually, the opposite is true. While some rainforest soils that are derived from volcanic ash or from river deposits can be quite fertile, generally rainforest soils are very poor in nutrients and organic matter. Rainforests hold most of their nutrients in their live vegetation, not in the soil. Their soils do not retain nutrients very well either, which means that existing nutrients quickly “leach” out, carried away by water as it percolates through the soil. Also, soils in rainforests tend to be acidic, which means that it’s difficult for plants to access even the few existing nutrients. The section on slash and burn agriculture in the previous module describes some of the challenges that farmers face when they attempt to grow crops on tropical rainforest soils, but perhaps the most important lesson is that once a rainforest is cut down and cleared away, very little fertility is left to help a forest regrow.

What is driving deforestation in the Amazon?

Many factors contribute to tropical deforestation, but consider this typical set of circumstances and processes that result in rapid and unsustainable rates of deforestation. This story fits well with the historical experience of Brazil and other countries with territory in the Amazon Basin.

Population growth and poverty encourage poor farmers to clear new areas of rainforest, and this clearing is further encouraged by government policies that permit landless peasants to establish legal title to land that they have cleared.

At the same time, international lending institutions like the World Bank provide money to the national government for large-scale projects like mining, construction of dams, new roads, and other infrastructure that directly reduces the forest or makes it easier for farmers to access new areas to clear.

The activities most often encouraging new road development are timber harvesting and mining. Loggers cut out the best timber for domestic use or export, and in the process knock over many other less valuable trees. Those trees are eventually cleared and used for wood pulp, or burned, and the area is converted into cattle pastures. After a few years, the vegetation is sufficiently degraded to make it not profitable to raise cattle, and the land is sold to poor farmers seeking out a subsistence living.

Regardless of how poor farmers get their land, they often are only able to gain a few years of decent crop yields before the poor quality of the soil overwhelms their efforts, and then they are forced to move on to another plot of land. Small-scale farmers also hunt for meat in the remaining fragmented forest areas, which reduces the biodiversity in those areas as well.

Another important factor not mentioned in the scenario above is the clearing of rainforest for industrial agriculture plantations of bananas, pineapples, and sugar cane. These crops are primarily grown for export, and so an additional driver to consider is consumer demand for these crops in countries like the United States.

These cycles of land use, which are driven by poverty and population growth as well as government policies, have led to the rapid loss of tropical rainforests. What is lost in many cases is not simply biodiversity, but also valuable renewable resources that could sustain many generations of humans to come. Efforts to protect rainforests and other areas of high biodiversity are the topic of the next section.

Forest Biodiversity


Case Study: Seeing the Forest for the Trees

What is so important about forests?


Biodiversity and Healthy Forests


The forests of Maine are among the most diverse in North America. They include 14 conifer (cone-bearing) and 52 deciduous (broadleaf) tree species. Since their establishment nearly 6,000 years ago, the composition and extent of Maine's forests have changed as a result of both natural and human-caused events. In the early days of European settlement, much of Maine's forest (68%) was cleared to make way for farming. Since the early 1900s, the forest has regenerated.

Become Involved in Forest Monitoring


Project Learning Tree (PLT) is a multi-disciplinary environmental education program for educators and students in Pre-K through Grade 12. The American Forest Foundation supports Project Learning Tree. Any citizen scientist can use the PLT forest-monitoring techniques illustrated in this chapter. To learn more about Project Learning Tree and forest monitoring techniques consult the Teaching Notes for resource links.

Citing and Terms of Use

Material on this page is offered under a Creative Commons license unless otherwise noted. Initial publication date: August 10, 2010.

Sustainable Landscapes

  • Forest landscape restoration
  • Protected & conserved areas
  • Forest sector transformation & valuation
  • Forests & climate
  • Deforestation- and conversion-free supply chains & governance


Case studies

Lessons from practitioners on conserving forests for nature and people.

Insights from WWF experts on solutions to safeguard forests.

Stories of people and places behind WWF's forest conservation work around the world.

Explore our work through the eyes of our people and partners.


  • Livestock farmers lead the way in implementing sustainable land use practices and reducing deforestation in Peru
  • How Argentina could emerge as a leader in mainstreaming beef free from deforestation
  • Using wood forensic science to deter corruption and illegality in the timber trade
  • Community-based forest monitoring in Colombia
  • Brazil's Amazon Soy Moratorium

Share your experience

Replication of successful approaches and learning lessons from other forest practitioners can make conservation work more impactful. Take a few minutes to capture and share your experience and tips.

Machine Learning and image analysis towards improved energy management in Industry 4.0: a practical case study on quality control

  • Original Article
  • Open access
  • Published: 13 May 2024
  • Volume 17, article number 48 (2024)


  • Mattia Casini 1 ,
  • Paolo De Angelis 1 ,
  • Marco Porrati 2 ,
  • Paolo Vigo 1 ,
  • Matteo Fasano 1 ,
  • Eliodoro Chiavazzo 1 &
  • Luca Bergamasco   ORCID: orcid.org/0000-0001-6130-9544 1  


With the advent of Industry 4.0, Artificial Intelligence (AI) has created a favorable environment for the digitalization of manufacturing and processing, helping industries to automate and optimize operations. In this work, we focus on a practical case study of a brake caliper quality control operation, which is usually accomplished by human inspection and requires a dedicated handling system, with a slow production rate and thus inefficient energy usage. We report on a Machine Learning (ML) methodology, based on Deep Convolutional Neural Networks (D-CNNs), that automatically extracts information from images in order to automate the process. A complete workflow has been developed and tested on the target industrial case. Several D-CNN architectures have been tested to find the best compromise between accuracy and computational demand. The results show that a judicious choice of ML model, properly trained, allows fast and accurate quality control; thus, the proposed workflow could be implemented in an ML-powered version of the considered process. This would eventually enable better management of the available resources, in terms of time consumption and energy usage.



Introduction

An efficient use of energy resources in industry is key for a sustainable future (Bilgen, 2014 ; Ocampo-Martinez et al., 2019 ). The advent of Industry 4.0, and of Artificial Intelligence, have created a favorable context for the digitalisation of manufacturing processes. In this view, Machine Learning (ML) techniques have the potential for assisting industries in a better and smarter usage of the available data, helping to automate and improve operations (Narciso & Martins, 2020 ; Mazzei & Ramjattan, 2022 ). For example, ML tools can be used to analyze sensor data from industrial equipment for predictive maintenance (Carvalho et al., 2019 ; Dalzochio et al., 2020 ), which allows potential failures to be identified in advance, and thus maintenance operations to be planned with reduced downtime. Similarly, energy consumption optimization (Shen et al., 2020 ; Qin et al., 2020 ) can be achieved via ML-enabled analysis of available consumption data, with consequent adjustments of the operating parameters, schedules, or configurations to minimize energy consumption while maintaining an optimal production efficiency. Energy consumption forecasting (Liu et al., 2019 ; Zhang et al., 2018 ) can also be improved, especially in industrial plants relying on renewable energy sources (Bologna et al., 2020 ; Ismail et al., 2021 ), by analysis of historical data on weather patterns and forecasts, to optimize the usage of energy resources, avoid energy peaks, and leverage alternative energy sources or storage systems (Li & Zheng, 2016 ; Ribezzo et al., 2022 ; Fasano et al., 2019 ; Trezza et al., 2022 ; Mishra et al., 2023 ). Finally, ML tools can also serve for fault or anomaly detection (Angelopoulos et al., 2019 ; Md et al., 2022 ), which allows prompt corrective actions to optimize energy usage and prevent energy inefficiencies.
Within this context, ML techniques for image analysis (Casini et al., 2024 ) are also gaining increasing interest (Chen et al., 2023 ), for their application to e.g. materials design and optimization (Choudhury, 2021 ), quality control (Badmos et al., 2020 ), process monitoring (Ho et al., 2021 ), or detection of machine failures by converting time series data from sensors to 2D images (Wen et al., 2017 ).

Incorporating digitalisation and ML techniques into Industry 4.0 has led to significant energy savings (Maggiore et al., 2021 ; Nota et al., 2020 ). Projects adopting these technologies can achieve an average of 15% to 25% improvement in energy efficiency in the processes where they were implemented (Arana-Landín et al., 2023 ). For instance, in predictive maintenance, ML can reduce energy consumption by optimizing the operation of machinery (Agrawal et al., 2023 ; Pan et al., 2024 ). In process optimization, ML algorithms can improve energy efficiency by 10-20% by analyzing and adjusting machine operations for optimal performance, thereby reducing unnecessary energy usage (Leong et al., 2020 ). Furthermore, the implementation of ML algorithms for optimal control can lead to energy savings of 30%, because these systems can make real-time adjustments to production lines, ensuring that machines operate at peak energy efficiency (Rahul & Chiddarwar, 2023 ).

In automotive manufacturing, ML-driven quality control can lead to energy savings by reducing the need for redoing parts or running inefficient production cycles (Vater et al., 2019 ). In high-volume production environments such as consumer electronics, novel computer-based vision models for automated detection and classification of damaged packages from intact packages can speed up operations and reduce waste (Shahin et al., 2023 ). In heavy industries like steel or chemical manufacturing, ML can optimize the energy consumption of large machinery. By predicting the optimal operating conditions and maintenance schedules, these systems can save energy costs (Mypati et al., 2023 ). Compressed air is one of the most energy-intensive processes in manufacturing. ML can optimize the performance of these systems, potentially leading to energy savings by continuously monitoring and adjusting the air compressors for peak efficiency, avoiding energy losses due to leaks or inefficient operation (Benedetti et al., 2019 ). ML can also contribute to reducing energy consumption and minimizing incorrectly produced parts in polymer processing enterprises (Willenbacher et al., 2021 ).

Here we focus on a practical industrial case study of brake caliper processing. In detail, we focus on the quality control operation, which is typically accomplished by human visual inspection and requires a dedicated handling system. This eventually implies a slower production rate and inefficient energy usage. We thus propose the integration of an ML-based system to automatically perform the quality control operation, without the need for a dedicated handling system and thus with reduced operation time. To this end, we rely on ML tools able to analyze and extract information from images, that is, deep convolutional neural networks, D-CNNs (Alzubaidi et al., 2021 ; Chai et al., 2021 ).

figure 1

Sample 3D model (GrabCAD ) of the considered brake caliper: (a) part without defects, and (b) part with three sample defects, namely a scratch, a partially missing letter in the logo, and a circular painting defect (shown by the yellow squares, from left to right respectively)

A complete workflow for the purpose has been developed and tested on a real industrial test case. This includes: dedicated pre-processing of the brake caliper images, their labelling and analysis using two dedicated D-CNN architectures (one for background removal, and one for defect identification), and post-processing and analysis of the neural network output. Several different D-CNN architectures have been tested, in order to find the best model in terms of accuracy and computational demand. The results show that a judicious choice of ML model, properly trained, yields fast and accurate recognition of possible defects. The best-performing models indeed reach over 98% accuracy on the target criteria for quality control, and take only a few seconds to analyze each image. These results make the proposed workflow compliant with typical industrial expectations; therefore, in perspective, it could be implemented in an ML-powered version of the considered industrial problem. This would eventually allow better performance of the manufacturing process and, ultimately, better management of the available resources in terms of time consumption and energy expense.

figure 2

Different neural network architectures: convolutional encoder (a) and encoder-decoder (b)

The industrial quality control process that we target is the visual inspection of manufactured components, to verify the absence of possible defects. For industrial confidentiality reasons, a representative open-source 3D geometry (GrabCAD), similar to the original parts, is shown in Fig. 1 . For illustrative purposes, the clean geometry without defects (Fig.  1 (a)) is compared to the geometry with three sample defects, namely: a scratch on the surface of the brake caliper, a partially missing letter in the logo, and a circular painting defect (highlighted by the yellow squares, from left to right respectively, in Fig.  1 (b)). Note that one or more defects may be present on the geometry, and that other types of defects may also be considered.

Within the industrial production line, this quality control is typically time consuming, and requires a dedicated handling system with the associated slow production rate and energy inefficiencies. Thus, we developed a methodology to achieve an ML-powered version of the control process. The method relies on data analysis and, in particular, on information extraction from images of the brake calipers via Deep Convolutional Neural Networks, D-CNNs (Alzubaidi et al., 2021 ). The designed workflow for defect recognition is implemented in the following two steps: 1) removal of the background from the image of the caliper, in order to reduce noise and irrelevant features in the image, ultimately rendering the algorithms more flexible with respect to the background environment; 2) analysis of the geometry of the caliper to identify the different possible defects. These two serial steps are accomplished via two different and dedicated neural networks, whose architecture is discussed in the next section.

Convolutional Neural Networks (CNNs) are a particular class of deep neural networks for extracting information from images. Feature extraction is accomplished via convolution operations: the algorithm receives an image as input, analyzes it across several (deep) neural layers to identify target features, and provides the obtained information as output (Casini et al., 2024 ). Different output formats can be obtained depending on the chosen network architecture. For a numerical output, such as that required to classify the content of an image (Bhatt et al., 2021 ), e.g. correct or defective caliper in our case, a typical layout involving a convolutional backbone and a fully-connected network can be adopted (see Fig. 2 (a)). On the other hand, if the required output is itself an image, a more complex architecture with a convolutional backbone (encoder) and a deconvolutional head (decoder) can be used (see Fig. 2 (b)).
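As a concrete illustration of the convolution operation at the heart of a CNN, the following minimal pure-Python sketch applies a single hand-written 1x2 gradient kernel to a tiny grayscale image. A real D-CNN stacks many layers of such filters with learned weights, so this is an illustrative toy, not the networks used in this work:

```python
def conv2d(image, kernel):
    """Valid 2D convolution (no padding, stride 1) of a grayscale image,
    both given as nested lists of numbers."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A horizontal-gradient filter responds strongly at the vertical boundary
# between the dark (0) and bright (1) halves of this tiny image.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [[-1, 1]]  # 1x2 gradient kernel
features = conv2d(image, edge_kernel)  # peaks where the intensity changes
```

A trained network learns the kernel values instead of using hand-written ones, and follows each convolution with a non-linearity and pooling.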

As previously introduced, our workflow analyzes the brake calipers in a two-step procedure: first, the background is removed from the input image (e.g. Fig. 1 ); second, the geometry of the caliper is analyzed and the part is classified as acceptable or not, depending on the absence or presence of any defect. Thus, in the first step of the procedure, a dedicated encoder-decoder network (Minaee et al., 2021 ) is adopted to classify the pixels in the input image as brake or background. The output of this model is a new version of the input image, where the background pixels are blacked out. This helps the algorithms in the subsequent analysis achieve better performance, and avoids bias due to possibly different environments in the input image. In the second step of the workflow, a dedicated encoder architecture is adopted. Here, the background-filtered image is fed to the convolutional network, and the geometry of the caliper is analyzed to spot possible defects and thus classify the part as acceptable or not. In this work, both deep learning models are supervised , that is, the algorithms are trained with the help of human-labeled data (LeCun et al., 2015 ). In particular, the first algorithm, for background removal, is fed with the original image as well as with a ground truth (i.e. a binary image, also called a mask , consisting of black and white pixels) which instructs the algorithm to learn which pixels pertain to the brake and which to the background. This task is usually called semantic segmentation in Machine Learning and Deep Learning (Géron, 2022 ). Analogously, the second algorithm is fed with the original image (without the background) along with an associated mask, which instructs the neural network to identify possible defects on the target geometry.
The required pre-processing of the input images, as well as their use for training and validation of the developed algorithms, are explained in the next sections.
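The effect of the background-removal step on a downstream image can be sketched as follows: given the binary mask predicted by the segmentation network, background pixels are blacked out. This is a minimal illustrative sketch with tiny hand-written arrays, not the actual pipeline:

```python
def apply_mask(image, mask):
    """Black out pixels where the binary mask is 0 (background).
    image: H x W nested list of pixel values; mask: H x W nested list of 0/1."""
    return [
        [px if keep else 0 for px, keep in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]

image = [[12, 34], [56, 78]]
mask  = [[0, 1], [1, 0]]   # 1 = caliper pixel, 0 = background
masked = apply_mask(image, mask)
```

The second network then sees only the caliper pixels, so a changing shop-floor background cannot bias the defect classification.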

Image pre-processing

Machine Learning approaches rely on data analysis; thus, the quality of the final results is well known to depend strongly on the amount and quality of the data available for training the algorithms (Banko & Brill, 2001 ; Chen et al., 2021 ). In our case, the input images should be representative of the target analysis and include adequate variability of the possible features, to allow the neural networks to produce the correct output. In this view, the original images should include, e.g., different possible backgrounds, different viewing angles of the considered geometry, and different light exposures (as local light reflections may affect the color of the geometry and thus the analysis). Creating such a dataset for a specific case is not always straightforward; in our case, for example, it would imply a systematic acquisition of a large set of images in many different conditions. This would require, in turn, having real parts exhibiting all the possible target defects, as well as an automatic acquisition system, e.g., a robotic arm with an integrated camera. Given that, in our case, the initial dataset could not be generated on real parts, we chose to generate a well-balanced dataset of images in silico , that is, based on image renderings of the real geometry. The key idea was that, if the rendered geometry is sufficiently close to a real photograph, the algorithms could be trained on artificially-generated images and then tested on a few real ones. This approach, if properly automatized, makes it easy to produce a large number of images in all the different conditions required for the analysis.

In a first step, starting from the CAD file of the brake calipers, we worked manually in the open-source software Blender (Blender ) to modify the material properties and achieve a realistic rendering. After that, defects were generated by means of Boolean (subtraction) operations between the geometry of the brake caliper and ad-hoc geometries for each defect. Fine tuning of the generated defects allowed for a realistic representation of the different defect types. Once the results were satisfactory, we developed an automated Python code for these procedures, to generate the renderings in different conditions. The Python code can: load a given CAD geometry, change the material properties, set different viewing angles for the geometry, add different types of defects (with given size, rotation and location on the geometry of the brake caliper), add a custom background, change the lighting conditions, render the scene, and save it as an image.
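A batch driver of the kind described above can be sketched as follows. The actual Blender (bpy) rendering calls are omitted; the snippet only generates randomized scene configurations (viewing angle, light intensities, defect parameters), and all names and parameter ranges are hypothetical illustrations, not the authors' code:

```python
import random

DEFECT_TYPES = ["scratch", "missing_letter", "paint_spot"]  # illustrative names

def random_scene_config(rng, with_defect):
    """One rendering configuration: viewing angle, lighting, optional defect.
    A real driver would pass such a config to Blender's Python API to render."""
    cfg = {
        "view_angle_deg": (rng.uniform(0, 360), rng.uniform(-30, 30)),
        "daylight": rng.uniform(0.5, 1.5),                 # diffuse natural light
        "artificial": [rng.uniform(0.0, 1.0) for _ in range(2)],  # two lamps
        "defect": None,
    }
    if with_defect:
        cfg["defect"] = {
            "type": rng.choice(DEFECT_TYPES),
            "size": rng.uniform(0.5, 2.0),
            "rotation_deg": rng.uniform(0, 360),
        }
    return cfg

rng = random.Random(42)
# Balanced batch: defects present in half of the renderings
batch = [random_scene_config(rng, i % 2 == 0) for i in range(100)]
```

Seeding the generator makes each batch reproducible, which simplifies debugging of the rendering pipeline.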

In order to make the dataset as varied as possible, we introduced three light sources into the rendering environment: a diffuse natural light to simulate daylight conditions, and two additional artificial lights. The intensity of each light source and the viewing angle were then varied randomly, to mimic different daylight conditions and illuminations of the object. This procedure was designed to provide situations akin to real use, and to make the model invariant to lighting conditions and camera position. Moreover, to provide additional flexibility to the model, the training dataset was virtually expanded using data augmentation (Mumuni & Mumuni, 2022 ), whereby saturation, brightness and contrast were varied randomly during training operations. This procedure substantially increased the number and variety of the images in the training dataset.
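A minimal sketch of the kind of photometric augmentation described above, applied here as brightness and contrast jitter on 8-bit grayscale pixels (the actual implementation and parameter ranges in this work may differ):

```python
import random

def adjust(pixels, brightness=1.0, contrast=1.0):
    """Apply brightness (multiplicative) and contrast (spread around mid-gray
    128) to a flat list of 8-bit grayscale pixels, clamping to [0, 255]."""
    out = []
    for p in pixels:
        v = p * brightness
        v = (v - 128) * contrast + 128
        out.append(max(0, min(255, round(v))))
    return out

def augment(pixels, rng):
    """One randomized augmentation, as applied on the fly during training."""
    return adjust(pixels,
                  brightness=rng.uniform(0.8, 1.2),
                  contrast=rng.uniform(0.8, 1.2))

row = [0, 64, 128, 192, 255]
brighter = adjust(row, brightness=1.2)
```

Because the jitter is drawn anew each epoch, the network effectively never sees the same image twice, which is what makes augmentation a cheap way to enlarge a dataset.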

The developed automated pre-processing steps easily allow for batch generation of thousands of different images to be used for training the neural networks. This possibility is key for proper training, as the variability of the input images allows the models to learn all the possible features and details that may change during real operating conditions.

figure 3

Examples of the ground truth for the two target tasks: background removal (a) and defects recognition (b)

The first tests using this virtual database showed that, although the generated images were very similar to real photographs, the models were not able to properly recognize the target features in the real images. Thus, in an attempt to get closer to a proper set of real images, we decided to adopt a hybrid dataset, where the virtually generated images were mixed with the few available real ones. However, given that some possible defects were missing in the real images, we also manipulated those images to introduce virtual defects on real photographs. The final dataset included more than 4,000 images, of which 90% were rendered and 10% were obtained from real images. To avoid possible bias in the training dataset, defects were present in 50% of the cases in both the rendered and real image sets. Thus, in the overall dataset, the real original images with no defects were 5% of the total.
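The stated composition of the dataset can be checked with a few lines of arithmetic (using 4,000 as a round figure for the dataset size):

```python
total = 4000
rendered = int(total * 0.90)   # 90% rendered images
real = total - rendered        # 10% derived from real photographs

# Defects appear in half of each subset, so real defect-free images
# amount to 10% x 50% = 5% of the whole dataset.
real_no_defect = real // 2
share = real_no_defect / total
```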

Along with the code for the rendering and manipulation of the images, dedicated Python routines were developed to generate the corresponding data labelling for the supervised training of the networks, namely the image masks. In particular, two masks were generated for each input image: one for the background removal operation, and one for the defect identification. In both cases, the masks consist of a binary (i.e. black and white) image where all the pixels of a target feature (i.e. the geometry or defect) are assigned a value of one (white), while all the remaining pixels are set to zero (black). An example of these masks in relation to the geometry in Fig. 1 is shown in Fig. 3 .

All the generated images were then down-sampled, that is, their resolution was reduced to avoid unnecessarily large computational times and (RAM) memory usage, while maintaining the level of detail required for training the neural networks. Finally, the input images and the related masks were split into a mosaic of smaller tiles, to achieve a suitable size for feeding the images to the neural networks with even lower requirements on the RAM memory. All the tiles were processed, and the whole image reconstructed at the end of the process to visualize the overall final results.
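The tiling and reconstruction step can be sketched as follows. This is a minimal version assuming image dimensions divisible by the tile size; the actual routines also handle the masks and the down-sampling:

```python
def split_tiles(image, tile):
    """Split an H x W image (nested lists) into non-overlapping tile x tile
    blocks, returned in row-major order; H and W must be multiples of tile."""
    h, w = len(image), len(image[0])
    return [
        [row[j:j + tile] for row in image[i:i + tile]]
        for i in range(0, h, tile)
        for j in range(0, w, tile)
    ]

def join_tiles(tiles, width, tile):
    """Reassemble tiles (in row-major order) into the full image."""
    per_row = width // tile
    image = []
    for band in range(0, len(tiles), per_row):
        for r in range(tile):
            image.append([px for t in tiles[band:band + per_row] for px in t[r]])
    return image

# Round-trip check on a 4x4 image with values 0..15
image = [[r * 4 + c for c in range(4)] for r in range(4)]
tiles = split_tiles(image, 2)
```

Each tile can be fed to the network independently, and the per-tile outputs are stitched back with `join_tiles` to visualize the full result.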

figure 4

Confusion matrix for accuracy assessment of the neural networks models

Choice of the model

Within the scope of the present application, a wide range of possibly suitable models is available (Chen et al., 2021 ). In general, the choice of the best model for a given problem should be made on a case-by-case basis, considering an acceptable compromise between the achievable accuracy and the computational complexity/cost. Models that are too simple can indeed respond very fast, yet have reduced accuracy. On the other hand, more complex models can generally provide more accurate results, although they typically require larger amounts of data for training, and thus longer computational times and energy expense. Hence, testing has the crucial role of allowing identification of the best trade-off between these two extreme cases. A benchmark for model accuracy can generally be defined in terms of a confusion matrix, where the model response is summarized into the following possibilities: True Positives (TP), True Negatives (TN), False Positives (FP) and False Negatives (FN). This concept is summarized in Fig. 4 . For the background removal, Positive (P) stands for pixels belonging to the brake caliper, while Negative (N) stands for background pixels. For the defect identification model, Positive (P) stands for non-defective geometry, whereas Negative (N) stands for defective geometries. With respect to these two cases, the True/False statements stand for correct or incorrect identification, respectively. The model accuracy can therefore be assessed as (Géron, 2022 ): \(A = (TP + TN) / (TP + TN + FP + FN)\)
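The confusion-matrix accuracy metric is straightforward to compute; the counts below are made-up illustrative numbers, not results from this work:

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of correct predictions: (TP + TN) / (TP + TN + FP + FN)."""
    return (tp + tn) / (tp + tn + fp + fn)

# e.g. out of 1000 samples: 940 TP, 45 TN, 10 FP, 5 FN
acc = accuracy(tp=940, tn=45, fp=10, fn=5)  # 985 correct out of 1000
```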

Based on this metric, the accuracy of different models can be evaluated on a given dataset, where typically 80% of the data is used for training and the remaining 20% for validation. For the defect recognition stage, the following models were tested: VGG-16 (Simonyan & Zisserman, 2014 ), ResNet50, ResNet101, ResNet152 (He et al., 2016 ), Inception V1 (Szegedy et al., 2015 ), Inception V4 and InceptionResNet V2 (Szegedy et al., 2017 ). Details on the assessment procedure for the different models are provided in the Supplementary Information file. For the background removal stage, the DeepLabV3 \(+\) (Chen et al., 2018 ) model was chosen as the first option, and no additional models were tested, as it directly provided satisfactory results in terms of accuracy and processing time. This gives a preliminary indication that, in terms of task complexity, the defect identification stage is more demanding than the background removal operation for the case study at hand. Besides the assessment of accuracy according to, e.g., the metrics discussed above, additional information can generally be collected, such as too low an accuracy (indicating an insufficient amount of training data), possible bias of the models on the data (indicating a poorly balanced training dataset), or other specific issues related to missing representative data in the training dataset (Géron, 2022 ). This information helps both to correctly shape the training dataset, and to gather useful indications for the fine tuning of the model once it has been chosen.
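The 80/20 train/validation split mentioned above can be sketched as a generic shuffled split (not the authors' exact routine):

```python
import random

def train_val_split(items, val_frac=0.2, seed=0):
    """Shuffle a dataset reproducibly and split it into training and
    validation subsets; val_frac is the validation fraction."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_val = int(len(items) * val_frac)
    return items[n_val:], items[:n_val]

# On a 4,000-image dataset this yields 3,200 training and 800 validation items
train, val = train_val_split(range(4000))
```

Shuffling before splitting matters here: rendered and real images would otherwise end up concentrated in one subset.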

Background removal

An initial bias of the model for background removal arose from the color of the original target geometry (red). The model was indeed incorrectly identifying red spots in the background as part of the target geometry. To improve the model's flexibility, and thus its accuracy in identifying the background, the training dataset was expanded using data augmentation (Géron, 2022 ). This technique artificially increases the size of the training dataset by applying various transformations to the available images, with the goal of improving the performance and generalization ability of the models. The approach typically involves applying geometric and/or color transformations to the original images; in our case, to account for different viewing angles of the geometry, different light exposures, and different color reflections and shadowing effects. These improvements of the training dataset proved effective for the background removal operation, with a validation accuracy finally ranging above 99% and a model response time of around 1-2 seconds. An example of the output of this operation for the geometry in Fig.  1 is shown in Fig. 5 .

While the results obtained were satisfactory for the original (red) color of the calipers, we decided to test the model's ability to handle brake calipers of other colors as well. To this end, the model was trained and tested on a grayscale version of the images of the calipers, which completely removes any possible bias of the model toward a specific color. In this case, the validation accuracy of the model still ranged above 99%; this approach thus proved particularly useful for making the model suitable for the background removal operation even on images including calipers of different colors.

figure 5

Target geometry after background removal

Defect recognition

An overview of the performance of the tested models for the defect recognition operation on the original geometry of the caliper is reported in Table 1 (see also the Supplementary Information file for more details on the assessment of the different models). The results report the achieved validation accuracy ( \(A_v\) ) and the number of parameters ( \(N_p\) ), the latter being the total number of trainable parameters in each model (Géron, 2022 ). Here, this quantity is adopted as an indicator of the complexity of each model.

figure 6

Accuracy (a) and loss function (b) curves for the Resnet101 model during training

As the results in Table 1 show, the VGG-16 model was rather imprecise on our dataset, eventually showing underfitting (Géron, 2022 ). Thus, we decided to opt for the ResNet and Inception families of models. Both families proved suitable for handling our dataset, with slightly less accurate results provided by ResNet50 and InceptionV1. The best results were obtained using ResNet101 and InceptionV4, with very high final accuracy and fast processing time (on the order of \(\sim \) 1 second). Finally, the ResNet152 and InceptionResNetV2 models proved slightly too complex, or slower, for our case; they provided excellent results, but with longer response times (on the order of \(\sim \) 3-5 seconds). The response time is indeed affected by the complexity ( \(N_p\) ) of the model itself, and by the hardware used. In our work, GPUs were used for training and testing all the models, and the hardware conditions were kept the same for all models.

Based on the results obtained, the ResNet101 model was chosen as the best solution for our application, in terms of accuracy and reduced complexity. After fine-tuning operations, the accuracy obtained with this model reached nearly 99%, on both the validation and test datasets. The latter includes real target images that the models have never seen before; it can thus be used to test the ability of the models to generalize the information learnt during the training/validation phase.

The trends of the accuracy increase and loss function decrease during training of the Resnet101 model on the original geometry are shown in Fig. 6(a) and (b), respectively. The loss function quantifies the error between the output predicted by the model during training and the actual target values in the dataset. In our case, the loss function is the cross-entropy, minimized with the Adam optimiser (Géron, 2022). The error is expected to decrease during training, which eventually leads to more accurate predictions of the model on previously unseen data. The combination of accuracy and loss function trends, along with other control parameters, is typically monitored to evaluate the training process and avoid, e.g., under- or over-fitting problems (Géron, 2022). As Fig. 6(a) shows, the accuracy experiences a sudden step increase during the very first epochs (an epoch being one complete pass of the model through the training database (Géron, 2022)). The accuracy then increases smoothly with the epochs, until an asymptotic value is reached for both the training and validation curves. These trends can generally be associated with proper training: the high asymptotic accuracy suggests the absence of under-fitting, while the closeness of the two curves suggests the absence of over-fitting. Consistently, Fig. 6(b) shows that the two loss function curves remain close to each other, with a monotonically decreasing trend, which can likewise be interpreted as an indication of proper training of the model.
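The monitoring logic described above can be sketched as a simple heuristic on the recorded loss curves. This is an illustrative reconstruction, not the authors' code; the function name, the tolerance, and the under-fitting cutoff are assumptions:

```python
def diagnose_training(train_loss, val_loss, tol=0.1):
    """Heuristic check of training/validation loss curves.

    Flags over-fitting when the validation loss has risen noticeably
    above its minimum (while training loss keeps shrinking), and
    under-fitting when the training loss itself remains high.
    The thresholds are illustrative assumptions.
    """
    # Over-fitting: final validation loss exceeds its best value by > tol.
    overfit = val_loss[-1] > min(val_loss) * (1 + tol)
    # Under-fitting: training loss never came close to zero.
    underfit = train_loss[-1] > 0.5
    return {"overfit": overfit, "underfit": underfit}
```

Curves such as those in Fig. 6 (close, monotonically decreasing) would yield neither flag.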

Fig. 7: Final results of the defect identification analysis: (a) considered input geometry; (b), (c) and (d) identification of a scratch on the surface, a partially missing logo, and a painting defect, respectively (highlighted in the red frames)

Finally, an example output of the overall analysis is shown in Fig. 7, where the considered input geometry (a) is shown along with the identification of the defects (b), (c) and (d) obtained with the developed protocol. Note that the different defects have been separated into several figures for illustrative purposes; the analysis, however, yields the identification of all defects on one single image. In this work, a binary classification was performed on the considered brake calipers: the output of the models discriminates between defective and non-defective components, based on the presence or absence of any of the considered defects. The fine-tuning of this discrimination is ultimately left to the user's requirements. Indeed, the model outputs the probability (from 0 to 100%) of a defect being present; the discrimination between a defective and a non-defective part thus depends on the user's choice of the acceptance threshold (50% in our case), and stricter or looser criteria can be readily adopted. For particularly complex cases, multiple models may also be used concurrently for the same task, with the final output defined by cross-comparing the results of the different models. As a last remark, note that here we adopted a binary classification based on the presence or absence of any defect; a further classification could also be implemented to distinguish among the different types of defects (multi-class classification) on the brake calipers.
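The user-tunable acceptance threshold described above amounts to a one-line decision rule on the model's output probability. A minimal sketch (the function name is an assumption, the 0.5 default matches the threshold used in the paper):

```python
def classify_part(defect_prob: float, threshold: float = 0.5) -> str:
    """Map the model's defect probability (0..1) to a decision.

    threshold is the user-chosen acceptance level (0.5 in the paper);
    lowering it enforces a stricter criterion, raising it a looser one.
    """
    return "defective" if defect_prob >= threshold else "non-defective"
```

For instance, with a stricter threshold of 0.1, a part scored at 20% defect probability would already be rejected.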

Energy saving

Illustrative scenarios

Given that the proposed tools have not yet been implemented and tested within a real industrial production line, we analyze here three prospective scenarios to provide a practical example of the potential for energy savings in an industrial context. Specifically, we consider a generic brake caliper assembly line formed by 14 stations, as outlined in Table 1 of the work by Burduk and Górnicka ( 2017 ). This assembly line features a critical inspection station dedicated to defect detection, around which we construct three distinct scenarios to compare traditional human-based control operations with a quality control system augmented by the proposed Machine Learning (ML) tools, namely:

First Scenario (S1): Human-Based Inspection. The traditional approach involves a human operator responsible for the inspection tasks.

Second Scenario (S2): Hybrid Inspection. This scenario introduces a hybrid inspection system in which our proposed ML-based automatic detection tool assists the human inspector. The ML tool analyzes the brake calipers and alerts the human inspector only when it encounters difficulties in identifying defects, specifically when the probability of a defect being present or absent falls below a given confidence threshold. This collaborative approach combines the precision of the ML algorithms with the experience of the human inspector, and can be seen as a possible transition between human-based and fully automated quality control.

Third Scenario (S3): Fully Automated Inspection. In the final scenario, we conceive a completely automated defect inspection station powered exclusively by our ML-based detection system. This setup eliminates the need for human intervention, relying entirely on the capabilities of the ML tools to identify defects.
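The escalation rule of the hybrid scenario S2 can be sketched as follows. This is an illustrative reconstruction (function name and structure are assumptions); the 90% confidence threshold is the one used later in the Performance estimate:

```python
def route_inspection(defect_prob: float, confidence_threshold: float = 0.9) -> str:
    """Scenario S2 routing: escalate to the human inspector only when
    the ML model is uncertain, i.e. when neither class reaches the
    confidence threshold (90% in the illustrative scenarios)."""
    confidence = max(defect_prob, 1.0 - defect_prob)
    if confidence < confidence_threshold:
        return "human-review"
    return "defective" if defect_prob >= 0.5 else "non-defective"
```

Under the assumptions of the next subsection, roughly one inspection in ten would take the "human-review" branch.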

For simplicity, we assume that all the stations are aligned in series without buffers, avoiding unnecessary complications in our estimations. To quantify the beneficial effects of implementing ML-based quality control, we adopt the Overall Equipment Effectiveness (OEE) as the primary metric for the analysis. OEE is a comprehensive measure derived from the product of three critical factors (Nota et al., 2020 ): Availability (the ratio of operating time to planned production time); Performance (the ratio of actual output to the theoretical maximum output); and Quality (the ratio of good units to total units produced). In the following, we detail how each of these factors is computed for the various scenarios.

To calculate Availability ( A ), we consider an 8-hour work shift ( \(t_{shift}\) ) with 30 minutes of breaks ( \(t_{break}\) ) during which we assume production stop (except for the fully automated scenario), and 30 minutes of scheduled downtime ( \(t_{sched}\) ) required for machine cleaning and startup procedures. For unscheduled downtime ( \(t_{unsched}\) ), primarily due to machine breakdowns, we assume an average breakdown probability ( \(\rho _{down}\) ) of 5% for each machine, with an average repair time of one hour per incident ( \(t_{down}\) ). Based on these assumptions, since the Availability represents the ratio of run time ( \(t_{run}\) ) to production time ( \(t_{pt}\) ), it can be calculated using the following formula:

\(A = \frac{t_{run}}{t_{pt}} = \frac{t_{pt} - t_{unsched}}{t_{pt}}, \qquad t_{pt} = t_{shift} - t_{break} - t_{sched}\)

with the unscheduled downtime being computed as follows:

\(t_{unsched} = \left[ 1 - \left( 1 - \rho _{down} \right) ^{N} \right] t_{down}\)

where N is the number of machines in the production line and \(1-\left( 1-\rho _{down}\right) ^{N}\) represents the probability that at least one machine breaks during the work shift. For the sake of simplicity, the \(t_{down}\) is assumed constant regardless of the number of failures.

Table  2 presents the numerical values used to calculate the Availability in the three scenarios. In the second scenario, integrating the automated station leads to a decrease in this first factor of the OEE analysis, which can be attributed to the additional station for automated quality control (and its related potential failures), ultimately increasing the estimated unscheduled downtime. In the third scenario, the detrimental effect of the additional station offsets the beneficial effect of the automated quality control in removing the need for production pauses during operator breaks; the Availability of the third scenario is thus substantially equivalent to that of the first one (baseline).

The second factor of the OEE, Performance ( P ), assesses the operational efficiency of the production equipment relative to its maximum designed speed ( \(t_{line}\) ). This evaluation accounts for reductions in cycle speed and minor stoppages, collectively termed speed losses . These losses are challenging to estimate in advance, as performance is typically measured on historical data from the production line. For this analysis, we hypothesize a reasonable estimate of 60 seconds lost to speed losses ( \(t_{losses}\) ) in each work cycle. Although this assumption may appear strong, it will become evident that, within the context of this analysis (particularly regarding the impact of automated inspection on energy savings), the Performance, like the Availability, is only marginally influenced by the introduction of an automated inspection station. To account for the effect of automated inspection on the assembly line speed, we keep the time required by the other 13 stations ( \(t^*_{line}\) ) constant while varying the time allocated for visual inspection ( \(t_{inspect}\) ). According to Burduk and Górnicka ( 2017 ), the total operation time of the production line, excluding inspection, is 1263 seconds, with manual visual inspection taking 38 seconds. For the fully automated third scenario, we assume an inspection time of 5 seconds, which includes the photo collection, pre-processing, ML analysis, and post-processing steps. In the second scenario, we add to the purely automatic time an extra term for the cases in which the confidence of the ML model falls below 90%. We assume this happens once every 10 inspections, a conservative estimate higher than what we observed during model testing; this amounts to adding 10% of the human inspection time to the fully automated time. Thus, once \(t_{losses}\) is known, the Performance can be expressed as follows:

\(P = \frac{t_{line}}{t_{line} + t_{losses}}, \qquad t_{line} = t^*_{line} + t_{inspect}\)
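The Performance estimate can be sketched as below, assuming (as an illustrative reconstruction) that the ideal cycle time is the designed line time and the actual cycle adds the fixed speed losses; the defaults encode the values stated in the text (1263 s for the other stations, 60 s of losses):

```python
def performance(t_inspect, t_line_rest=1263.0, t_losses=60.0):
    """Performance P = ideal cycle time / actual cycle time.

    t_line = t_line_rest + t_inspect (seconds); t_losses is the
    assumed fixed speed loss per work cycle.
    """
    t_line = t_line_rest + t_inspect
    return t_line / (t_line + t_losses)
```

Evaluating it for t_inspect = 38 s (human), 8.8 s (hybrid) and 5 s (automated) shows that P barely changes across the scenarios, as noted in the discussion of Table 3.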

The calculated values of the Performance are presented in Table  3 . The change in inspection time has a negligible impact on this factor, since it does not affect the speed losses (at least, to our knowledge, there is no clear evidence that the introduction of a new inspection station would alter them). Moreover, given the linear layout of the considered production line, the change in inspection time has only a marginal effect on the overall production speed. This approach, however, could potentially bias our scenarios towards always favouring automation. To evaluate this hypothesis, a sensitivity analysis exploring scenarios where the production line operates at a faster pace is discussed in the next subsection.

The last factor, Quality ( Q ), quantifies the ratio of compliant products to the total products manufactured, effectively filtering out items that fail to meet the quality standards due to defects. Given the objective of our automated algorithm, we anticipate this factor of the OEE to be the one most significantly enhanced by the ML-based automated inspection station. To estimate it, we assume a constant defect probability for the production line ( \(\rho _{def}\) ) of 5%. Consequently, the number of defective products ( \(N_{def}\) ) during the work shift is calculated as \(N_{unit} \cdot \rho _{def}\) , where \(N_{unit}\) represents the average number of units (brake calipers) assembled on the production line, defined as:

\(N_{unit} = \frac{t_{run}}{t_{line} + t_{losses}}\)

To quantify the defective units identified, we consider the inspection accuracy ( \(\rho _{acc}\) ): for human visual inspection the typical accuracy is 80% (Sundaram & Zeid, 2023 ), while for the ML-based station we use the accuracy of our best model, i.e. 99%. Additionally, we account for the probability that the station mistakenly flags a caliper as defective even if it is defect-free, i.e. the false negative rate ( \(\rho _{FN}\) ).

In the absence of any reasonable evidence to justify a bias towards one type of mistake over the other, we assume a uniform error distribution for both human and automated inspections, i.e. we set \(\rho ^{H}_{FN} = \rho ^{ML}_{FN} = \rho _{FN} = 50\%\) . The number of final compliant goods ( \(N_{goods}\) ), i.e. the calipers identified as quality-compliant, can then be calculated as:

\(N_{goods} = N_{unit} - N_{detect}\)

where \(N_{detect}\) is the total number of units flagged as defective, comprising TN (true negatives, i.e. correctly identified defective calipers) and FN (false negatives, i.e. defect-free calipers mistakenly flagged as defective). The Quality factor can then be computed as:

\(Q = \frac{N_{goods}}{N_{unit}}\)

Table  4 summarizes the Quality factor calculation, showcasing the substantial improvement brought by the ML-based inspection station due to its higher accuracy compared to human operators.
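The Quality estimate depends on how the inspection errors are modelled. Since the defining equations are not reproduced here, the sketch below assumes one plausible model (a hypothetical reconstruction, not the authors' formulation): defective units are caught with probability \(\rho_{acc}\), defect-free units are wrongly flagged with probability \((1-\rho_{acc})\,\rho_{FN}\), and all flagged units are removed from the shipped goods:

```python
def quality(rho_acc, rho_def=0.05, rho_fn=0.5):
    """Quality Q = N_goods / N_unit under the assumed error model.

    rho_acc:  inspection accuracy (0.80 human, 0.99 ML);
    rho_def:  defect probability of the line (5%);
    rho_fn:   share of errors that are false alarms on good parts (50%).
    """
    caught = rho_acc * rho_def                             # TN / N_unit
    false_alarms = (1 - rho_acc) * rho_fn * (1 - rho_def)  # FN / N_unit
    n_detect = caught + false_alarms                       # flagged fraction
    return 1.0 - n_detect                                  # N_goods / N_unit
```

Under this model, Q rises from about 0.87 for the human inspector to about 0.95 for the ML-based station, i.e. the substantial improvement summarized in Table 4.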

Fig. 8: Overall Equipment Effectiveness (OEE) analysis for the three scenarios (S1: Human-Based Inspection, S2: Hybrid Inspection, S3: Fully Automated Inspection). The height of the bars represents the percentage of the three factors A : Availability, P : Performance, and Q : Quality (left axis). The green bars indicate the OEE value, derived from the product of these three factors. The red line shows the recall rate, i.e. the probability that a defective product is rejected by the client, with values displayed on the right (red) axis

Finally, the Overall Equipment Effectiveness is determined by multiplying the three factors computed above. Additionally, we can estimate the recall rate ( \(\rho _{R}\) ), which reflects the rate at which a customer might reject products. This is derived from the difference between the total number of defective units, \(N_{def}\) , and the number of units correctly identified as defective, TN , and thus indicates the potential for defective brake calipers to bypass the inspection process. Fig.  8 summarizes the outcomes of the three scenarios. It is crucial to note that the scenarios incorporating the automated defect detector, S2 and S3, significantly enhance the Overall Equipment Effectiveness, primarily through substantial improvements in the Quality factor. Among these, the fully automated inspection scenario, S3, emerges as a slightly superior option, thanks to the additional benefit of removing the break pauses and increasing the speed of the line. However, given the several assumptions required for this OEE study, these results should be interpreted as illustrative, and considered primarily as a comparison against the baseline scenario. To analyze the sensitivity of the outlined scenarios to the adopted assumptions, the influence of the line speed and of the human accuracy on the results is investigated in the next subsection.
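The final aggregation can be sketched as follows. The OEE product follows the text directly; recall_rate encodes the assumption, consistent with the text, that the missed defectives are the \((1-\rho_{acc})\) fraction of \(N_{def}\), normalised by \(N_{unit}\) (the normalisation choice is an assumption):

```python
def oee(a, p, q):
    """Overall Equipment Effectiveness as the product of the three factors."""
    return a * p * q

def recall_rate(rho_acc, rho_def=0.05):
    """Fraction of produced units that are defective yet pass inspection
    ((N_def - TN) / N_unit): an estimate of the client rejection rate."""
    return (1 - rho_acc) * rho_def
```

With the human accuracy of 80%, about 1% of produced calipers would reach the client defective; with the 99% ML accuracy, this drops by a factor of twenty, as reflected by the red line in Fig. 8.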

Sensitivity analysis

The scenarios described above are illustrative and based on several simplifying hypotheses. One such hypothesis is that the production chain operates entirely in series, with each station awaiting the arrival of the workpiece from the preceding one, resulting in a relatively slow production rate (1263 seconds per cycle). This setup can differ considerably from reality, where slower operations can be accelerated by installing additional machines in parallel to balance the workload and enhance productivity. Moreover, we used a literature value of 80% for the accuracy of the human visual inspector, as reported by Sundaram and Zeid ( 2023 ); this accuracy, however, can vary significantly with factors such as the experience of the inspector and the defect type.

Fig. 9: Effect of the assembly time of the stations (excluding visual inspection), \(t^*_{line}\) , and of the human inspection accuracy, \(\rho _{acc}\) , on the OEE analysis. Subplot (a) shows the difference between scenario S2 (Hybrid Inspection) and the baseline scenario S1 (Human-Based Inspection), while subplot (b) shows the difference between scenario S3 (Fully Automated Inspection) and the baseline. Red shades indicate the values of \(t^*_{line}\) and \(\rho _{acc}\) where the integration of automated inspection stations can significantly improve the OEE, and blue shades where it may lower the score. The dashed lines denote the break-even points, and the circled points pinpoint the values used in the "Illustrative scenarios" subsection.

A sensitivity analysis on these two factors was conducted to address these variations. The assembly time of the stations (excluding visual inspection), \(t^*_{line}\) , was varied from 60 s to 1500 s, and the human inspection accuracy, \(\rho _{acc}\) , ranged from 50% (akin to a random guesser) to 100% (representing an ideal visual inspector); meanwhile, the other variables were kept fixed.

The comparison of the OEE enhancement of the two ML-based inspection scenarios against the baseline is displayed in the two maps of Fig.  9 . As the figure shows, owing to the high accuracy and rapid response of the proposed automated inspection station, the region where the process may benefit from energy savings in the assembly line (red shades) is significantly larger than the region where its introduction could degrade performance (blue shades). However, the automated inspection could be superfluous or even detrimental in those scenarios where the human accuracy and the assembly speed are both very high, indicating an already highly efficient workflow. In these cases, and particularly for very fast production lines, short quality control times can be expected to be key (beyond accuracy) for the optimization.
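A minimal sketch of one point of the sensitivity map is given below, reusing the simplified factor models assumed in the scenario sketches (times in minutes for Availability, seconds for Performance; the error model for Quality is a hypothetical reconstruction, and the function name is an assumption):

```python
def oee_gain_s3(t_line_rest, rho_acc_human):
    """Delta OEE (S3 - S1) as a function of the two sensitivity
    variables: station assembly time t_line_rest (s, excluding
    inspection) and human inspection accuracy rho_acc_human."""
    def availability(n_machines, breaks_stop):
        # 8 h shift, 30 min scheduled downtime, 30 min breaks,
        # 5% breakdown probability per machine, 60 min repair time.
        t_pt = 480.0 - 30.0 - (30.0 if breaks_stop else 0.0)
        return (t_pt - (1.0 - 0.95 ** n_machines) * 60.0) / t_pt
    def performance(t_inspect):
        t_line = t_line_rest + t_inspect
        return t_line / (t_line + 60.0)   # 60 s speed losses per cycle
    def quality(acc):
        # 5% defect rate, errors split 50/50 (assumed error model).
        return 1.0 - acc * 0.05 - (1.0 - acc) * 0.5 * 0.95
    oee_s1 = availability(14, True) * performance(38.0) * quality(rho_acc_human)
    oee_s3 = availability(15, False) * performance(5.0) * quality(0.99)
    return oee_s3 - oee_s1
```

Sweeping this over a grid of \(t^*_{line}\) and \(\rho_{acc}\) reproduces the qualitative structure of Fig. 9: positive (red) over most of the plane, negative (blue) for very fast lines with near-ideal human inspectors.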

Finally, it is important to remark that the blue region (areas below the dashed break-even lines) might expand if the accuracy of the neural networks for defect detection turns out to be lower when implemented on a real production line. This would indicate the necessity of new rounds of active learning and of an increase in the ratio of real images in the database, to eventually enhance the performance of the ML model.

Conclusions

Industrial quality control of manufactured parts is typically performed by human visual inspection. This usually requires a dedicated handling system and generally results in a slower production rate, with an associated non-optimal use of energy resources. Based on a practical test case of quality control in brake caliper manufacturing, in this work we have reported on a workflow for the integration of Machine Learning methods to automatize the process. The proposed approach relies on image analysis via Deep Convolutional Neural Networks. These models can efficiently extract information from images, thus possibly representing a valuable alternative to human inspection.

The proposed workflow relies on a two-step procedure on the images of the brake calipers: first, the background is removed from the image; second, the geometry is inspected to identify possible defects. These two steps are accomplished by two dedicated neural network models, an encoder-decoder and an encoder network, respectively. Training these neural networks typically requires a large number of images representative of the problem. Given that such a database is not always readily available, we have presented and discussed an alternative methodology for generating the input database using 3D renderings. While integration of the database with real photographs was required for optimal results, this approach allowed fast and flexible generation of a large base of representative images. The pre-processing steps required for feeding the data to the neural networks, and their training, have also been discussed.

Several models have been tested and evaluated, and the best one for the considered case has been identified. The obtained accuracy for defect identification reaches \(\sim \) 99% on the tested cases. Moreover, the response of the models on each image is fast (on the order of a few seconds), which makes them compliant with the most typical industrial expectations.

To provide a practical example of the possible energy savings achievable with the proposed ML-based methodology for quality control, we have analyzed three prospective industrial scenarios: a baseline scenario, where quality control tasks are performed by a human inspector; a hybrid scenario, where the proposed ML automatic detection tool assists the human inspector; and a fully automated scenario, where we envision a completely automated defect inspection. The results show that the proposed tools may help increase the Overall Equipment Effectiveness by up to \(\sim \) 10% with respect to the considered baseline scenario. However, a sensitivity analysis on the speed of the production line and on the accuracy of the human inspector has also shown that the automated inspection could be superfluous or even detrimental where human accuracy and assembly speed are very high. In these cases, reducing the time required for quality control can be expected to be the major controlling parameter (beyond accuracy) for optimization.

Overall, the results show that, with proper tuning, these models may represent a valuable resource for integration into production lines, with positive outcomes on the overall effectiveness and, ultimately, on the use of energy resources. While the practical implementation of the proposed tools can be expected to require contained investments (e.g. a portable camera, a dedicated workstation, and an operator with proper training), in-field tests on a real industrial line would be required to confirm the potential of the proposed technology.

Agrawal, R., Majumdar, A., Kumar, A., & Luthra, S. (2023). Integration of artificial intelligence in sustainable manufacturing: Current status and future opportunities. Operations Management Research, 1–22.

Alzubaidi, L., Zhang, J., Humaidi, A. J., Al-Dujaili, A., Duan, Y., Al-Shamma, O., Santamaría, J., Fadhel, M. A., Al-Amidie, M., & Farhan, L. (2021). Review of deep learning: Concepts, cnn architectures, challenges, applications, future directions. Journal of big Data, 8 , 1–74.


Angelopoulos, A., Michailidis, E. T., Nomikos, N., Trakadas, P., Hatziefremidis, A., Voliotis, S., & Zahariadis, T. (2019). Tackling faults in the industry 4.0 era-a survey of machine—learning solutions and key aspects. Sensors, 20 (1), 109.

Arana-Landín, G., Uriarte-Gallastegi, N., Landeta-Manzano, B., & Laskurain-Iturbe, I. (2023). The contribution of lean management—industry 4.0 technologies to improving energy efficiency. Energies, 16 (5), 2124.

Badmos, O., Kopp, A., Bernthaler, T., & Schneider, G. (2020). Image-based defect detection in lithium-ion battery electrode using convolutional neural networks. Journal of Intelligent Manufacturing, 31 , 885–897. https://doi.org/10.1007/s10845-019-01484-x

Banko, M., & Brill, E. (2001). Scaling to very very large corpora for natural language disambiguation. In Proceedings of the 39th annual meeting of the association for computational linguistics (pp. 26–33).

Benedetti, M., Bonfà, F., Introna, V., Santolamazza, A., & Ubertini, S. (2019). Real time energy performance control for industrial compressed air systems: Methodology and applications. Energies, 12 (20), 3935.

Bhatt, D., Patel, C., Talsania, H., Patel, J., Vaghela, R., Pandya, S., Modi, K., & Ghayvat, H. (2021). Cnn variants for computer vision: History, architecture, application, challenges and future scope. Electronics, 10 (20), 2470.

Bilgen, S. (2014). Structure and environmental impact of global energy consumption. Renewable and Sustainable Energy Reviews, 38 , 890–902.

Blender. (2023). Open-source software. https://www.blender.org/ . Accessed 18 Apr 2023.

Bologna, A., Fasano, M., Bergamasco, L., Morciano, M., Bersani, F., Asinari, P., Meucci, L., & Chiavazzo, E. (2020). Techno-economic analysis of a solar thermal plant for large-scale water pasteurization. Applied Sciences, 10 (14), 4771.

Burduk, A., & Górnicka, D. (2017). Reduction of waste through reorganization of the component shipment logistics. Research in Logistics & Production, 7 (2), 77–90. https://doi.org/10.21008/j.2083-4950.2017.7.2.2

Carvalho, T. P., Soares, F. A., Vita, R., Francisco, R. d. P., Basto, J. P., & Alcalá, S. G. (2019). A systematic literature review of machine learning methods applied to predictive maintenance. Computers & Industrial Engineering, 137 , 106024.

Casini, M., De Angelis, P., Chiavazzo, E., & Bergamasco, L. (2024). Current trends on the use of deep learning methods for image analysis in energy applications. Energy and AI, 15 , 100330. https://doi.org/10.1016/j.egyai.2023.100330

Chai, J., Zeng, H., Li, A., & Ngai, E. W. (2021). Deep learning in computer vision: A critical review of emerging techniques and application scenarios. Machine Learning with Applications, 6 , 100134.

Chen, L. C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H. (2018). Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV) (pp. 801–818).

Chen, L., Li, S., Bai, Q., Yang, J., Jiang, S., & Miao, Y. (2021). Review of image classification algorithms based on convolutional neural networks. Remote Sensing, 13 (22), 4712.

Chen, T., Sampath, V., May, M. C., Shan, S., Jorg, O. J., Aguilar Martín, J. J., Stamer, F., Fantoni, G., Tosello, G., & Calaon, M. (2023). Machine learning in manufacturing towards industry 4.0: From ‘for now’to ‘four-know’. Applied Sciences, 13 (3), 1903. https://doi.org/10.3390/app13031903

Choudhury, A. (2021). The role of machine learning algorithms in materials science: A state of art review on industry 4.0. Archives of Computational Methods in Engineering, 28 (5), 3361–3381. https://doi.org/10.1007/s11831-020-09503-4

Dalzochio, J., Kunst, R., Pignaton, E., Binotto, A., Sanyal, S., Favilla, J., & Barbosa, J. (2020). Machine learning and reasoning for predictive maintenance in industry 4.0: Current status and challenges. Computers in Industry, 123 , 103298.

Fasano, M., Bergamasco, L., Lombardo, A., Zanini, M., Chiavazzo, E., & Asinari, P. (2019). Water/ethanol and 13x zeolite pairs for long-term thermal energy storage at ambient pressure. Frontiers in Energy Research, 7 , 148.

Géron, A. (2022). Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow . O’Reilly Media, Inc.

GrabCAD. (2023). Brake caliper 3D model by Mitulkumar Sakariya from the GrabCAD free library (non-commercial public use). https://grabcad.com/library/brake-caliper-19 . Accessed 18 Apr 2023.

He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770–778).

Ho, S., Zhang, W., Young, W., Buchholz, M., Al Jufout, S., Dajani, K., Bian, L., & Mozumdar, M. (2021). Dlam: Deep learning based real-time porosity prediction for additive manufacturing using thermal images of the melt pool. IEEE Access, 9 , 115100–115114. https://doi.org/10.1109/ACCESS.2021.3105362

Ismail, M. I., Yunus, N. A., & Hashim, H. (2021). Integration of solar heating systems for low-temperature heat demand in food processing industry-a review. Renewable and Sustainable Energy Reviews, 147 , 111192.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521 (7553), 436–444.

Leong, W. D., Teng, S. Y., How, B. S., Ngan, S. L., Abd Rahman, A., Tan, C. P., Ponnambalam, S., & Lam, H. L. (2020). Enhancing the adaptability: Lean and green strategy towards the industry revolution 4.0. Journal of cleaner production, 273 , 122870.

Liu, Z., Wang, X., Zhang, Q., & Huang, C. (2019). Empirical mode decomposition based hybrid ensemble model for electrical energy consumption forecasting of the cement grinding process. Measurement, 138 , 314–324.

Li, G., & Zheng, X. (2016). Thermal energy storage system integration forms for a sustainable future. Renewable and Sustainable Energy Reviews, 62 , 736–757.

Maggiore, S., Realini, A., Zagano, C., & Bazzocchi, F. (2021). Energy efficiency in industry 4.0: Assessing the potential of industry 4.0 to achieve 2030 decarbonisation targets. International Journal of Energy Production and Management, 6 (4), 371–381.

Mazzei, D., & Ramjattan, R. (2022). Machine learning for industry 4.0: A systematic review using deep learning-based topic modelling. Sensors, 22 (22), 8641.

Md, A. Q., Jha, K., Haneef, S., Sivaraman, A. K., & Tee, K. F. (2022). A review on data-driven quality prediction in the production process with machine learning for industry 4.0. Processes, 10 (10), 1966. https://doi.org/10.3390/pr10101966

Minaee, S., Boykov, Y., Porikli, F., Plaza, A., Kehtarnavaz, N., & Terzopoulos, D. (2021). Image segmentation using deep learning: A survey. IEEE transactions on pattern analysis and machine intelligence, 44 (7), 3523–3542.


Mishra, S., Srivastava, R., Muhammad, A., Amit, A., Chiavazzo, E., Fasano, M., & Asinari, P. (2023). The impact of physicochemical features of carbon electrodes on the capacitive performance of supercapacitors: a machine learning approach. Scientific Reports, 13 (1), 6494. https://doi.org/10.1038/s41598-023-33524-1

Mumuni, A., & Mumuni, F. (2022). Data augmentation: A comprehensive survey of modern approaches. Array, 16 , 100258. https://doi.org/10.1016/j.array.2022.100258

Mypati, O., Mukherjee, A., Mishra, D., Pal, S. K., Chakrabarti, P. P., & Pal, A. (2023). A critical review on applications of artificial intelligence in manufacturing. Artificial Intelligence Review, 56 (Suppl 1), 661–768.

Narciso, D. A., & Martins, F. (2020). Application of machine learning tools for energy efficiency in industry: A review. Energy Reports, 6 , 1181–1199.


Acknowledgements

This work has been supported by GEFIT S.p.a.

Open access funding provided by Politecnico di Torino within the CRUI-CARE Agreement.

Author information

Authors and Affiliations

Department of Energy, Politecnico di Torino, Turin, Italy

Mattia Casini, Paolo De Angelis, Paolo Vigo, Matteo Fasano, Eliodoro Chiavazzo & Luca Bergamasco

R &D Department, GEFIT S.p.a., Alessandria, Italy

Marco Porrati


Corresponding author

Correspondence to Luca Bergamasco .

Ethics declarations

Conflict of interest statement.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (pdf 354 KB)

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Casini, M., De Angelis, P., Porrati, M. et al. Machine Learning and image analysis towards improved energy management in Industry 4.0: a practical case study on quality control. Energy Efficiency 17 , 48 (2024). https://doi.org/10.1007/s12053-024-10228-7


Received : 22 July 2023

Accepted : 28 April 2024

Published : 13 May 2024


  • Industry 4.0
  • Energy management
  • Artificial intelligence
  • Machine learning
  • Deep learning
  • Convolutional neural networks
  • Computer vision

Open access | Published: 15 May 2024

Learning together for better health using an evidence-based Learning Health System framework: a case study in stroke

  • Helena Teede
  • Dominique A. Cadilhac
  • Tara Purvis
  • Monique F. Kilkenny
  • Bruce C.V. Campbell
  • Coralie English
  • Alison Johnson
  • Emily Callander
  • Rohan S. Grimley
  • Christopher Levi
  • Sandy Middleton
  • Kelvin Hill
  • Joanne Enticott (ORCID: orcid.org/0000-0002-4480-5690)

BMC Medicine, volume 22, Article number: 198 (2024)


In the context of expanding digital health tools, the health system is ready for Learning Health System (LHS) models. These models, with proper governance and stakeholder engagement, enable the integration of digital infrastructure to provide feedback to all relevant parties including clinicians and consumers on performance against best practice standards, as well as fostering innovation and aligning healthcare with patient needs. The LHS literature primarily includes opinion or consensus-based frameworks and lacks validation or evidence of benefit. Our aim was to outline a rigorously codesigned, evidence-based LHS framework and present a national case study of an LHS-aligned national stroke program that has delivered clinical benefit.

Current core components of a LHS involve capturing evidence from communities and stakeholders (quadrant 1), integrating evidence from research findings (quadrant 2), leveraging evidence from data and practice (quadrant 3), and generating evidence from implementation (quadrant 4) for iterative system-level improvement. The Australian Stroke program was selected as the case study as it provides an exemplar of how an iterative LHS works in practice at a national level encompassing and integrating evidence from all four LHS quadrants. Using this case study, we demonstrate how to apply evidence-based processes to healthcare improvement and embed real-world research for optimising healthcare improvement. We emphasize the transition from research as an endpoint, to research as an enabler and a solution for impact in healthcare improvement.

Conclusions

The Australian Stroke program has nationally improved stroke care since 2007, showcasing the value of integrated LHS-aligned approaches for tangible impact on outcomes. This LHS case study is a practical example for other health conditions and settings to follow suit.

Peer Review reports

Internationally, health systems are facing a crisis, driven by an ageing population, increasing complexity, multi-morbidity, rapidly advancing health technology and rising costs that threaten sustainability and mandate transformation and improvement [ 1 , 2 ]. Although research has generated solutions to healthcare challenges, and the advent of big data and digital health holds great promise, entrenched siloes and poor integration of knowledge generation, knowledge implementation and healthcare delivery between stakeholders, curtails momentum towards, and consistent attainment of, evidence-and value-based care [ 3 ]. This is compounded by the short supply of research and innovation leadership within the healthcare sector, and poorly integrated and often inaccessible health data systems, which have crippled the potential to deliver on digital-driven innovation [ 4 ]. Current approaches to healthcare improvement are also often isolated with limited sustainability, scale-up and impact [ 5 ].

Evidence suggests that integration and partnership across academic and healthcare delivery stakeholders are key to progress; these stakeholders include people with lived experience and their families (referred to here as consumers and community), diverse disciplines (both research and clinical), policy makers and funders. Utilization of evidence from research and evidence from practice, including data from routine care, supported by implementation research, is key to sustainably embedding improvement and optimising health care and outcomes. A strategy to achieve this integration is the Learning Health System (LHS) (Fig. 1) [2, 6, 7, 8]. Although there are numerous publications on LHS approaches [9, 10, 11, 12], many focus on research perspectives and data, and most do not demonstrate tangible healthcare improvement or better health outcomes [6].

figure 1

Monash Learning Health System: The Learn Together for Better Health Framework developed by Monash Partners and Monash University (from Enticott et al. 2021 [7]). Four evidence quadrants: Q1 (orange) is evidence from stakeholders; Q2 (green) is evidence from research; Q3 (light blue) is evidence from data; and Q4 (dark blue) is evidence from implementation and healthcare improvement

In developed nations, it has been estimated that 60% of care provided aligns with the evidence base, 30% is low value and 10% is potentially harmful [13]. In some areas, clinical advances have been rapid and research and evidence have paved the way for dramatic improvement in outcomes, mandating rapid implementation of evidence into healthcare (e.g. polio and COVID-19 vaccines). However, healthcare improvement is challenging and slow [5]. Health systems are highly complex in their design, networks and interacting components, and change is difficult to enact, sustain and scale up [3]. New effective strategies are needed to meet community needs and deliver evidence-based and value-based care, which reorients care from serving the provider, services and system towards serving community needs, based on evidence and quality. Value-based care goes beyond cost to encompass patient and provider experience, quality care and outcomes, efficiency and sustainability [2, 6].

The costs of stroke care are expected to rise rapidly in the next decades, unless improvements in stroke care to reduce the disabling effects of strokes can be successfully developed and implemented [ 14 ]. Here, we briefly describe the Monash LHS framework (Fig.  1 ) [ 2 , 6 , 7 ] and outline an exemplar case in order to demonstrate how to apply evidence-based processes to healthcare improvement and embed real-world research for optimising healthcare. The Australian LHS exemplar in stroke care has driven nationwide improvement in stroke care since 2007.

An evidence-based Learning Health System framework

In Australia, members of this author group (HT, AJ, JE) have rigorously co-developed an evidence-based LHS framework, known simply as the Monash LHS [7]. The Monash LHS was designed to support sustainable, iterative and continuous improvement that delivers robust benefit in clinical outcomes. It was created with national engagement in order to be applicable to Australian settings. Through this rigorous approach, core LHS principles and components have been established (Fig. 1). Evidence shows that people/workforce, culture, standards, governance and resources are all key to an effective LHS [2, 6]. Culture is vital, including trust, transparency, partnership and co-design. Key processes include legally compliant data sharing, linkage and governance, resources, and infrastructure [4]. The Monash LHS integrates disparate and often siloed stakeholders, infrastructure and expertise to ‘Learn Together for Better Health’ [7] (Fig. 1). This integrates (i) evidence from community and stakeholders, including priority areas and outcomes; (ii) evidence from research and guidelines; (iii) evidence from practice (from data) with advanced analytics and benchmarking; and (iv) evidence from implementation science and health economics. Importantly, it starts with the problem and priorities of key stakeholders, including the community, health professionals and services, and creates an iterative learning system to address these. The following case study was chosen as an exemplar of how a Monash LHS-aligned national stroke program has delivered clinical benefit.

Australian Stroke Learning Health System

Internationally, the application of LHS approaches in stroke has resulted in improved stroke care and outcomes [ 12 ]. For example, in Canada a sustained decrease in 30-day in-hospital mortality has been found commensurate with an increase in resources to establish the multifactorial stroke system intervention for stroke treatment and prevention [ 15 ]. Arguably, with rapid advances in evidence and in the context of an ageing population with high cost and care burden and substantive impacts on quality of life, stroke is an area with a need for rapid research translation into evidence-based and value-based healthcare improvement. However, a recent systematic review found that the existing literature had few comprehensive examples of LHS adoption [ 12 ]. Although healthcare improvement systems and approaches were described, less is known about patient-clinician and stakeholder engagement, governance and culture, or embedding of data informatics into everyday practice to inform and drive improvement [ 12 ]. For example, in a recent review of quality improvement collaborations, it was found that although clinical processes in stroke care are improved, their short-term nature means there is uncertainty about sustainability and impacts on patient outcomes [ 16 ]. Table  1 provides the main features of the Australian Stroke LHS based on the four core domains and eight elements of the Learning Together for Better Health Framework described in Fig.  1 . The features are further expanded on in the following sections.

Evidence from stakeholders (LHS quadrant 1, Fig.  1 )

Engagement, partners and priorities.

Within the stroke field, there have been various support mechanisms to facilitate an LHS approach including partnership and broad stakeholder engagement that includes clinical networks and policy makers from different jurisdictions. Since 2008, the Australian Stroke Coalition has been co-led by the Stroke Foundation, a charitable consumer advocacy organisation, and Stroke Society of Australasia a professional society with membership covering academics and multidisciplinary clinician networks, that are collectively working to improve stroke care ( https://australianstrokecoalition.org.au/ ). Surveys, focus groups and workshops have been used for identifying priorities from stakeholders. Recent agreed priorities have been to improve stroke care and strengthen the voice for stroke care at a national ( https://strokefoundation.org.au/ ) and international level ( https://www.world-stroke.org/news-and-blog/news/world-stroke-organization-tackle-gaps-in-access-to-quality-stroke-care ), as well as reduce duplication amongst stakeholders. This activity is built on a foundation and culture of research and innovation embedded within the stroke ‘community of practice’. Consumers, as people with lived experience of stroke are important members of the Australian Stroke Coalition, as well as representatives from different clinical colleges. Consumers also provide critical input to a range of LHS activities via the Stroke Foundation Consumer Council, Stroke Living Guidelines committees, and the Australian Stroke Clinical Registry (AuSCR) Steering Committee (described below).

Evidence from research (LHS quadrant 2, Fig.  1 )

Advancement of the evidence for stroke interventions and synthesis into clinical guidelines.

To implement best practice, it is crucial to distil the large volume of scientific and trial literature into actionable recommendations for clinicians to use in practice [ 24 ]. The first Australian clinical guidelines for acute stroke were produced in 2003 following the increasing evidence emerging for prevention interventions (e.g. carotid endarterectomy, blood pressure lowering), acute medical treatments (intravenous thrombolysis, aspirin within 48 h of ischemic stroke), and optimised hospital management (care in dedicated stroke units by a specialised and coordinated multidisciplinary team) [ 25 ]. Importantly, a number of the innovations were developed, researched and proven effective by key opinion leaders embedded in the Australian stroke care community. In 2005, the clinical guidelines for Stroke Rehabilitation and Recovery [ 26 ] were produced, with subsequent merged guidelines periodically updated. However, the traditional process of periodic guideline updates is challenging for end users when new research can render recommendations redundant and this lack of currency erodes stakeholder trust [ 27 ]. In response to this challenge the Stroke Foundation and Cochrane Australia entered a pioneering project to produce the first electronic ‘living’ guidelines globally [ 20 ]. Major shifts in the evidence for reperfusion therapies (e.g. extended time-window intravenous thrombolysis and endovascular clot retrieval), among other advances, were able to be converted into new recommendations, approved by the Australian National Health and Medical Research Council within a few months of publication. Feedback on this process confirmed the increased use and trust in the guidelines by clinicians. The process informed other living guidelines programs, including the successful COVID-19 clinical guidelines [ 28 ].

However, best practice clinical guideline recommendations are necessary but insufficient for healthcare improvement and nesting these within an LHS with stakeholder partnership, enables implementation via a range of proven methods, including audit and feedback strategies [ 29 ].

Evidence from data and practice (LHS quadrant 3, Fig.  1 )

Data systems and benchmarking: revealing disparities in care between health services.

A national system for standardized stroke data collection was established as the National Stroke Audit program in 2007 by the Stroke Foundation [30], following various state-level programs (e.g. the New South Wales Audit) [31], to identify evidence-practice gaps and prioritise improvement efforts to increase access to stroke units and other acute treatments [32]. The Audit program alternates each year between acute (commencing in 2007) and rehabilitation in-patient services (commencing in 2008). The Audit program provides a ‘deep dive’ on the majority of recommendations in the clinical guidelines, whereby participating hospitals audit up to 40 consecutive patient medical records and respond to a survey about organizational resources to manage stroke. In 2009, the AuSCR was established to provide information on patients managed in acute hospitals based on a small subset of quality processes of care linked to benchmarked reports of performance (Fig. 2) [33]. In this way, high-priority processes of stroke care could be continuously collected and regularly reviewed to guide improvement to care [34]. Moreover, clinical quality registry programs within Australia have shown a meaningful return on investment, attributed to enhanced survival, improvements in quality of life and avoided costs of treatment or hospital stay [35].

figure 2

Example performance report from the Australian Stroke Clinical Registry: average door-to-needle time in providing intravenous thrombolysis by different hospitals in 2021 [ 36 ]. Each bar in the figure represents a single hospital

The Australian Stroke Coalition endorsed the creation of an integrated technological solution for collecting data through a single portal for multiple programs in 2013. In 2015, the Stroke Foundation, AuSCR consortium, and other relevant groups cooperated to design an integrated data management platform (the Australian Stroke Data Tool) to reduce duplication of effort for hospital staff in the collection of overlapping variables in the same patients [ 19 ]. Importantly, a national data dictionary then provided the common data definitions to facilitate standardized data capture. Another important feature of AuSCR is the collection of patient-reported outcome surveys between 90 and 180 days after stroke, and annual linkage with national death records to ascertain survival status [ 33 ]. To support a LHS approach, hospitals that participate in AuSCR have access to a range of real-time performance reports. In efforts to minimize the burden of data collection in the AuSCR, interoperability approaches to import data directly from hospital or state-level managed stroke databases have been established (Fig.  3 ); however, the application has been variable and 41% of hospitals still manually enter all their data.

figure 3

Current status of automated data importing solutions in the Australian Stroke Clinical Registry, 2022, with ‘ n ’ representing the number of hospitals. AuSCR, Australian Stroke Clinical Registry; AuSDaT, Australian Stroke Data Tool; API, Application Programming Interface; ICD, International Classification of Diseases; RedCAP, Research Electronic Data Capture; eMR, electronic medical records

For acute stroke care, the Australian Commission on Quality and Safety in Health Care facilitated the co-design (clinicians, academics, consumers) and publication of the national Acute Stroke Clinical Care Standard in 2015 [ 17 ], and subsequent review [ 18 ]. The indicator set for the Acute Stroke Standard then informed the expansion of the minimum dataset for AuSCR so that hospitals could routinely track their performance. The national Audit program enabled hospitals not involved in the AuSCR to assess their performance every two years against the Acute Stroke Standard. Complementing these efforts, the Stroke Foundation, working with the sector, developed the Acute and Rehabilitation Stroke Services Frameworks to outline the principles, essential elements, models of care and staffing recommendations for stroke services ( https://informme.org.au/guidelines/national-stroke-services-frameworks ). The Frameworks are intended to guide where stroke services should be developed, and monitor their uptake with the organizational survey component of the Audit program.

Evidence from implementation and healthcare improvement (LHS quadrant 4, Fig.  1 )

Research to better utilize and augment data from registries through linkage [ 37 , 38 , 39 , 40 ] and to ensure presentation of hospital or service level data are understood by clinicians has ensured advancement in the field for the Australian Stroke LHS [ 41 ]. Importantly, greater insights into whole patient journeys, before and after a stroke, can now enable exploration of value-based care. The LHS and stroke data platform have enabled focused and time-limited projects to create a better understanding of the quality of care in acute or rehabilitation settings [ 22 , 42 , 43 ]. Within stroke, all the elements of an LHS culminate into the ready availability of benchmarked performance data and support for implementation of strategies to address gaps in care.

Implementation research to grow the evidence base for effective improvement interventions has also been a key pillar in the Australian context. These include multi-component implementation interventions to achieve behaviour change for particular aspects of stroke care, [ 22 , 23 , 44 , 45 ] and real-world approaches to augmenting access to hyperacute interventions in stroke through the use of technology and telehealth [ 46 , 47 , 48 , 49 ]. The evidence from these studies feeds into the living guidelines program and the data collection systems, such as the Audit program or AuSCR, which are then amended to ensure data aligns to recommended care. For example, the use of ‘hyperacute aspirin within the first 48 h of ischemic stroke’ was modified to be ‘hyperacute antiplatelet…’ to incorporate new evidence that other medications or combinations are appropriate to use. Additionally, new datasets have been developed to align with evidence such as the Fever, Sugar, and Swallow variables [ 42 ]. Evidence on improvements in access to best practice care from the acute Audit program [ 50 ] and AuSCR is emerging [ 36 ]. For example, between 2007 and 2017, the odds of receiving intravenous thrombolysis after ischemic stroke increased by 16% 9OR 1.06 95% CI 1.13–1.18) and being managed in a stroke unit by 18% (OR 1.18 95% CI 1.17–1.20). Over this period, the median length of hospital stay for all patients decreased from 6.3 days in 2007 to 5.0 days in 2017 [ 51 ]. When considering the number of additional patients who would receive treatment in 2017 in comparison to 2007 it was estimated that without this additional treatment, over 17,000 healthy years of life would be lost in 2017 (17,786 disability-adjusted life years) [ 51 ]. There is evidence on the cost-effectiveness of different system-focussed strategies to augment treatment access for acute ischemic stroke (e.g. Victorian Stroke Telemedicine program [ 52 ] and Melbourne Mobile Stroke Unit ambulance [ 53 ]). 
Reciprocally, evidence from the national Rehabilitation Audit, where the LHS approach has been less complete or embedded, has shown fewer areas of healthcare improvement over time [ 51 , 54 ].
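As a quick arithmetic check (added here for clarity; not part of the cited analyses), the quoted percentage increases follow directly from the odds ratios, since the relative increase in odds implied by an odds ratio is (OR − 1) × 100%:

```latex
% Relative increase in odds implied by an odds ratio (OR):
\Delta = (\mathrm{OR} - 1)\times 100\%
\qquad\Rightarrow\qquad
(1.16 - 1)\times 100\% = 16\%,
\quad
(1.18 - 1)\times 100\% = 18\%
```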

Within the field of stroke in Australia, there is indirect evidence that the collective efforts to establish the components of an LHS have had an impact. Overall, the age-standardised rate of stroke events has reduced by 27% between 2001 and 2020, from 169 to 124 events per 100,000 population. Substantial declines in mortality rates have been reported since 1980. Commensurate with national clinical guidelines being updated in 2007 and the first National Stroke Audit being undertaken in 2007, the mortality rates for men (37.4 deaths per 100,000) and women (36.1 deaths per 100,000) have declined to 23.8 and 23.9 per 100,000, respectively, in 2021 [55].
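The reported 27% reduction is consistent with the quoted rates (a check added here, not drawn from the source):

```latex
\frac{169 - 124}{169}\times 100\%
\approx 26.6\%
\approx 27\% \text{ fewer stroke events per } 100{,}000 \text{ population}
```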

The LHS is underpinned by the integration of the four quadrants of evidence (from stakeholders, from research and guidelines, from practice, and from implementation), and the core LHS principles have also been addressed. Leadership and governance have been important, and programs have been established to augment workforce training and capacity building in best practice professional development. Medical practitioners are able to undertake courses and mentoring through the Australasian Stroke Academy (http://www.strokeacademy.com.au/), while nurses (and other health professionals) can access teaching modules in stroke care from the Acute Stroke Nurses Education Network (https://asnen.org/). The Association of Neurovascular Clinicians offers distance-accessible education and certification to develop stroke expertise for interdisciplinary professionals, including advanced stroke co-ordinator certification (www.anvc.org). Consumer initiative interventions are also used in the design of the AuSCR Public Summary Annual reports (available at https://auscr.com.au/about/annual-reports/) and consumer-related resources related to the Living Guidelines (https://enableme.org.au/resources).

The important success factors and lessons from stroke as a national exemplar LHS in Australia include leadership, culture, workforce and resources, integrated with (1) established and broad partnerships across the academic-clinical sector divide and stakeholder engagement; (2) the living guidelines program; (3) national data infrastructure, including a national data dictionary that provides the common data framework to support standardized data capture; (4) various implementation strategies, including benchmarking and feedback as well as engagement strategies targeting different levels of the health system; and (5) implementation and improvement research to advance stroke systems of care and reduce unwarranted variation in practice (Fig. 1). Priority opportunities now include advancing interoperability with electronic medical records, an area that all clinical quality registry programs need to address, as well as providing more dynamic and interactive data dashboards tailored to the needs of clinicians and health service executives.

There is a clear mandate to optimise healthcare improvement, with big data offering major opportunities for change. However, we have lacked the approaches to capture evidence from the community and stakeholders, to integrate evidence from research, to capture and leverage data or evidence from practice, and to generate and build on evidence from implementation using iterative system-level improvement. The LHS provides this opportunity and is shown to deliver impact. Here, we have outlined the process applied to generate an evidence-based LHS and provide a leading exemplar in stroke care. This highlights the value of moving from single-focus, isolated approaches to healthcare improvement and the benefit of integration to deliver demonstrable outcomes for our funders and key stakeholders: our community. This work provides insight into strategies that both apply evidence-based processes to healthcare improvement and implement evidence-based practices into care, moving beyond research as an endpoint to research as an enabler that underpins the delivery of better healthcare.

Availability of data and materials

Not applicable

Abbreviations

AuSCR: Australian Stroke Clinical Registry

CI: Confidence interval

LHS: Learning Health System

References

World Health Organization. Delivering quality health services. OECD Publishing; 2018.

Enticott J, Braaf S, Johnson A, Jones A, Teede HJ. Leaders’ perspectives on learning health systems: A qualitative study. BMC Health Serv Res. 2020;20:1087.


Melder A, Robinson T, McLoughlin I, Iedema R, Teede H. An overview of healthcare improvement: Unpacking the complexity for clinicians and managers in a learning health system. Intern Med J. 2020;50:1174–84.


Alberto IRI, Alberto NRI, Ghosh AK, Jain B, Jayakumar S, Martinez-Martin N, et al. The impact of commercial health datasets on medical research and health-care algorithms. Lancet Digit Health. 2023;5:e288–94.


Dixon-Woods M. How to improve healthcare improvement—an essay by Mary Dixon-Woods. BMJ. 2019;367: l5514.

Enticott J, Johnson A, Teede H. Learning health systems using data to drive healthcare improvement and impact: A systematic review. BMC Health Serv Res. 2021;21:200.

Enticott JC, Melder A, Johnson A, Jones A, Shaw T, Keech W, et al. A learning health system framework to operationalize health data to improve quality care: An Australian perspective. Front Med (Lausanne). 2021;8:730021.

Dammery G, Ellis LA, Churruca K, Mahadeva J, Lopez F, Carrigan A, et al. The journey to a learning health system in primary care: A qualitative case study utilising an embedded research approach. BMC Prim Care. 2023;24:22.

Foley T, Horwitz L, Zahran R. The learning healthcare project: Realising the potential of learning health systems. 2021. Available from https://learninghealthcareproject.org/wp-content/uploads/2021/05/LHS2021report.pdf . Accessed Jan 2024.

Institute of Medicine. Best care at lower cost: The path to continuously learning health care in America. Washington: The National Academies Press; 2013.


Zurynski Y, Smith CL, Vedovi A, Ellis LA, Knaggs G, Meulenbroeks I, et al. Mapping the learning health system: A scoping review of current evidence - a white paper. 2020:63

Cadilhac DA, Bravata DM, Bettger J, Mikulik R, Norrving B, Uvere E, et al. Stroke learning health systems: A topical narrative review with case examples. Stroke. 2023;54:1148–59.

Braithwaite J, Glasziou P, Westbrook J. The three numbers you need to know about healthcare: The 60–30-10 challenge. BMC Med. 2020;18:1–8.

Article   Google Scholar  

King D, Wittenberg R, Patel A, Quayyum Z, Berdunov V, Knapp M. The future incidence, prevalence and costs of stroke in the UK. Age Ageing. 2020;49:277–82.

Ganesh A, Lindsay P, Fang J, Kapral MK, Cote R, Joiner I, et al. Integrated systems of stroke care and reduction in 30-day mortality: A retrospective analysis. Neurology. 2016;86:898–904.

Lowther HJ, Harrison J, Hill JE, Gaskins NJ, Lazo KC, Clegg AJ, et al. The effectiveness of quality improvement collaboratives in improving stroke care and the facilitators and barriers to their implementation: A systematic review. Implement Sci. 2021;16:16.

Australian Commission on Safety and Quality in Health Care. Acute stroke clinical care standard. 2015. Available from https://www.safetyandquality.gov.au/our-work/clinical-care-standards/acute-stroke-clinical-care-standard . Accessed Jan 2024.

Australian Commission on Safety and Quality in Health Care. Acute stroke clinical care standard. Sydney: ACSQHC; 2019. Available from https://www.safetyandquality.gov.au/publications-and-resources/resource-library/acute-stroke-clinical-care-standard-evidence-sources . Accessed Jan 2024.

Ryan O, Ghuliani J, Grabsch B, Hill K, G CC, Breen S, et al. Development, implementation, and evaluation of the Australian Stroke Data Tool (AuSDaT): Comprehensive data capturing for multiple uses. Health Inf Manag. 2022:18333583221117184.

English C, Bayley M, Hill K, Langhorne P, Molag M, Ranta A, et al. Bringing stroke clinical guidelines to life. Int J Stroke. 2019;14:337–9.

English C, Hill K, Cadilhac DA, Hackett ML, Lannin NA, Middleton S, et al. Living clinical guidelines for stroke: Updates, challenges and opportunities. Med J Aust. 2022;216:510–4.

Cadilhac DA, Grimley R, Kilkenny MF, Andrew NE, Lannin NA, Hill K, et al. Multicenter, prospective, controlled, before-and-after, quality improvement study (Stroke123) of acute stroke care. Stroke. 2019;50:1525–30.

Cadilhac DA, Marion V, Andrew NE, Breen SJ, Grabsch B, Purvis T, et al. A stepped-wedge cluster-randomized trial to improve adherence to evidence-based practices for acute stroke management. Jt Comm J Qual Patient Saf. 2022.

Elliott J, Lawrence R, Minx JC, Oladapo OT, Ravaud P, Jeppesen BT, et al. Decision makers need constantly updated evidence synthesis. Nature. 2021;600:383–5.

Article   CAS   PubMed   Google Scholar  

National Stroke Foundation. National guidelines for acute stroke management. Melbourne: National Stroke Foundation; 2003.

National Stroke Foundation. Clinical guidelines for stroke rehabilitation and recovery. Melbourne: National Stroke Foundation; 2005.

Phan TG, Thrift A, Cadilhac D, Srikanth V. A plea for the use of systematic review methodology when writing guidelines and timely publication of guidelines. Intern Med J . 2012;42:1369–1371; author reply 1371–1362

Tendal B, Vogel JP, McDonald S, Norris S, Cumpston M, White H, et al. Weekly updates of national living evidence-based guidelines: Methods for the Australian living guidelines for care of people with COVID-19. J Clin Epidemiol. 2021;131:11–21.

Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci. 2012;7:50.

Harris D, Cadilhac D, Hankey GJ, Hillier S, Kilkenny M, Lalor E. National stroke audit: The Australian experience. Clin Audit. 2010;2:25–31.

Cadilhac DA, Purvis T, Kilkenny MF, Longworth M, Mohr K, Pollack M, et al. Evaluation of rural stroke services: Does implementation of coordinators and pathways improve care in rural hospitals? Stroke. 2013;44:2848–53.

Cadilhac DA, Moss KM, Price CJ, Lannin NA, Lim JY, Anderson CS. Pathways to enhancing the quality of stroke care through national data monitoring systems for hospitals. Med J Aust. 2013;199:650–1.

Cadilhac DA, Lannin NA, Anderson CS, Levi CR, Faux S, Price C, et al. Protocol and pilot data for establishing the Australian Stroke Clinical Registry. Int J Stroke. 2010;5:217–26.

Ivers N, Jamtvedt G, Flottorp S, Young J, Odgaard-Jensen J, French S, et al. Audit and feedback: Effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev . 2012

Australian Commission on Safety and Quality in Health Care. Economic evaluation of clinical quality registries. Final report. . 2016:79

Cadilhac DA, Dalli LL, Morrison J, Lester M, Paice K, Moss K, et al. The Australian Stroke Clinical Registry annual report 2021. Melbourne; 2022. Available from https://auscr.com.au/about/annual-reports/ . Accessed 6 May 2024.

Kilkenny MF, Kim J, Andrew NE, Sundararajan V, Thrift AG, Katzenellenbogen JM, et al. Maximising data value and avoiding data waste: A validation study in stroke research. Med J Aust. 2019;210:27–31.

Eliakundu AL, Smith K, Kilkenny MF, Kim J, Bagot KL, Andrew E, et al. Linking data from the Australian Stroke Clinical Registry with ambulance and emergency administrative data in Victoria. Inquiry. 2022;59:469580221102200.

PubMed   Google Scholar  

Andrew NE, Kim J, Cadilhac DA, Sundararajan V, Thrift AG, Churilov L, et al. Protocol for evaluation of enhanced models of primary care in the management of stroke and other chronic disease (PRECISE): A data linkage healthcare evaluation study. Int J Popul Data Sci. 2019;4:1097.

CAS   PubMed   PubMed Central   Google Scholar  

Mosalski S, Shiner CT, Lannin NA, Cadilhac DA, Faux SG, Kim J, et al. Increased relative functional gain and improved stroke outcomes: A linked registry study of the impact of rehabilitation. J Stroke Cerebrovasc Dis. 2021;30: 106015.

Ryan OF, Hancock SL, Marion V, Kelly P, Kilkenny MF, Clissold B, et al. Feedback of aggregate patient-reported outcomes (PROs) data to clinicians and hospital end users: Findings from an Australian codesign workshop process. BMJ Open. 2022;12:e055999.

Grimley RS, Rosbergen IC, Gustafsson L, Horton E, Green T, Cadigan G, et al. Dose and setting of rehabilitation received after stroke in Queensland, Australia: A prospective cohort study. Clin Rehabil. 2020;34:812–23.

Purvis T, Middleton S, Craig LE, Kilkenny MF, Dale S, Hill K, et al. Inclusion of a care bundle for fever, hyperglycaemia and swallow management in a national audit for acute stroke: Evidence of upscale and spread. Implement Sci. 2019;14:87.

Middleton S, McElduff P, Ward J, Grimshaw JM, Dale S, D’Este C, et al. Implementation of evidence-based treatment protocols to manage fever, hyperglycaemia, and swallowing dysfunction in acute stroke (QASC): A cluster randomised controlled trial. Lancet. 2011;378:1699–706.

Middleton S, Dale S, Cheung NW, Cadilhac DA, Grimshaw JM, Levi C, et al. Nurse-initiated acute stroke care in emergency departments. Stroke. 2019:STROKEAHA118020701.

Hood RJ, Maltby S, Keynes A, Kluge MG, Nalivaiko E, Ryan A, et al. Development and pilot implementation of TACTICS VR: A virtual reality-based stroke management workflow training application and training framework. Front Neurol. 2021;12:665808.

Bladin CF, Kim J, Bagot KL, Vu M, Moloczij N, Denisenko S, et al. Improving acute stroke care in regional hospitals: Clinical evaluation of the Victorian Stroke Telemedicine program. Med J Aust. 2020;212:371–7.

Bladin CF, Bagot KL, Vu M, Kim J, Bernard S, Smith K, et al. Real-world, feasibility study to investigate the use of a multidisciplinary app (Pulsara) to improve prehospital communication and timelines for acute stroke/STEMI care. BMJ Open. 2022;12:e052332.

Zhao H, Coote S, Easton D, Langenberg F, Stephenson M, Smith K, et al. Melbourne mobile stroke unit and reperfusion therapy: Greater clinical impact of thrombectomy than thrombolysis. Stroke. 2020;51:922–30.

Purvis T, Cadilhac DA, Hill K, Reyneke M, Olaiya MT, Dalli LL, et al. Twenty years of monitoring acute stroke care in Australia from the national stroke audit program (1999–2019): Achievements and areas of future focus. J Health Serv Res Policy. 2023.

Cadilhac DA, Purvis T, Reyneke M, Dalli LL, Kim J, Kilkenny MF. Evaluation of the national stroke audit program: 20-year report. Melbourne; 2019.

Kim J, Tan E, Gao L, Moodie M, Dewey HM, Bagot KL, et al. Cost-effectiveness of the Victorian Stroke Telemedicine program. Aust Health Rev. 2022;46:294–301.

Kim J, Easton D, Zhao H, Coote S, Sookram G, Smith K, et al. Economic evaluation of the Melbourne mobile stroke unit. Int J Stroke. 2021;16:466–75.

Stroke Foundation. National stroke audit – rehabilitation services report 2020. Melbourne; 2020.

Australian Institute of Health and Welfare. Heart, stroke and vascular disease: Australian facts. 2023. Webpage https://www.aihw.gov.au/reports/heart-stroke-vascular-diseases/hsvd-facts/contents/about (accessed Jan 2024).

Download references



COMMENTS

  1. Forests are the Best-Case Studies for Economic Excellence

    Forests are the Best-Case Studies for Economic Excellence. 21 Sep 2022. 10 min read. The ecosystem of India's forests has a lot to teach the nation's economy. An ecosystem includes both the economy and the forest. While the economy is a man-made system with the flow of money, supply chain, and distribution, the forest is a natural ecosystem ...

  2. Forests are the best case studies for economic excellence

    This essay explores how economic systems can embody and operate on the virtues that forests possess, along with a few case studies that demonstrate the excellence an economy can achieve by emulating forests. Firstly, forests are characterized by their resilience—the ability to recover and bounce back from challenging situations.


  4. PDF Background Analytical Study Forests, inclusive and sustainable economic

    The studies are: (a) Forests and climate change; (b) Forests, inclusive and sustainable economic growth and employment; and (c) Forests, peaceful and inclusive societies,

  5. A global analysis of the social and environmental outcomes of ...

    Recent analyses have sought to assess livelihood and forest outcomes of CFM interventions across a number of case studies or at a national scale [9,10,11,12,13,14], but these studies provide only ...

  6. Investing in Forests: The Business Case

    Investing in Forests: The Business Case. Forest destruction and degradation are accelerating the severe climate and nature crises facing the world. Halting business practices that contribute to this degradation is a vital priority, and investment in forest conservation and restoration is urgently needed. Investing in forests fulfils ...

  7. Economic Contributions from Conserved Forests: Four Case Studies of the

    The Forest Legacy Program (FLP) is administered by the USDA Forest Service to protect historic forest uses and intact working forest landscapes. This study quantified economic activities on FLP land in four areas to assess how these activities contribute to the economy of the multistate region in which the projects are located.

  8. Forest-linked livelihoods in a globalized world

    Understanding how the five trends noted above affect forests and livelihoods will require expanding substantially on household- and community-level case studies (or collections of case studies) to ...

  9. The 2019-2020 Australian forest fires are a harbinger of ...

    In each case study landscape, 1000 ignition locations were selected based on an empirical model developed and tested for similar forest types [56]. Individual fires were ignited at 11:00 h local ...

  10. Working Woods: A Case Study of Sustainable Forest Management on Vermont

    Sustainable management of their woodlots could provide social and economic benefits for generations. We examined sustainable forest management across four counties in Vermont by evaluating the use of silvicultural practices and best management practices on 59 recently harvested, family-owned properties with at least 25 acres of timberland.

  11. Restoring Tropical Forests: Lessons Learned from Case Studies on Three

    Where restoration sites are close to forest remnants, the framework species method works well. Framework tree species may be planted to complement ANR in small nuclei (case study 1), in larger plots (case study 2) or to form wildlife corridors (case study 3), depending on local ecological and economic conditions.

  12. Diversifying Forest Landscape Management—A Case Study of a Shift from

    Natural forests have many ecological, economic and other values, and sustaining them is a challenge for policy makers and forest managers. Conventional approaches to forest management such as those based on maximum sustained yield principles disregard fundamental tenets of ecological sustainability and often fail. Here we describe the failure of a highly regulated approach to forest management ...

  13. Urban Forests Case Studies: Challenges, Potential and Success in a

    There are many challenges facing cities in the 21st century: aging gray infrastructures, social and economic inequality, maxed out systems and grids, extensive urban development. With more than 80 percent of the U.S. population now calling urban areas home, finding solutions to these issues that fit within a city's budgetary constraints, while also enhancing the city for the better, is of ...

  14. Case Study: The Amazon Rainforest

    Tropical rainforests are often considered to be the "cradles of biodiversity". Though they cover only about 6% of the Earth's land surface, they are home to over 50% of global biodiversity. Rainforests also take in massive amounts of carbon dioxide and release oxygen through photosynthesis, which has also given them the nickname "lungs ...

  15. Case Studies

    18. Case Studies on Forest Biological Diversity - New Zealand. Spellerberg, I.F., and Sawyer, J.W.D. 19. Case Study on the Ecosystem Approach to Sustainable Forest Management - Conservation and Use of Forest Genetic Resources in the UK. United Kingdom of Great Britain and Northern Ireland.

  16. Case studies

    The case studies range in size from small woodlands of three hectares to large forests of nearly 70,000 hectares, and many of the forests can be visited as indicated on the map. 1. Climate-ready forestry at Queen Elizabeth Forest Park.

  17. Case Study

    A highly diverse, and therefore healthy, forest is more resilient to environmental change, such as invasive species and catastrophic fire, as well as the threats of global climate change and human encroachment. The forests of Maine are among the most diverse in North America. They include 14 conifer, or cone bearing, and 52 deciduous, or ...

  18. Case Studies

    Share your experience. Replication of successful approaches and learning lessons from other forest practitioners can make conservation work more impactful. Take a few minutes to capture and share your experience and tips. Insights and lessons from practitioners on conserving forests for nature and people.

  19. PDF Learning from existing projects

    A review was undertaken to explore barriers and opportunities to large-scale nature restoration and rewilding projects and help identify what works and issues encountered. The review included ten case studies in the UK and Norway in rural and coastal contexts. Key findings. The case studies showed a range of motivations for large-scale nature ...


  21. Case studies

    Case studies. Real-life examples of where green infrastructure has been used to provide social, economic, environmental and ecological benefits. ... Turning the Chicago Urban forest climate project into action on the ground (PDF-101K). Improving air quality ...

  22. Community-led Forest Conservation and Restoration in India

    Community-led Forest Conservation and Restoration in India. Case studies on the impact of rights recognition on ecological and socio-economic outcomes.

  23. Insights IAS

    Insights Weekly Essay Challenges 2022 - Week 92: Forests are the best case studies for economic excellence. 18 September 2022. Write an essay on the following topic in not more than 1000-1200 words.

  24. Forests

    Forests play a crucial role in South Korea's carbon neutrality goal and require sustainable management strategies to overcome age-class imbalances. The Generic Carbon Budget Model (GCBM) offers a spatially explicit approach to simulate carbon dynamics at a regional scale. In this study, we utilized the GCBM to analyze the carbon budget of forests in South Korea and produce spatiotemporal ...

  25. The feasibility of adding wood quality traits as selection ...

    Pinus pinaster is a very important species for the Galician wood industry. A genetic breeding program was started in the 1980s to select plus trees based on growth and straightness. In this study, we estimated genetic parameters, juvenile-mature correlations and genetic gains in basic density (BD) and the dynamic modulus of elasticity (MOEd) in Galician breeding families, as well as their ...
