
Review article: Use of New Approach Methodologies (NAMs) to Meet Regulatory Requirements for the Assessment of Industrial Chemicals and Pesticides for Effects on Human Health

www.frontiersin.org

  • 1 PETA Science Consortium International e.V., Stuttgart, Germany
  • 2 Safe Environments Directorate, Healthy Environments and Consumer Safety Branch, Health Canada, Ottawa, ON, Canada
  • 3 Pest Management Regulatory Agency, Health Canada, Ottawa, ON, Canada
  • 4 Corteva Agriscience, Indianapolis, IN, United States
  • 5 Office of Pollution Prevention and Toxics, US Environmental Protection Agency, Washington, DC, United States
  • 6 Scientific and Regulatory Affairs, JT International SA, Geneva, Switzerland
  • 7 Bergeson & Campbell PC, Washington, DC, United States
  • 8 Office of Pesticide Programs, US Environmental Protection Agency, Washington, DC, United States

New approach methodologies (NAMs) are increasingly being used for regulatory decision making by agencies worldwide because of their potential to reliably and efficiently produce information that is fit for purpose while reducing animal use. This article summarizes the ability to use NAMs for the assessment of human health effects of industrial chemicals and pesticides within the United States, Canada, and European Union regulatory frameworks. While all regulations include some flexibility to allow for the use of NAMs, the implementation of this flexibility varies across product type and regulatory scheme. This article provides an overview of various agencies’ guidelines and strategic plans on the use of NAMs, and specific examples of the successful application of NAMs to meet regulatory requirements. It also summarizes intra- and inter-agency collaborations that strengthen scientific, regulatory, and public confidence in NAMs, thereby fostering their global use as reliable and relevant tools for toxicological evaluations. Ultimately, understanding the current regulatory landscape helps inform the scientific community on the steps needed to further advance timely uptake of approaches that best protect human health and the environment.

1 Introduction

Regulatory agencies are tasked with ensuring protection of human health and the environment, and they implement various processes for achieving this goal. Legal frameworks that do not require upfront toxicological testing have relied heavily on chemical evaluations using analogue read-across and grouping based on chemical categories, while others with upfront testing requirements have relied on prescribed checklists of toxicity tests, often using animals to fulfill the required testing. However, scientific advancements have led to investments in the development, implementation, and acceptance of reliable and relevant new approach methodologies (NAMs). NAMs are defined as any technology, methodology, approach, or combination thereof that can provide information on chemical hazard and risk assessment without the use of animals, including in silico, in chemico, in vitro, and ex vivo approaches (ECHA, 2016b; EPA, 2018d). NAMs are not necessarily newly developed methods; rather, it is their application to regulatory decision making, or their replacement of a conventional testing requirement, that is new.

Regulatory agencies worldwide have recognized the importance of the timely uptake of fit-for-purpose NAMs for hazard and risk assessment and are introducing flexible, efficient, and scientifically sound processes to establish confidence in the use of NAMs for regulatory decision-making (van der Zalm et al., 2022; Ingenbleek et al., 2020). The use of NAMs has been prioritized because of their ability to efficiently generate information that, once established to be as reliable and relevant as, or more so than, the conventional testing requirement, may be used to make regulatory decisions that protect human health. NAMs can mimic human biology and provide mechanistic information about how a chemical may cause toxicity in humans. They can also be used to inform population variability, for example, by rapidly identifying susceptible subpopulations from potential exposures in fence line communities or workers, and by allowing for the consideration of individualized health risks and the generation of data tailored to people with pre-existing conditions or those more sensitive to certain chemicals (EPA, 2020e).

This article describes opportunities for and examples of the use of NAMs in regulatory submissions for industrial chemicals and pesticides in the United States (US), Canada, and the European Union (EU). For industrial chemicals, it includes the US Environmental Protection Agency (EPA)’s Office of Pollution Prevention and Toxics (OPPT), the US Consumer Products Safety Commission (CPSC), Health Canada (HC)’s Healthy Environments and Consumer Safety Branch (HECSB), and the European Chemicals Agency (ECHA). For pesticides and plant protection products (PPP), it highlights the EPA’s Office of Pesticide Programs (OPP), HC’s Pest Management Regulatory Agency (PMRA), and the European Food Safety Authority (EFSA). This article also provides examples of collaborations, across sectors and borders, to build scientific, regulatory, and public confidence in the use of NAMs for the protection of human health, and to reach the ultimate goal of global acceptance. Tables 1 and 2 summarize some of the guidance, strategic plans, and other helpful documentation related to the implementation of NAMs. While this article addresses the assessment of human health effects of industrial chemicals and pesticides in the US, Canada, and the EU, similar collaborative efforts and opportunities to use NAMs in regulatory submissions exist in other sectors and countries. Furthermore, many of the discussed actions and efforts also likely apply to other types of chemicals and to ecotoxicological effects.


TABLE 1. US, Canada, and EU: industrial chemicals and household products.


TABLE 2. US, Canada, and EU: pesticides and plant protection products.

2 Overarching activities to advance the implementation of NAMs

2.1 International collaboration

The Organisation for Economic Co-operation and Development (OECD) publishes guidelines for the assessment of chemical effects on human health and the environment. Under the mutual acceptance of data (MAD) agreement among the 38 OECD member countries, which aims to reduce duplicate testing, “…data generated in the testing of chemicals in an OECD member country in accordance with OECD Test Guidelines (TG) and OECD Principles of Good Laboratory Practice (GLP), shall be accepted in other member countries” (OECD, 2019). A portion of the nearly 100 OECD test guidelines describe in chemico, in vitro, or ex vivo methods that are accepted by certain regulatory agencies for the testing of various types of chemicals. At their discretion, agencies can decide which OECD test guidelines to require and whether to accept non-OECD guideline methods (OECD, 2019). Building toward regulatory implementation of non-guideline methods, parallel OECD efforts aim to advance best practices, guidance, and data integration and evaluation frameworks such as Integrated Approaches to Testing and Assessment (IATA) and Adverse Outcome Pathways (AOPs).

The International Cooperation on Alternative Test Methods (ICATM) was originally established in 2009 by the US Interagency Coordinating Committee for the Validation of Alternative Methods (ICCVAM), HC, the EU Reference Laboratory for alternatives to animal testing (EURL ECVAM), and the Japanese Center for the Validation of Alternative Methods (JaCVAM) to facilitate cooperation among national validation organizations. Since its establishment, Korea (KoCVAM) has signed the agreement, and China, Brazil (BraCVAM), and Taiwan participate in ICATM activities. In 2019, Canada established the Canadian Centre for the Validation of Alternative Methods (CaCVAM). Each group works in-country and collaboratively to advance NAMs. For example, the Tracking System for Alternative Methods (TSAR), an overview of non-animal methods that have been proposed for regulatory safety or efficacy testing of chemicals or biological agents, was established and is provided by EURL ECVAM (EURL ECVAM, n.d.).

In 2016, ECHA organized a workshop on NAMs in Regulatory Science, which was attended by 300 stakeholders to discuss the use of NAMs for regulatory decision making (ECHA, 2016b). Since 2016, EPA, HC, and ECHA have held workshops to discuss the development and application of NAMs for chemical assessment as part of an international government-to-government initiative titled “Accelerating the Pace of Chemical Risk Assessment” (APCRA) (EPA, 2021a). EPA and HC further collaborated through the North American Free Trade Agreement (NAFTA; in 2020, NAFTA was replaced by the US-Mexico-Canada Agreement (USMCA)) Technical Working Group (TWG) on Pesticides and through the Canada-US Regulatory Co-operation Council (RCC). The RCC was a regulatory partnership between the pesticide-regulating departments and offices of HC and EPA that facilitated the alignment of both countries’ regulatory approaches while advancing efforts to reduce and replace animal tests (ITA, n.d.; HC, 2020). The NAFTA TWG on Pesticides and the RCC included specific work plans and priority areas along with accountability for deliverables (NAFTA TWG, 2016).

The development and implementation of NAMs within regulatory agencies relies heavily on collaboration with a variety of stakeholders, including other offices and departments within the same agency, other national and international agencies, as well as industry representatives, method developers, academics, and non-profit/non-governmental organizations. For example, within EPA, there is substantial cross-talk between OPP and OPPT (both of which are part of the Office of Chemical Safety and Pollution Prevention (OCSPP)) as well as with the Office of Research and Development (ORD). Agencies also consult with external peer-review panels, such as science advisory boards or committees, which provide independent scientific expertise on various topics. The exchange with external stakeholders provides diverse perspectives and experiences with different NAMs. Several of these collaborations have led to journal publications, presentations at national and international meetings, and webinars. For example, since 2018, EPA has partnered with PETA Science Consortium International e.V. and the Physicians Committee for Responsible Medicine to host a webinar series on the “Use of New Approach Methodologies (NAMs) in Risk Assessment,” which brings together expert speakers and attendees from around the world to discuss the implementation of NAMs (PSCI, n.d.). EPA’s OCSPP and ORD also held conferences on the state of the science for using NAMs in 2019 and 2020 and are currently planning the next conference for October 2022 (EPA, 2019a; EPA, 2020b).

2.2 National roadmaps or work plans to guide and facilitate the implementation of NAMs

2.2.1 United States

Several US agencies have roadmaps or work plans to guide and facilitate the implementation of NAMs for testing industrial chemicals or pesticides. For example, following publication of the EPA-commissioned National Research Council (NRC) report titled “Toxicity Testing in the 21st Century: A Vision and A Strategy” (NRC, 2007), EPA released a strategic plan that provided a framework for implementing the NRC’s vision, incorporating new approaches into toxicity testing and risk assessment practices with less reliance on conventional apical approaches (EPA, 2009). Furthermore, in June 2020, EPA’s OCSPP and ORD published a NAM Work Plan (updated in December 2021) that describes primary objectives and strategies for reducing animal testing through the use of NAMs while ensuring protection of human health and the environment (EPA, 2021e). It highlights the importance of communicating, collaborating, providing training on NAMs, establishing confidence in NAMs, and developing metrics for assessing progress.

In 2018, the 16 US federal agencies that comprise ICCVAM (including EPA and CPSC) published a strategic roadmap to serve as a guide for agencies and stakeholders seeking to adopt NAMs for chemical safety and risk assessments (ICCVAM, 2018). The ICCVAM strategic roadmap emphasizes three main components: 1) connecting agency and industry end users with NAM developers to ensure the needs of the end user will be met; 2) using efficient, flexible, and robust practices to establish confidence in NAMs and reducing reliance on animal data to define NAM performance; and 3) encouraging the adoption and use of NAMs by federal agencies and regulated industries. A list of NAMs accepted by US agencies can be found on the website of the US National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM), which supports ICCVAM’s work (NICEATM, 2021).

2.2.2 Canada

The PMRA’s 2016–2021 strategic plan notes how rapidly the regulatory environment is evolving through innovations in science and puts an onus on the Agency to evolve accordingly (HC, 2016d). The strategic plan covers drivers for evolution; the importance of public confidence; the Agency’s vision and mission; and key principles of scientific excellence, innovation, openness and transparency, and organizational and workforce excellence. The plan further identifies strategic enablers, which include building on PMRA’s success in establishing and maintaining effective partnerships with provinces, territories, and other stakeholders both domestically and internationally.

2.2.3 European Union

The EU is a political and economic union of 27 European countries (Member States), and its operation is guaranteed through various legal instruments. Unlike regulations and decisions, which apply automatically and uniformly to all countries as soon as they enter into force, directives require Member States to achieve a certain result by transposing them into national law. In 2010, Directive 2010/63/EU on the protection of animals used for scientific purposes (EU, 2010) was adopted to eliminate disparities between laws, regulations, and administrative provisions of the Member States regarding the protection of animals used for experimental and other scientific purposes. Article 4 states that “wherever possible, a scientifically satisfactory method or testing strategy, not entailing the use of live animals, shall be used instead of a procedure,” which applies to all research purposes including regulatory toxicity testing (EU, 2010). Further, the directive lays the foundation for retrospective analyses of animal experiments, mutual acceptance of data, and the European Commission and Member States’ contribution to the development and validation of NAMs.

In October 2020, the EU Chemicals Strategy for Sustainability (CSS) Towards a Toxic-Free Environment was published (EC, 2020). It identified a need to innovate safety testing and chemical risk assessment to reduce dependency on animal testing while improving the quality, efficiency, and speed of chemical hazard and risk assessments. However, fulfilling the strategy’s additional information requirements may well increase the number of animals used, and it is currently unknown whether the implementation of the CSS will open opportunities for the application of more NAMs.

3 Industrial chemicals

3.1 United States

In the US, industrial chemicals are subject to regulation under the Toxic Substances Control Act (TSCA). TSCA was originally signed into law (15 US Code [USC] §2601 et seq. ) on 11 October 1976 with the intent “[t]o regulate commerce and protect human health and the environment by requiring testing and necessary use restrictions on certain chemical substances, and for other purposes” (Pub. L. 94-469, Oct. 11, 1976). TSCA was significantly amended in 2016 (Pub. L. 114-182, 22 June 2016). EPA is responsible for implementing and administering TSCA (see 15 USC §2601(c)) and OPPT, within EPA’s OCSPP, carries out much of that work.

TSCA provides EPA the authority to regulate new and existing chemical substances under Sections 5 and 6 of TSCA, respectively. Existing chemical substances are those on the TSCA Inventory, either those that were in commerce prior to the enactment of TSCA and grandfathered in, or those that OPPT evaluated as new chemical substances and were subsequently introduced into commerce. Entities that wish to introduce a new chemical substance or an existing chemical substance with a significant new use into commerce must submit a notification to OPPT (i.e., a pre-manufacture notice (PMN) or significant new use notice (SNUN)) or an appropriate exemption application, where an application is required for the exemption (e.g., low volume exemption), prior to manufacturing, including importing, the chemical substance.

Prior to the 2016 Amendments, when entities submitted a new chemical notification, no specific action by EPA was required. If EPA did not take regulatory action on the new chemical substance, the entity was allowed to manufacture the chemical substance at the expiration of the applicable review period (e.g., 90 days for a PMN). For existing chemicals, much of EPA’s TSCA activity focused on data collection, including through Section 8 rules and test rules on chemical substances, including those identified by EPA’s interagency testing committee (ITC). The ITC was established under Section 4(e) of TSCA and was charged with identifying and recommending to the EPA Administrator chemical substances or mixtures that should be tested pursuant to Section 4(a) of TSCA to determine their hazard to human health or the environment. Although allowed by TSCA, EPA’s ability to regulate and restrict the use of existing chemical substances under Section 6 of TSCA was significantly impaired following a 1991 ruling by the US Court of Appeals for the Fifth Circuit (Corrosion Proof Fittings v. EPA, 947 F.2d 1201), which vacated much of EPA’s TSCA Section 6 rule to ban asbestos, a rule that EPA had first announced as an advance notice of proposed rulemaking in 1979.

The above issues with TSCA—namely, new chemical substances being automatically introduced into commerce if the “clock ran out” and EPA’s limited regulation of existing chemical substances under Section 6 of TSCA—garnered Congressional attention, which culminated on 22 June 2016. On that date, then-President Obama signed the Frank R. Lautenberg Chemical Safety for the 21st Century Act into law, thereby amending TSCA (Pub. L. 114-182, 2016). The TSCA amendments placed new requirements on EPA, including requirements to review and publish risk determinations on new chemical substances, prioritize existing chemical substances as either high- or low-priority substances, and perform risk evaluations on those chemical substances identified as high-priority substances. The TSCA amendments also included new requirements for EPA to comply with specific scientific standards for best available science and weight of the scientific evidence (WoE) under Sections 26(h)-(i) of TSCA when carrying out Sections 4, 5, and 6; a new requirement to reduce testing on vertebrate animals under Section 4(h) of TSCA; and a provision giving EPA the authority to require testing on existing chemical substances by order, rather than by rule, under Section 4(a)(1) and (2) of TSCA.

The discussion that follows is focused on EPA’s authority under Section 4(h) to reduce testing on vertebrate animals, EPA’s use of this authority for new and existing chemical substances, and voluntary initiatives by the regulated community that have advanced the understanding and use of NAMs.

3.1.1 General requirements

TSCA does not contain upfront vertebrate toxicity testing requirements, which allows flexibility for the adoption of NAMs. Since the enactment of the TSCA amendments, EPA has used its authority to order testing on existing chemical substances while meeting its requirements under Section 4(h) of TSCA (EPA, 2022c). Section 4(h) includes three primary provisions: (1) the general requirements placed on EPA for reducing and replacing the use of vertebrate animals; (2) the requirements on EPA to promote the development and incorporation of alternative testing methods, including through the development of a strategic plan and a (non-exhaustive) list of NAMs identified by the EPA Administrator; and (3) the requirements on the regulated community to consider non-vertebrate testing methods for voluntary testing where EPA has identified an alternative test method or strategy to develop such information.

3.1.2 Regulatory flexibility

There are several sections of TSCA and its implementing regulations under which EPA may use NAMs to inform its science and risk management decisions. Data generated using NAMs may trigger reporting requirements on the regulated community. For example, under Section 8(e) of TSCA, results generated using NAMs could trigger a substantial-risk reporting obligation, for instance, if the data meet the requirements under one of EPA’s policies, such as the policy on in vitro skin sensitization data. In its “Strategic Plan to Promote the Development and Implementation of Alternative Test Methods Within the TSCA Program,” OPPT lists criteria that provide a starting point for considering the scientific reliability and relevance of NAMs (EPA, 2018d); however, it has yet to issue official guidance to the regulated community on its interpretation of the criteria for accepting NAMs as meeting the scientific standards under Section 26(h) of TSCA. In addition, while OPPT has yet to issue official guidance on the criteria it uses to identify NAMs for inclusion on the list of methods approved by the EPA Administrator, the agency has presented a proposed nomination form, which provides some insight into EPA’s considerations (Simmons and Scarano, 2020).

3.1.3 Implementation of NAMs

OPPT’s activities to implement NAMs have included issuing a “Strategic Plan to Promote the Development and Implementation of Alternative Test Methods Within the TSCA Program” (EPA, 2018d), establishing a list of approved NAMs (EPA, 2018c; EPA, 2019b; EPA, 2021d), and developing a draft policy allowing the use of NAMs for evaluating skin sensitization (EPA, 2018b). The latter is based on EPA’s participation in the development of the OECD guideline for Defined Approaches on Skin Sensitisation (OECD, 2021a). EPA has also performed significant outreach and collaboration to advance its understanding of NAMs, as well as educate the interested community about these technologies.

In March 2022, OPPT and ORD presented the TSCA new chemicals collaborative research effort for public comment (EPA, 2022b). This multi-year research action plan to bring innovative science to the review of new chemicals under TSCA includes: 1) refining chemical categories for read-across; 2) developing and expanding databases containing TSCA chemical information; 3) developing and refining quantitative structure-activity relationship (QSAR) and other predictive models; 4) exploring ways to apply NAMs in risk assessment; and 5) developing a decision support tool that will transparently integrate all data streams into a final risk assessment.

3.1.3.1 Examples of NAM application

Even before the 2016 amendments to TSCA, EPA had established numerous methods for assessing chemical substances. For example, since the 1980s, EPA has used structure-activity relationships (SAR) to assess the potential of new chemical substances to harm aquatic organisms and an expert system to estimate carcinogenic potential (EPA, 1994).

In early 2021, OPPT issued test orders on nine existing chemical substances (EPA, 2022c). For each of the substances, OPPT ordered dermal absorption testing using an in vitro method validated by the OECD (OECD, 2004) instead of animal testing. After consideration of existing scientific information, EPA determined that the in vitro method, which is included on its list of NAMs, could be used. While EPA required the in vitro testing on both human and animal skin, a report analyzing 30 agrochemical formulations has since been published that supports the use of in vitro assays on human skin for human health risk assessment, because such assays are as protective or more protective and are directly relevant to the species of interest (Allen et al., 2021; EPA, 2021f). In reviewing test plans and test data submitted for consideration in lieu of the ordered testing, EPA consulted with the authors of Allen et al. (2021) and subsequently determined that it would be acceptable for the in vitro testing to be conducted on human skin only for the chemicals subject to these particular orders.

The interested community has also been actively developing robust NAMs that can be used for regulatory decision making. For example, an entity performed voluntary in chemico testing on a polymeric substance that OPPT had identified as a potential hazard. The substance was classified as a poorly soluble, low-toxicity substance that, if inhaled, may lead to adverse effects stemming from lung overload. OPPT issued a significant new use rule (SNUR) on this substance, which required any entity to notify EPA (through submission of a SNUN) if the polymer is manufactured, processed, or used as a respirable particle (i.e., <10 μm) (EPA, 2019c). The SNUR listed potentially useful information for inclusion in a SNUN, which consisted of a 90-day subchronic inhalation toxicity study in rats. However, the entity voluntarily undertook an in chemico test in lieu of the in vivo toxicity study. The in chemico test showed that the daily dissolution rate of the polymer in simulated epithelial lung fluid exceeded the anticipated daily exposure concentrations; the polymer was therefore not a hazard concern for lung overload. After evaluating these data, OPPT agreed with the results and issued a final rule revoking the SNUR (EPA, 2020h). These data were subsequently published in the peer-reviewed literature (Ladics et al., 2021).
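The decision logic underlying this kind of lung-overload screen can be sketched as a simple comparison. The function name and all numeric values below are hypothetical illustrations, not data from the cited study:

```python
# Hypothetical sketch of the lung-overload screen described above: if the
# particle's daily dissolution rate in simulated epithelial lung fluid
# exceeds the anticipated daily deposited dose, material is cleared faster
# than it accumulates, so lung overload is not expected.
# All names and values are illustrative, not taken from the cited study.

def lung_overload_concern(daily_dissolution_ug: float,
                          daily_deposition_ug: float) -> bool:
    """Return True when deposition outpaces dissolution (potential overload)."""
    return daily_deposition_ug > daily_dissolution_ug

# Illustrative exposure scenario (micrograms per day):
dissolution = 50.0  # measured in chemico in simulated epithelial lung fluid
deposition = 10.0   # anticipated daily inhaled and deposited dose

print(lung_overload_concern(dissolution, deposition))  # → False
```

In practice the comparison would sit inside a broader weight-of-evidence evaluation rather than act as a single pass/fail gate.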

3.1.4 Consumer products

In addition to the regulation of individual chemical ingredients of household products under TSCA, the Federal Hazardous Substances Act (FHSA) requires appropriate cautionary labeling on certain hazardous household products to alert consumers to the potential hazard(s) that the products may present (15 USC §1261 et seq.). However, the FHSA does not require manufacturers to perform any specific toxicological tests to assess potential hazards (e.g., systemic toxicity, corrosivity, sensitization, or irritation). CPSC is responsible for administering the FHSA and issued guidance on the use of NAMs in 2021 (CPSC, 2022). This document lays out the factors CPSC staff will consider when evaluating NAMs, IATA, and any submitted data used to support FHSA labeling determinations. CPSC’s 2012 Animal Testing Policy (16 Code of Federal Regulations [CFR] Part 1500) strongly encourages manufacturers to find alternatives to animal testing for assessing household products.

3.2 Canada

The Canadian Environmental Protection Act (CEPA, Statutes of Canada [SC] 1999, c.33) provides the legislative framework for industrial substances, including new chemical substances and those currently on the Canadian market (i.e., existing substances on the Domestic Substances List [DSL]), for the protection of the environment, for the well-being of Canadians, and to contribute to sustainable development. The Safe Environments Directorate in the HECSB of Health Canada and Environment and Climate Change Canada are jointly responsible for the regulation of industrial substances under the authority of CEPA.

Existing and new substances have different legal requirements under CEPA. Accordingly, based on respective program areas, the requirements for and use of traditional and NAMs data are considered in various decision contexts including screening, prioritization, and informing risk assessment decisions. Risk assessments consider various types and sources of information, as required or available for new or existing substances respectively, including physico-chemical properties, inherent hazard, biological characteristics, release scenarios, and routes of exposure to determine whether a substance is or may become harmful according to the criteria set out in section 64 of CEPA.

The Chemicals Management Plan (CMP) was introduced in 2006 to, in part, strengthen the integration of chemicals management programs across the Government of Canada (HC, 2022e). Key elements of the CMP have been the assessment of priority existing chemicals from the DSL, identified through Categorization pursuant to obligations under CEPA, and the parallel pre-market assessment of new substances not on the DSL that are notified under the New Substances Notification Regulations made under CEPA.

Under the Existing Substances Risk Assessment Program (ESRAP), approximately 4,300 priority substances were assessed over three phases (2006–2021), requiring the development of novel methodologies and assessment strategies to address data needs as the program evolved from a chemical-by-chemical approach to the assessment of groups and classes of chemicals (HC, 2021b). The limited empirical toxicity data available for many of the priority substances necessitated the implementation of fit-for-purpose approaches, including the use of computational tools and read-across. Further, the use of streamlined approaches (HC, 2018) helped the program more efficiently address substances considered to be of low concern. Building on experiences and achievements from the CMP to date, the Government of Canada continues to expand on the vision for modernization. This shift takes into consideration new scientific information regarding chemicals to support innovative strategies for priority setting and to maintain a flexible, adaptive, and fit-for-purpose approach to risk assessment for managing increasingly diverse and complex substances and mixtures (HC, 2021b; Bhuller et al., 2021).

The New Substances Program (NSP) is responsible for administering the New Substances Notification Regulations (NSNR, Statutory Orders and Regulations [SOR]/2005-247 and SOR/2005-248) of CEPA (HC, 2022f). These regulations ensure that new substances (chemicals, polymers, biochemicals, biopolymers, or living organisms) are not introduced into Canada before undergoing ecological and human health risk assessments, and that any appropriate or required control measures have been taken.

3.2.1 General requirements

Risk assessments conducted under CEPA use a WoE approach while also applying the precautionary principle. For existing substances on the DSL, there are no prescribed data requirements to inform the assessment of a substance to determine whether it is toxic or capable of becoming toxic as defined under Section 64 of CEPA. As such, an essential first step in the risk assessment process is the collection and review of a wide range of hazard and exposure information on each substance or group of substances from a variety of published and unpublished sources, stakeholders, and various databases (HC, 2022d).

The NSNR (Chemicals and Polymers) require information be submitted in a New Substances Notification (NSN) prior to import or manufacture of a new chemical, polymer, biochemical, or biopolymer in Canada. The NSNR (Chemicals and Polymers) also require that a notifier submit all other relevant data in their possession relevant to the assessment. Subsection 15(1) of the NSNR (Chemicals and Polymers) states that conditions and procedures used must be consistent with conditions and procedures set out in the OECD TG that are current at the time the test data are developed, and should comply with GLP.

Information in support of a NSN may be obtained from alternative test protocols, WoE, read-across, as well as from (Q)SARs [calculation or estimation methods (e.g., EPI Suite)]. The NSP may use various NAMs in their risk assessment, and may accept (and has accepted) test data which use NAMs, as discussed in further detail below.

3.2.2 Regulatory flexibility

For existing substances on the DSL under CEPA, there are no set submission requirements prior to an assessment, which inherently presents the need for flexibility and the opportunity to integrate novel approaches. NAM data are often used to support the assessment of the potential for risk from data-poor substances. Because these data-poor substances are unlikely to have required or available guideline studies, NAMs, including computational modelling, in vitro assays, QSAR, and read-across, are used to address data needs, offering an opportunity for a risk-based assessment where this may have been challenging in the past ( HC, 2022a ). For new substances, the NSP supports ongoing NAM development, as well as monitoring studies, to provide information on levels of substances of interest in the environment; both are used to fill risk assessment data gaps. In 2021, the NSP published a draft updated Guidance Document for the Notification and Testing of New Substances: Chemicals and Polymers ( HC, 2021c ). Section 8.4 of this Guidance Document lists examples of accepted test methods, which could in the future include NAMs as they are shown to be scientifically valid. Under the NSNR, alternative approaches will be acceptable when, in the opinion of the NSP, they are determined to provide a scientifically valid measure of the endpoint under investigation that is deemed sufficient for the purposes of the risk assessment. NAM data are evaluated on a case-by-case basis and can form part of the WoE of an assessment.

3.2.3 Implementation of NAMs

Given the paucity of data available for many substances on the market, as well as for new substances, there is a long history of using alternative approaches for hazard identification and characterization in support of new and existing substances risk assessment decisions. Over the last two decades, a variety of NAMs have been used by different program areas to address information gaps for risk assessment. The approaches implemented have been fit-for-purpose and largely determined by the data need, the timeline, the type of chemical(s), and the level of complexity associated with the assessment ( HC, 2016a ). Most notably for existing substances, in silico models, (Q)SAR, and read-across have been the most widely used methods, with the progressive adoption and expanded use of computational toxicology and automated approaches ongoing for both ESRAP and the NSP. More specific details on the evolution of the ESRAP under CEPA are highlighted in the CMP Science Committee meeting report ( HC, 2021b ).

No formal criteria for achieving regulatory acceptance of NAMs for existing substances have been published in Canada. However, experience and efficiencies have been gained through the strategic development and implementation of streamlined risk-based approaches that support rapid and robust decision-making. To this end, a number of science approach documents (SciAD) have been published describing and demonstrating the implementation of NAMs to evaluate the potential for environmental or human health risk from industrial substances ( HC, 2022c ). SciADs are published under section 68 of CEPA and do not include regulatory conclusions; however, the approach and results described within a SciAD may form the basis for a risk assessment conclusion when used in conjunction with any other relevant and available information. Furthermore, the implementation of NAMs as described in SciADs can also be used to support the identification of priorities for data gathering, data generation, further scoping, and risk assessment ( HC, 2022c ).

In advancing the vision for progressive chemicals management programs, which includes reduced use of animals and integration of NAMs, it is recognized that there is an ongoing need to develop flexible, adaptive, and innovative approaches. Accordingly, the ESRAP continues to expand the use of computational and in vitro models as well as evidence integration strategies to identify and address emerging priority substances. Key to successful implementation moving forward are the productive partnerships with the international regulatory and research communities to continue to build confidence and harmonization for the use of alternative test methods and strategies in chemical risk assessment ( Krewski et al., 2020 ; Bhuller et al., 2021 ).

Data generated using NAMs may be accepted to fulfil any of the NSNR’s test data requirements for an NSN when, in the opinion of the NSP, such data are determined to provide a scientifically valid measure of the endpoint under investigation that is deemed sufficient for the purposes of the risk assessment. The NSP will assess whether the method has been satisfactorily validated in terms of scientific rigor, reproducibility, and predictability. Guidance is provided to notifiers who wish to submit information using NAMs during Pre-Notification Consultation meetings with NSP staff, or notifiers can consult Sections 5.4 and 8.4 of the respective Guidance Document ( HC, 2021c ). Alternative methods that may be accepted by the NSP to meet NSNR requirements include any internationally recognized and accepted test methods (e.g., in vitro skin irritation, gene mutation, and chromosomal aberration). Data such as (Q)SAR, read-across (greater than 80% structural similarity), and WoE may be accepted on a case-by-case basis.
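The greater-than-80% structural-similarity benchmark for read-across can be illustrated with a similarity calculation. The NSP guidance does not prescribe a particular metric, so the sketch below uses the Tanimoto (Jaccard) coefficient over structural fragment fingerprints, a common choice in cheminformatics; the fragment labels and substances are hypothetical, purely for illustration.

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto (Jaccard) coefficient between two fingerprint sets."""
    if not fp_a and not fp_b:
        return 0.0
    shared = len(fp_a & fp_b)
    return shared / (len(fp_a) + len(fp_b) - shared)

# Hypothetical fragment fingerprints for a notified substance and a candidate analogue.
notified = {"C-OH", "C=O", "aromatic_ring", "C-Cl", "ether"}
analogue = {"C-OH", "C=O", "aromatic_ring", "C-Cl", "ether", "amine"}

similarity = tanimoto(notified, analogue)  # 5 shared / 6 in union ~= 0.83
passes_screen = similarity > 0.8           # illustrative check against the >80% benchmark
```

In practice, dedicated cheminformatics toolkits compute such fingerprints from chemical structures, and a similarity score is only the starting point for a read-across justification, not its substance.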

3.2.3.1 Examples of NAM applications

As noted above, beyond the use of in silico models and read-across, examples of NAM applications for existing substances have been published as SciADs outlining NAM-based methods for prioritization and assessment ( HC, 2022c ). Specifically, the SciAD “Threshold of Toxicological Concern (TTC)-based Approach for Certain Substances” has been applied to evaluate a subset of existing substances on the DSL that were identified as priorities for assessment under subsection 73(1) of CEPA and/or considered priorities based on human health concerns ( HC, 2016c ). More recently, the SciAD “Bioactivity exposure ratio: Application in priority setting and risk assessment approach” was developed outlining a quantitative risk-based approach to identify substances of greater potential concern or substances of low concern for human health ( HC, 2021f ). This proposed approach for NAM application builds on a broad retrospective analysis under the APCRA ( Paul Friedman et al., 2020 ) and considers high-throughput in vitro bioactivity together with high-throughput toxicokinetic modelling to derive an in vitro -based point of departure. As technologies continue to advance and additional sources of data from NAMs emerge, these may also be considered in the ongoing expansion of the approach to support the derivation of molecular-based PODs as part of a tiered testing scheme. Further work is underway to build approaches for the interpretation of transcriptomics data and to enhance the use of QSAR and machine learning to enrich evidence integration and WoE evaluation using IATA frameworks across toxicological endpoints of regulatory relevance.
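The core arithmetic of the bioactivity exposure ratio can be sketched as follows: an in vitro bioactivity POD is converted to an administered equivalent dose (AED) via reverse toxicokinetics, then divided by an exposure estimate. This is a simplified illustration of the general logic described in Paul Friedman et al. (2020); the numerical inputs are hypothetical, and real applications use dedicated high-throughput toxicokinetic models (e.g., the httk package) rather than a single steady-state scalar.

```python
def administered_equivalent_dose(pod_uM: float, css_uM_per_mg_kg_day: float) -> float:
    """Convert an in vitro bioactivity POD (uM) to an external dose (mg/kg-bw/day)
    by dividing by the predicted steady-state plasma concentration per unit dose."""
    return pod_uM / css_uM_per_mg_kg_day

def bioactivity_exposure_ratio(aed_mg_kg_day: float, exposure_mg_kg_day: float) -> float:
    """BER: how far the bioactivity-based dose sits above the exposure estimate."""
    return aed_mg_kg_day / exposure_mg_kg_day

# Hypothetical inputs: lower-bound in vitro POD of 1.2 uM; predicted steady-state
# concentration of 0.8 uM per 1 mg/kg-bw/day; upper-bound exposure of 1e-4 mg/kg-bw/day.
aed = administered_equivalent_dose(1.2, 0.8)  # 1.5 mg/kg-bw/day
ber = bioactivity_exposure_ratio(aed, 1e-4)   # a large BER suggests lower priority
```

Under this scheme, a large BER indicates a wide margin between bioactivity and exposure (a candidate for lower priority), while a small BER flags a substance for closer evaluation.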

New substances are inherently data-poor substances and, as a result, the NSP typically accepts a variety of alternative approaches and NAM data to meet data requirements under the NSNR. QSAR data and read-across data using analogues have historically been used to meet data requirements under the NSNR, particularly for physico-chemical data requirements or in combination with other data to provide a WoE for toxicity data. More recently, newly validated in vitro methods for skin irritation and skin sensitization ( OECD, 2021a ) have been accepted to meet data requirements under the NSNR. The NSP participates in active research programs to develop NAMs for complex endpoints, such as genotoxicity and systemic toxicity. Although not a regulatory requirement, in vitro eye irritation tests are also frequently received by the NSP.

3.3 European Union

In 2006, EU chemicals policy underwent a significant set of updates and revisions with the introduction of Regulation (EC) No 1907/2006 concerning the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) ( EC, 2006 ). REACH entered into force on 1 June 2007 and introduced a single system for the regulation of chemicals, transferring the burden of proof concerning the risk assessment of substances from public authorities to companies. The purpose of REACH, according to Article 1(1), is to “ensure a high level of protection of human health and the environment, including the promotion of alternative methods for assessment of hazards of substances, as well as the free circulation of substances on the internal market while enhancing competitiveness and innovation” ( EC, 2006 ). The Regulation established ECHA to manage and implement the technical, scientific, and administrative aspects of REACH. Enforcement of REACH is each EU Member State’s responsibility and, therefore, ECHA has no direct enforcement responsibilities ( ECHA, n.d.a ). In addition to REACH, Regulation (EC) 1272/2008 on classification, labelling and packaging of substances and mixtures (the CLP regulation) ( EC, 2008b ) was introduced to align the EU chemical hazard labeling system with the United Nations Economic Commission for Europe (UNECE)’s Globally Harmonised System of Classification and Labelling of Chemicals (GHS). Both REACH and the CLP regulation were undergoing extensive revisions at the time of submission of this manuscript.

3.3.1 General requirements

REACH applies to all chemical substances; however, certain substances that are regulated by other legislation ( e.g. , biocides, PPPs, or medical drugs) may be (partially) exempted from specific requirements ( ECHA, n.d.f ). Substances used in cosmetic products remain a contentious issue: they are subject to an animal testing ban under the EU regulation on cosmetic products ( EC, 2009b ), yet ECHA continues to request new in vivo testing under certain circumstances, such as for risk assessment for worker exposure ( ECHA, 2014 ). The interplay between the two regulations is under review by the European Court of Justice (‘Symrise v ECHA’ (2021), T-655/20, ECLI:EU:T:2021:98 and ‘Symrise v ECHA’ (2021), T-656/20, ECLI:EU:T:2021:99).

Whilst REACH is not a pre-marketing approval process in the strictest sense of the definition, it works on the principle of no data, no market, with responsibility placed on registrants to manage the risks from chemicals and to provide safety information on the substances. Thus, companies bear the burden of proof to identify and manage the risks linked to the substances they manufacture or import and place on the market in the EU. They must demonstrate how the substance can be safely used and must communicate the risk management measures to the users. Companies must register with ECHA the chemical substances they manufacture or import into the EU at more than one tonne per year. The registration requirement under REACH “applies to substances on their own, in mixtures, or, in certain cases, in articles” ( ECHA, n.d.c) . Registration is governed by the “one substance, one registration” principle, where manufacturers and importers of the same substance must submit their registration jointly. Companies must collect information on the properties and uses of their substances and must assess both the hazards and potential risks presented by these substances. The companies compile all of this information in a registration dossier and submit it to ECHA. The standard information requirements for the registration dossier depend on the tonnage band of the chemical substance ( ECHA, n.d.b ). The information required is specified in Annexes VI to X of REACH and includes physico-chemical data, toxicology information, and ecotoxicological information.

ECHA receives and evaluates individual registrations for their compliance ( ECHA, n.d.f ). EU Member States evaluate certain substances to clarify initial concerns for human health or for the environment. ECHA’s scientific committees assess whether any identified risks from a hazardous substance are manageable, or whether that substance must be banned. Before imposing a ban, authorities can also decide to restrict the use of a substance or make it subject to a prior authorization.

The CLP regulation requires that relevant information on the characteristics of a substance, classification of toxicity endpoints, and pertinent labelling of a substance or substances in mixtures be notified to ECHA when placed on the EU market ( EC, 2008b ). In this way, the toxicity classification and labeling of the substance are harmonized both for chemical hazard assessment and consumer risk. In cases where there are significant divergences of scientific opinion, further review of scientific data can proceed ( EC, 2008b ). New testing is normally not requested for CLP purposes alone unless all other means of generating information have been exhausted and data of adequate reliability and quality are not available ( ECHA, n.d.e ).

The discussion that follows is focused on the EU’s efforts under REACH to reduce testing on vertebrate animals to assess human health effects. This concept lies at the very foundation of REACH, which states in the second sentence of the Preamble that it should “promote the development of alternative methods for the assessment of hazards of substances” ( EC, 2006 ).

3.3.2 Regulatory flexibility

According to Article 13(1) of REACH, “for human toxicity, information shall be generated whenever possible by means other than vertebrate animal tests, through the use of alternative methods, for example, in vitro methods or qualitative or quantitative structure-activity relationship models or from information from structurally related substances (grouping or read-across)” ( EC, 2006 ). Further, according to Article 13(2), the European Commission may propose amendments to the REACH Annexes and the Commission Regulation, which lists approved test methods ( EC, 2008a ), to “replace, reduce or refine animal testing.” Under Title III of REACH, on Data Sharing and Avoidance of Unnecessary Testing, Article 25(1) requires that testing on vertebrate animals must be undertaken only as a last resort; however, the interpretation of Articles 13 and 25 of REACH is often a matter of dispute in European Court of Justice (‘Federal Republic of Germany v Esso Raffinage’ (2021), C-471/18 P, ECLI:EU:C:2021:48), ECHA Board of Appeal ( e.g. , cases A-005-2011 and A-001-2014), and European Ombudsman cases (cases 1568/2012/(FOR)AN, 1606/2013/AN and 1130/2016/JAS).

In addition, to reduce animal testing and duplication of tests, study results from tests involving vertebrate animals should be shared between registrants ( EC, 2006 ). Furthermore, where a substance has been registered within the last 12 years, a potential new registrant must, according to Article 27, request from the previous registrant all information relating to vertebrate animal testing that is required for registration of the substance. Before the deadline to register all existing chemicals by 31 May 2018, companies (i.e., manufacturers, importers, or data owners) registering the same substance were legally required to form substance information exchange fora (SIEFs) to help exchange data and avoid duplication of testing for existing chemicals ( EC, 2006 ).

REACH standard information requirements for registration dossiers contain upfront testing requirements on vertebrate animals with some flexibility to allow the use of NAMs. Registrants are encouraged to collect all relevant available information on the substance, including any existing data (human, animal, or NAMs), (Q)SAR predictions, information generated with analogue chemicals (read-across), and in chemico and in vitro tests. In addition, REACH foresees that generating information required in Annexes VII-X may sometimes not be necessary or possible. In such cases, the standard information for the endpoint may be waived. Criteria for waiving are outlined in Column 2 of Annexes VII-X, while criteria for adapting standard information requirements are described in Annex XI of REACH ( ECHA, 2016a ). In addition to the use of OECD test guidelines, data from in vitro methods that meet internationally agreed pre-validation criteria as defined in OECD GD 34 are considered suitable for use under REACH when the results from these tests indicate a certain dangerous property. However, negative results obtained with pre-validated methods have to be confirmed with the relevant in vivo tests specified in the Annexes. Whether the aforementioned current revision of the REACH and CLP regulations will bring about opportunities to include more NAMs in the assessment of industrial chemicals or lead to an increase in animal testing remains to be seen.

3.3.3 Implementation of NAMs

The REACH annexes were amended in 2016 and 2017 to require companies to use NAMs for certain endpoints under certain conditions. Following these amendments, the use of non-animal tests has tripled for skin corrosion/irritation, quadrupled for serious eye damage/eye irritation, and increased more than 20-fold for skin sensitization ( ECHA, 2020 ).

REACH requires that robust study summaries be published on the ECHA website. This helps registrants identify additional data for their registrations and facilitates the identification of similar or identical substances ( ECHA, 2020 ). ECHA’s public chemical database may also be used to conduct retrospective data analyses and other research efforts when the level of detail needed is present in such reports ( Luechtefeld et al., 2016a ; Luechtefeld et al., 2016b ; Luechtefeld et al., 2016c ; Luechtefeld et al., 2018 ; Knight et al., 2021 ).

ECHA engages in OECD expert groups and reviews test guidelines for both animal and non-animal methods. For example, ECHA contributed to the in vitro OECD test guidelines for skin and eye irritation in 2016 and skin sensitization in 2017. In addition, ECHA was involved in the finalization of the OECD “Defined Approaches on Skin Sensitisation Test Guideline” ( OECD, 2021a ). In October 2021, ECHA published advice on how REACH registrants can use the defined approaches guideline, and this was the first official guidance outlining how to use in silico tools, such as the QSAR Toolbox, to assess skin sensitization ( ECHA, 2021 ). Furthermore, ECHA also engages in large-scale European research projects (e.g., EU-ToxRisk), where it reviews mock dossiers based on NAMs that have been developed in these projects.

Before registrants conduct higher-tier tests for assessing the safety of chemicals they import or manufacture, Article 40 of REACH requires that they submit details on their testing plans to ECHA ( ECHA, n.d.d ). In that submission, companies must detail how they considered NAMs before proposing an animal test. ECHA must agree on these proposals before a company can conduct a new animal test under Annex IX or X. ECHA may reject, accept, or modify the proposed test. As required by REACH, all testing proposals involving testing on vertebrate animals are published on ECHA’s website to allow citizens and organizations the opportunity to provide information and studies about the substance in question (ECHA, n.d.d). ECHA will inform the company that submitted the testing proposal of the Member State Committee’s decision and is required to take into account all studies and scientifically valid information submitted as part of the third-party consultation when making its decision.

3.3.3.1 Examples of NAM application

The most commonly used NAM under REACH is the read-across approach, where relevant information from analogous substances is used to predict the properties of target substances ( ECHA, 2020 ). Before read-across is accepted by ECHA, it must be justified by the registrant; therefore, to facilitate its use, ECHA developed a read-across assessment framework ( ECHA, 2017 ). Additionally, ECHA holds expert meetings with stakeholders, including industry representatives and NGOs, to enhance and combine knowledge and to avoid overlap and duplication. Thus, ECHA encourages companies to avoid duplicate animal tests and to share any data they have on their substance if requested by a registrant of an analogous substance. For example, based on in vitro ToxTracker assay results and read-across data from the analogue substance aminoethylpiperazine, ECHA has not requested in vivo genotoxicity data for N,N,4-trimethylpiperazine-1-ethylamine, which was registered by two companies in a joint submission ( ECHA, 2019 ).

4 Pesticides and plant protection products

4.1 United States

The Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA; 7 USC §136) requires all pesticides sold or distributed in the US to be registered with the EPA, unless otherwise exempted. EPA then has authority under the Federal Food, Drug, and Cosmetic Act (FD&C Act; 21 USC §301 et seq. ) to set the maximum amount of pesticide residues permitted to remain in/on food commodities or animal feed, which are referred to as tolerances. In 1996, both of these statutes were amended by the Food Quality Protection Act (FQPA), which placed new requirements on EPA, including making safety findings (i.e., “a reasonable certainty of no harm”) when setting tolerances (Pub.L. 104-170, 1996).

OPP, within EPA’s OCSPP, has the delegated authority for administering the above laws and is responsible for pesticide evaluation and registration. This includes registration of new pesticide active ingredients and products, as well as new uses for currently registered pesticides. Additionally, OPP reviews each registered pesticide at least every 15 years as part of the Registration Review process to determine whether it continues to meet registration standards. A pesticide product may not be registered unless the EPA determines that the pesticide product will not cause unreasonable adverse effects on the environment (as defined by 7 USC §136(bb)).

4.1.1 General requirements

Data requirements for pesticide registration are dependent on the type of pesticide (i.e., conventional, biopesticide, or antimicrobial) and use pattern (e.g., food versus non-food, or anticipated routes of exposure) and are laid out in 40 CFR Part 158. Unlike TSCA, FIFRA and its implementing regulations require substantial upfront testing to register a pesticide in the US, such as product chemistry data to assess labeling, product performance data to support claims of efficacy, studies to evaluate potential hazards to humans, studies to evaluate potential hazards to non-target organisms, environmental fate data, and residue chemistry and exposure studies to determine the nature and magnitude of residues. The data are used to conduct comprehensive risk assessments to determine whether a pesticide meets the standard for registration.

4.1.2 Regulatory flexibility

US regulations give EPA substantial discretion to make registration decisions based on data that the Agency deems most relevant and important for each action. As stated in the CFR, under Section 158.30, the studies and data required may be modified on an individual basis to fully characterize the use and properties of specific pesticide products under review. Also, the data requirements may not always be considered appropriate. For instance, the properties of a chemical or an atypical use pattern could make it impossible to generate the required data or the data may not be considered useful for the evaluation. As a result, Section 158.45 permits OPP to waive data requirements as long as there are sufficient data to make the determinations required by the applicable statutory standards.

To assist staff in focusing on the most relevant information and data for assessment of individual pesticides, OPP published “Guiding Principles for Data Requirements” ( EPA, 2013a ). The document describes how to use existing information about a pesticide to identify critical data needs for the risk assessment, while avoiding generation of data that will not materially influence a pesticide’s risk profile and ensuring there is sufficient information to support scientifically sound decisions. When data from animal testing will not contribute to decision making, OPP has developed processes to waive guideline studies and/or apply existing toxicological data for similar substances (i.e., bridging). Detailed guidance on the scientific information needed to support a waiver or bridging justification has been developed by OPP for acute ( EPA, 2012 ; EPA, 2016a ; EPA, 2020c ) and repeat dose ( EPA, 2013b ) mammalian studies.

Interdivisional expert committees within OPP are tasked with considering waiver requests on a case-by-case basis. The Hazard and Science Policy Council (HASPOC) is tasked with evaluating requests to waive most guideline mammalian toxicity studies, except acute systemic lethality and irritation/sensitization studies (which are referred to as the acute six-pack). HASPOC is composed of toxicologists and exposure scientists from divisions across OPP focused on conducting human health risk assessments, and it utilizes a WoE approach described in its guidance on “Part 158 Toxicology Data Requirements: Guidance for Neurotoxicity Battery, Subchronic Inhalation, Subchronic Dermal and Immunotoxicity Studies” ( EPA, 2013b ). This includes consideration of multiple lines of evidence, such as physico-chemical properties, information on exposure and use pattern, toxicological profiles, pesticidal and mammalian mode of action information, and risk assessment implications. Although this guidance was developed to address particular toxicity studies, the same general WoE approach is applied by HASPOC when considering the need for other toxicity studies for pesticide regulatory purposes. Between 2012 and 2018, the studies most commonly requested to be waived were acute and subchronic neurotoxicity, subchronic inhalation, and immunotoxicity studies ( Craig et al., 2019 ). For the acute six-pack studies, the Chemistry and Acute Toxicology Science Advisory Council (CATSAC) was formed to consider bridging proposals and/or waivers using the aforementioned waiving and bridging guidance documents. For example, following a retrospective analysis, the agency released guidance for waiving acute dermal toxicity tests ( US EPA, OCSPP, and OPP, 2016 ). The progress of HASPOC and CATSAC is continuously tracked and reported on an annual basis ( Craig et al., 2019 ; EPA, 2020a , 2021b ).

Beyond waiving studies that do not contribute to regulatory decision making, OPP has the ability to use relevant NAMs to replace, reduce, and refine animal studies. The CFR provides OPP with considerable flexibility under Section 158.75 to request additional data beyond the Part 158 data requirements that may be important to the risk management decision. NAMs can be considered and accepted for these additional data, when appropriate.

4.1.3 Implementation of NAMs

Several documents describe OPP’s strategies to reduce reliance on animal testing and incorporate relevant NAMs. For example, in addition to overarching EPA strategic plans (see Section 2.1.2.1.), OPP consulted the FIFRA Scientific Advisory Panel (SAP) on strategies and initial efforts to incorporate molecular science and emerging in silico and in vitro technologies into an enhanced IATA ( EPA, 2011 ). The long-term goal identified for this consultation was a transition from a paradigm that requires extensive in vivo testing to a hypothesis-driven paradigm where NAMs play a larger role.

Unlike TSCA, which requires OPPT to maintain a (non-exhaustive) list of accepted NAMs, FIFRA imposes no similar statutory requirement on OPP. However, OPP does maintain a website with strategies for reducing and replacing animal testing based on studies and approaches that are scientifically sound and supportable ( EPA, 2022a ). For many of these strategies, OPP has worked closely with other EPA offices, including OPPT and ORD, to develop and implement plans and tools that advance NAMs. Additionally, OPP works with a wide range of external organizations and stakeholders, including other US federal agencies, international regulatory agencies, animal protection groups, and pesticide registrants.

These collaborations have resulted in several agency documents for specific NAM applications. As mentioned in previous sections, there have been national and international efforts to develop defined approaches for skin sensitization in which OPP participated, along with OPPT, PMRA, ECHA, and other stakeholders. In 2018, a draft policy document was published jointly by OPP and OPPT on the use of alternative approaches ( in silico , in chemico, and in vitro ) that can be used to evaluate skin sensitization in lieu of animal testing; these approaches were accepted as outlined in the draft policy upon its release ( EPA, 2018b ). As international work develops through the OECD, this policy will be updated to accept additional defined approaches as appropriate. OPP also has a policy on the “Use of an Alternate Testing Framework for Classification of Eye Irritation Potential of EPA Pesticide Products,” which focuses on the testing of antimicrobial cleaning products but can be applied to conventional pesticides on a case-by-case basis ( EPA, 2015 ).

Collaborative efforts have also resulted in numerous publications in scientific journals that allow for communication of scientific advancements and analyses, while building confidence in NAM approaches that can support regulatory decisions. For example, analyses have been published demonstrating that many of the in vitro or ex vivo methods available for eye irritation are equivalent or scientifically superior to the rabbit in vivo test ( Clippinger et al., 2021 ). Additionally, OPP established a pilot program to evaluate a mathematical tool (the GHS Mixtures Equation) as an alternative to animal oral and inhalation toxicity studies for pesticide formulations. After closing the submission period in 2019, OPP worked with NICEATM to conduct retrospective analyses, which demonstrated the utility of the GHS Mixtures Equation to predict oral toxicity, particularly for formulations with lower toxicity ( Hamm et al., 2021 ). Furthermore, OPP participated in a project to rethink chronic toxicity and carcinogenicity assessment for agrochemicals (called “ReCAAP”). The workgroup, consisting of scientists from government, academia, non-governmental organizations, and industry stakeholders, aimed to develop a reporting framework to support a WoE safety assessment without conducting long-term rodent bioassays. In 2020, an EPA Science Advisory Board meeting was held to discuss reducing the use of animals for chronic and carcinogenicity testing, which included comment on the ReCAAP project ( EPA, 2020f ), and feedback from the consultation was incorporated into a published framework ( Hilton et al., 2022 ).
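The GHS Mixtures Equation referenced above is an additivity formula: 100/ATE_mix = Σ(C_i/ATE_i), where C_i is the concentration (%) of ingredient i and ATE_i is its acute toxicity estimate. The sketch below shows the arithmetic for a hypothetical formulation; it is a simplified illustration that omits the GHS provisions for ingredients with unknown acute toxicity.

```python
def ate_mixture(components: list[tuple[float, float]]) -> float:
    """GHS additivity formula: 100 / ATE_mix = sum(C_i / ATE_i), where C_i is the
    concentration (%) of ingredient i and ATE_i its acute toxicity estimate
    (mg/kg for the oral route). Ingredients without acute toxicity (e.g., water)
    contribute no term; handling of unknown-ATE ingredients is omitted here."""
    return 100.0 / sum(conc / ate for conc, ate in components)

# Hypothetical formulation: 10% active ingredient (ATE 300 mg/kg),
# 5% surfactant (ATE 1500 mg/kg), remainder water.
ate = ate_mixture([(10.0, 300.0), (5.0, 1500.0)])  # ~2727 mg/kg
```

Because less toxic ingredients contribute small terms to the sum, the estimated mixture ATE is dominated by the most toxic components, which is consistent with the retrospective finding that the equation performs particularly well for lower-toxicity formulations.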

4.1.3.1 Examples of NAM application

OPP has recently used NAMs to derive points of departure for human health risk assessment. For isothiazolinones, which are material preservatives that are known dermal sensitizers, NAMs were utilized to support a quantitative assessment ( EPA, 2020g ). In chemico and in vitro assays were performed on each chemical to derive concentrations that can cause induction of skin sensitization, and these were used as the basis of the quantitative dermal sensitization evaluation. The NAM approaches used in the assessment have been shown to be more reliable, human-relevant, and mechanistically driven, and to better predict human sensitizing potency than the reference test method, the mouse local lymph node assay ( EPA, 2020d ).

In addition, as part of a registration review, a NAM approach was used to evaluate inhalation exposures for the fungicide chlorothalonil, which is a respiratory contact irritant ( EPA, 2021c ). The approach utilizes an in vitro assay to derive an inhalation point of departure, in conjunction with in silico dosimetry modeling, to calculate human equivalent concentrations for risk assessment ( Corley et al., 2021 ; McGee Hargrove et al., 2021 ). The approach, which was reviewed and supported by a FIFRA Scientific Advisory Panel ( EPA, 2018a ), provided an opportunity to overcome challenges associated with testing respiratory irritants, while also incorporating human-relevant information.
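In broad strokes, dosimetry bridging of this kind relates an in vitro point of departure (expressed as a tissue dose) to an inhaled air concentration via the model-predicted tissue dose per unit of exposure. The sketch below shows only that arithmetic; the values and function name are hypothetical and are not taken from the cited chlorothalonil studies, which rely on detailed computational fluid dynamics modeling.

```python
def human_equivalent_concentration(in_vitro_pod, deposition_per_ppm):
    """Illustrative only: derive a human equivalent concentration (HEC, ppm)
    by dividing an in vitro point of departure (tissue dose, ug/cm^2) by the
    dosimetry-model-predicted tissue dose per unit of inhaled air
    concentration (ug/cm^2 per ppm)."""
    if deposition_per_ppm <= 0:
        raise ValueError("deposition rate must be positive")
    return in_vitro_pod / deposition_per_ppm

# Hypothetical numbers: in vitro POD of 0.5 ug/cm^2 and modeled airway
# deposition of 0.02 ug/cm^2 per ppm yield an HEC of 25 ppm.
hec = human_equivalent_concentration(0.5, 0.02)
```

The key design point is that the species-specific biology is carried by the dosimetry model, so the same in vitro tissue-dose POD can be translated into exposure concentrations for humans without an animal inhalation study.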

Further, OPP has been shifting its testing focus from developmental neurotoxicity (DNT) guideline studies to more targeted testing approaches. In addition to evaluating life stage sensitivity with studies based on commonly accepted modes of action, such as comparative cholinesterase assays and comparative thyroid assays, researchers from ORD have participated in an international effort over the past decade to develop a battery of NAMs for fit-for-purpose evaluation of DNT ( Fritsche et al., 2017 ; Bal-Price et al., 2018a ; Bal-Price et al., 2018b ; Sachana et al., 2019 ). As part of this effort, ORD researchers developed in vitro assays using microelectrode array network formation assay (MEA NFA) and high-content imaging (HCI) platforms to evaluate critical neurodevelopmental processes. Additional in vitro assays have been developed by researchers funded by EFSA and, together with the ORD assays, form the current DNT NAM battery. The FIFRA SAP supported the use of the data generated by the DNT NAM battery as part of a WoE for evaluating DNT potential and recognized the potential for the battery to continuously evolve as the science advances ( EPA, 2020i ). The OECD DNT expert group, which includes staff from OPP and ORD as well as representatives from other US agencies (e.g., NTP, FDA), is also considering several case studies on integrating the DNT battery into an IATA. Furthermore, data from the battery along with toxicokinetic assessment and available in vivo data were recently used in a WoE to support a DNT guideline study waiver ( Dobreniecki et al., 2022 ).

OPP also collaborated with NICEATM to complete retrospective analyses of dermal penetration triple pack studies ( Allen et al., 2021 ). A triple pack consists of an in vivo animal study and in vitro assays using human and animal skin, and is used to derive a dermal absorption factor (DAF) that converts oral doses to dermal-equivalent doses when assessing the potential risk associated with dermal exposures. The retrospective analyses demonstrated that, with limited exception, in vitro studies alone provide similar or more protective estimates of dermal absorption. The use of human skin for human health risk assessment has the added advantage of being directly relevant to the species of interest and avoiding the overestimation of dermal absorption seen with rat models. These analyses are being used by OPP to support its consideration of results from acceptable in vitro studies in its WoE evaluations to determine an appropriate DAF for human health risk assessment on a chemical-by-chemical basis.
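The role a DAF plays in risk calculations can be sketched as follows. The values and function names below are hypothetical illustrations of the arithmetic only; actual DAF selection follows the chemical-by-chemical WoE evaluation described above.

```python
def absorbed_dermal_dose(external_dose_mg_kg, daf_percent):
    """Apply a dermal absorption factor (DAF, %) to an external dermal
    exposure (mg/kg bw/day) to estimate the absorbed (systemic) dose."""
    return external_dose_mg_kg * daf_percent / 100.0

def dermal_margin_of_exposure(oral_pod_mg_kg, external_dose_mg_kg, daf_percent):
    """Compare an oral point of departure against the DAF-adjusted dermal
    dose to obtain a margin of exposure (MOE)."""
    return oral_pod_mg_kg / absorbed_dermal_dose(external_dose_mg_kg, daf_percent)

# Hypothetical values: oral POD of 10 mg/kg bw/day, external dermal
# exposure of 2 mg/kg bw/day, and a DAF of 25% give an MOE of 20.
moe = dermal_margin_of_exposure(10.0, 2.0, 25.0)
```

Because the MOE scales inversely with the DAF, an in vitro human-skin study that supports a lower (more realistic) DAF directly reduces the estimated absorbed dose without sacrificing protectiveness, which is the practical payoff of the retrospective analyses.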

In Canada, pest control products and the corresponding technical grade active ingredient are regulated under the Pest Control Products Act (PCPA; SC 2002, c.28). The PCPA and its associated Regulations govern the manufacture, possession, handling, storage, transport, importation, distribution, and use of pesticides in Canada. Pesticides, as defined in the PCPA, are designed to control, destroy, attract, or repel pests, or to mitigate or prevent pests’ injurious, noxious, or troublesome effects. Consequently, the very properties and characteristics that make pesticides effective for their intended purposes may also pose risks to people and the environment.

PMRA is the branch of Health Canada responsible for regulating pesticides under the authority of the PCPA. Created in 1995, PMRA consolidates the resources and responsibilities for pest management regulation in Canada. PMRA’s primary mandate is to prevent unacceptable risks to Canadians and the environment from the use of these products. Section 7 of the PCPA provides the authority for PMRA to apply modern, evidence-based scientific approaches to assess whether the health and environmental risks of pesticides proposed for registration (or amendment) are acceptable and whether the products have value. Section 16 of the PCPA provides the legislative oversight for PMRA to take the same approach when regularly and systematically reviewing whether pesticides already on the Canadian market continue to meet modern scientific standards. PMRA’s guidance document “A Framework for Risk Assessments and Risk Management of Pest Control Products” provides a well-defined and internationally recognized approach to risk assessment, management, and decision-making. This framework includes insights on how interested and affected parties are involved in the decision-making process. It also describes the components of the risk (health and environment) and value assessments. For example, the value assessment’s primary consideration is whether the product is efficacious. In addition, this assessment contributes to the establishment of the use conditions required to assess human health and environmental risks ( HC, 2021a ).

4.2.1 General requirements

In Canada, many pest control products are categorized as conventional chemicals, and include insecticides, fungicides, herbicides, antimicrobials, personal insect repellents, and certain companion animal products such as spot-on pesticides for flea and tick control. Non-conventional chemicals, such as biopesticides (e.g., microbial pest control agents) and essential oil-based personal insect repellents, are also regulated under the PCPA.

The scope of the information provided in this section is most applicable to the health component of the risk assessment for domestic registrations of conventional chemicals (the end-use product and active ingredient). The information provided hereafter excludes the value and environment components, along with products such as food items (e.g., table salt) that are of interest to organic growers in Canada. Biopesticides and non-conventionals are also outside the scope of this paper.

PMRA relies on a system that links the data requirements (data-code or DACO tables) to proposed use-sites, which are organized using three categories: Agriculture, Industry, and Society ( HC, 2006 ). Given that pest control products can be used on more than one use-site, these sites are further sub-categorized. For example, PMRA’s use-site category 14 is for “Terrestrial Food Crops” and includes crops grown outdoors as a source for human consumption ( HC, 2013b ). The system of linking DACOs with use-site categories is similar to what is used by the US EPA and internationally by the OECD ( HC, 2006 ). The PMRA DACO tables include required (R) and conditionally required (CR) data that are tailored for each use site and take into consideration potential routes, durations, and sources of exposure to humans and the environment. It is important to note that the CR data are only required under specified conditions. In addition, PMRA will consider a request to waive any data requirement, but such waiver requests must be supported by a scientific rationale demonstrating that the data are not required to ensure the protection of human health. In particular, PMRA published a guidance document for waiving or bridging of mammalian acute toxicity tests for pesticides in 2013 ( HC, 2013a ). This document served as the starting point for the development and subsequent release of the 2017 OECD technical document on the same subject ( OECD, 2017 ).

4.2.2 Regulatory flexibility

The specific data requirements for the registration of pest control products in Canada are not prescribed in legislation under the PCPA. PMRA, therefore, has greater flexibility in either adopting or adapting methods under the PCPA in comparison to other jurisdictions where these data requirements are established in law. Thus, while the PCPA provides the overarching components for the assessments (i.e., health, environment, and value), it also provides the flexibility to use policy instruments and guidance documents to detail the data requirements that satisfy these legislative components. This approach also provides the opportunity for PMRA to engage all stakeholders through webinars, meetings, and public consultations when developing or making major changes to these documents. This open and transparent approach is aligned with PMRA’s strategic plan ( HC, 2016d ), which includes incorporating modern science by building scientific, regulatory, and public confidence in these approaches through collaborative processes. The ability to rely on policy instruments and guidance documents does not preclude PMRA from making regulatory changes, when necessary; however, the experience thus far with NAMs supports the current approach of relying on multi-stakeholder collaborations that result in the development of guidance documents, science policy notes, and/or published articles in reputable scientific journals.

4.2.3 Implementation of NAMs

PMRA’s 2019-2020 annual report highlights the 25th anniversary of this branch of Health Canada while noting a major transformation initiative of the pesticide program ( HC, 2021e ). Building upon the strategic plan (see Section 2.2.2), the program renewal project considers the changing landscape and the need for PMRA to keep pace with this change. The 2019-2020 and 2020-2021 reports include a section on evaluating new technologies, which includes opportunities to reduce animal testing wherever possible. Specifically, the reports note the use of NAMs, including in vitro assays, predictive in silico models, mechanistic studies, and existing data, for the human health and environmental assessment of pesticides ( HC, 2021e ; HC, 2022b ).

Bhuller et al. (2021) provides the first Canadian regulatory perspective on the approach and process towards the implementation of NAMs in Canada for pesticides and industrial chemicals. It acknowledges foundational elements, such as the 2012 Council of Canadian Academies ( CCA, 2012 ) expert panel report, “Integrating Emerging Technologies into Chemical Assessment,” used to establish the overall vision. The process for identifying, exploring, and implementing NAMs emphasizes the importance of mobilizing teams and fostering a mindset that enables a regulatory pivot towards NAMs. In addition, the importance of engagement and multi-stakeholder collaboration is identified as a pillar for building regulatory, scientific, and public confidence in NAMs along with the broader acceptance of the alternative approaches.

PMRA collaborates with stakeholders on the development of NAMs and their potential implementation for regulatory purposes. For example, PMRA is collaborating with the interested community through several ongoing multi-stakeholder initiatives designed to explore NAMs at the national and international levels ( Bhuller et al., 2021 ). In addition, several academic-led initiatives, along with research and consulting firms, are developing models, including open-source models; examples include the University of Windsor’s Canadian Centre for Alternatives to Animal Methods (CCAAM) and CaCVAM. Within Health Canada, voluntary efforts amongst regulatory and research scientists have resulted in the publication of NAM-relevant documents, such as the current Health Canada practices for using toxicogenomics data in risk assessment ( HC, 2019 ).

4.2.3.1 Examples of NAM application

Multiple NAMs and alternatives to animal testing have been co-developed, adapted, or adopted by the PMRA. Examples include the OECD defined approach for skin sensitization ( OECD, 2021a ), use of a WoE framework for chronic toxicity and cancer assessment ( Hilton et al., 2022 ), and PMRA’s “Guidance for Waiving or Bridging of Mammalian Acute Toxicity Tests for Pesticides” ( HC, 2013a ). In addition, PMRA no longer routinely requires the acute dermal toxicity assay ( HC, 2017 ), the one-year dog toxicity test ( Linke et al., 2017 ; HC, 2021d ), or the in vivo dermal absorption study ( Allen et al., 2021 ) in alignment with the US EPA. PMRA will consider these and other NAMs in lieu of animal testing for specific pesticides by applying a WoE approach to ensure that the available information is sufficient and appropriate for hazard characterization and the assessment of potential human health risks.

Building upon the strategic plan and the importance of staying current with scientific advancements in an open and transparent manner, PMRA’s DACO guidance document for conventional pesticides includes a document history table that enables PMRA to demonstrate the “evergreen” nature of the DACOs while providing an overview of the changes and the corresponding rationales ( HC, 2021d ). For example, PMRA’s science-policy work, resulting in no longer routinely requiring the acute dermal toxicity study, is captured in this table with a reference to the science-policy document (SPN 2017-03) ( HC, 2017 ). The latter then provides details on public consultation processes and the robust retrospective analysis that was undertaken under the auspices of the RCC ( HC, 2017 ).

4.3 European Union

In the EU, the term “pesticides” includes (1) active ingredient and PPPs, which are intended for use on plants in agriculture and horticulture, and (2) biocides, which are used in non-botanical applications, such as rodenticides or termiticides. PPPs and their active ingredients are regulated under Regulation (EC) No 1107/2009 ( EC, 2009a ). Commission Regulation (EU) No 283/2013 lists the data requirements for active ingredients ( EU, 2013c ), and Commission Regulation (EU) No 284/2013 lists the data requirements for PPPs ( EU, 2013d ). Biocides, however, are regulated separately under Regulation (EU) No 528/2012 and are not discussed in this paper ( EU, 2012 ). In addition, the CLP regulation (see Section 3.3) applies to both PPPs and biocides.

The EU is a diverse group of countries with respect to food consumption, agricultural pests, weather, and level of development; thus, the risk assessment and management procedures were developed to account for the varied needs of different Member States. First, an evaluation of the active ingredient dossier is conducted by a Rapporteur Member State. Then, EFSA peer reviews the dossier evaluation. The peer-reviewed risk assessment of the active ingredient is considered by the European Commission, which makes a proposal on whether to authorize the active ingredient, followed by the EU Member States, who vote on final risk management decisions. Once an active ingredient is authorized, individual Member States consider applications for approval of PPPs containing that active ingredient and propose maximum levels of pesticide residues permitted to remain in/on food commodities or animal feed. Finally, the European Commission (often with input from EFSA) will decide whether to approve those maximum residue levels.

Regulation of biocidal active ingredients and products proceeds via a similar route; however, peer review of the Member State assessments of the active ingredients is conducted by ECHA rather than EFSA. In 2017, ECHA and EFSA signed a memorandum of understanding to enhance cooperation between the agencies, to facilitate coherence in scientific methods and opinions, and to share knowledge on matters of mutual interest. As a consequence, both agencies will evaluate the toxicological data package for a PPP.

4.3.1 General requirements

Similar to the US and Canada, the EU imposes a large number of up-front data requirements to register a plant protection active ingredient, including studies to assess potential hazards to humans and non-target organisms. The toxicology data requirements supporting an active ingredient or PPP are listed in Commission Regulation (EU) No 283/2013 and Commission Regulation (EU) No 284/2013, respectively, and can be fulfilled using OECD test guideline studies or other guidelines (such as US EPA guidelines) that address the toxicological endpoint of concern. A number of data requirements, such as in vivo neurotoxicity studies or two-year rodent cancer bioassays in a second species, are only required when triggered or with scientific justification.

4.3.2 Regulatory flexibility

Article 62(1) of Regulation (EC) No 1107/2009 requires that “testing on vertebrate animals for the purposes of this Regulation shall be undertaken only where no other methods are available.” Article 8(1)(d) and Article 33(3)(c) of the same Regulation require applicants to justify, for each study using vertebrate animals, the steps taken to avoid testing on animals or duplication of studies. Similarly, for biocides, Article 62(1) of Regulation (EU) No 528/2012 states that “[i]n order to avoid animal testing, testing on vertebrates for the purposes of this Regulation shall be undertaken only as a last resort.”

The Commission Regulations, which list the data requirements for plant protection active ingredients and products, and their respective Communications ( EU, 2013a ; EU, 2013b ) were published in 2013 and therefore only refer specifically to a limited number of NAMs (e.g., in vitro and ex vivo methods to assess skin irritation and eye irritation).

Although point 5.2 in the Annexes of Commission Regulations (EU) No 283/2013 and 284/2013 allows for the use of other NAMs, as they become available, to replace or reduce animal use, the outdated list of methods to fulfil data requirements in the Commission Communications may encourage animal use where NAMs should be used. For example, the methods listed to fulfil the requirements for skin sensitization do not include any of the available in chemico or in vitro methods and do not refer to the OECD Guideline on Defined Approaches to Skin Sensitization ( EU, 2013a ; EU, 2013b ; OECD, 2021a ). Therefore, the Commission Communications need to be updated urgently and regularly to avoid the use of animals.

As outlined above, the regulatory landscape of the EU is one of specific regional considerations and interpretation of legislation by individual Member States. For example, some Member State regulatory authorities responsible for PPPs, including those from the Czech Republic ( SZU, n.d. ), Sweden ( KEMI, 2021 ), and Slovenia ( Republika Slovenija, 2022 ), publicly align themselves with the legal requirement to justify the conduct of studies using vertebrate animals. Other Member State regulatory authorities, including those from the Netherlands ( Ctgb, n.d. ) and, pre-Brexit, the United Kingdom ( HSE, n.d. ), interpret the regulation more rigorously and state that applications or dossiers will not be considered if they are found to have breached Article 62 (testing on vertebrate animals as a last resort).

4.3.3 Implementation of NAMs

EFSA has been proactive in reducing animal testing and implementing reliable NAMs. For example, in 2009, EFSA published a scientific opinion covering the key data requirements for evaluation of pesticide toxicity that were amenable to NAMs ( EFSA, 2009 ). In 2012, EFSA initiated a series of scientific conferences to create a regular opportunity to engage with partners and stakeholders. Following its latest conference in 2018 and the break-out session “Advancing risk assessment science—human health,” Lanzoni et al. emphasized that human health risk assessment based on animal testing is being challenged scientifically and ethically ( Lanzoni et al., 2019 ). They further mention the need for a paradigm shift in hazard and risk assessment and for more flexible regulations.

EFSA has developed a chemical hazards database, “OpenFoodTox 2.0,” and funded collaborative research to develop generic toxicokinetic and toxicodynamic human and animal models to predict the toxicity of chemicals ( Dorne et al., 2018 ; Benfenati et al., 2020 ). Further, in 2019, EFSA published its opinion on the use of in vitro comparative metabolism (IVCM) studies in pesticide risk assessment ( EFSA, 2019 ). Currently, the IVCM study is a data requirement for new and renewal data packages submitted in the EU. The study is intended to identify human metabolites not observed in OECD TG 417, the toxicokinetic study currently performed in rats. Most recently, EFSA published its 2027 Strategy, in which it states its goal to develop and integrate NAMs for regulatory risk assessment ( EFSA, 2021 ). To help achieve this, EFSA launched a contract to develop a roadmap for action on NAMs to reduce animal testing ( Escher et al., 2022 ). The roadmap aims to define EFSA’s priorities for the incorporation of NAMs and to inform a multi-annual strategy for increasing the use of NAMs in human health risk assessment with a goal of minimizing animal testing ( EFSA, 2021 ). In addition, EFSA is in the process of developing guidance on the use of read-across and has launched several projects to evaluate NAMs in the context of IATA frameworks.

4.3.3.1 Examples of NAM application

EFSA has funded the development of in vitro assays that, together with the assays from ORD, form the current DNT NAM testing battery (see Section 4.1.3.1). In partnership with the OECD, EFSA held a workshop in 2017 on integrated approaches for testing and assessment of DNT ( EFSA and OECD, 2017 ), commissioned an external scientific report on the data interpretation from in vitro DNT assays ( Crofton and Mundy, 2021 ), and recently held a European stakeholder’s workshop on NAMs for DNT ( EFSA, 2022 ). In 2021, the EFSA Panel on Plant Protection Products and their Residues (PPR) developed AOP-informed IATA case studies on DNT risk assessment ( EFSA PPR Panel et al., 2021 ). The development of a new OECD Guidance Document on DNT in vitro assays is being co-led by EFSA, the US, and Denmark ( OECD, 2021b ).

In 2017, EFSA updated its guidance on dermal absorption, initially published in 2012. The guidance presents elements of a tiered approach, including “in vitro studies with human skin (regarded to provide the best estimate)” ( EFSA et al., 2017 ), thereby reducing the use of animals while also increasing the relevance of the data for human risk assessment.

Furthermore, in silico modeling software, data mining, and read-across can be used for a variety of applications in support of pesticide registrations within the EU. Specifically, the OECD QSAR Toolbox, Derek Nexus, and OASIS TIMES are often used to evaluate the toxicological significance of metabolites and impurities, and in support of active ingredient conclusions, especially those related to genotoxicity ( Benigni et al., 2019 ).

5 Conclusion

Due to widespread interest in the use of testing approaches that are reliable and relevant to human biology, NAMs for hazard and risk assessment are being rapidly developed. It is important to understand the existing regulatory frameworks and their flexibility or limitations for the implementation of fit-for-purpose NAMs. This article provides an overview of the regulatory frameworks for the use of NAMs in the assessment of industrial chemicals and pesticides in the US, Canada, and EU. However, similar collaborative efforts and opportunities to use NAMs in regulatory submissions exist in other sectors and countries. In general, replacing animal use is an important goal for regulatory agencies and, as such, regulators continue to explore the potential of NAMs to efficiently provide more reliable and relevant information about whether and how a chemical may cause toxicity in humans. The regulations reviewed in this paper highlight the many existing opportunities for the use of NAMs, while also showing potential to introduce further flexibility in testing requirements to allow the maximum use of fit-for-purpose NAMs.

For example, it is important to provide continuing educational opportunities for regulators and stakeholders on the conditions under which application of a certain NAM is appropriate and on how data from that NAM are interpreted. Conferences and webinars, as mentioned in Section 2, are examples of such opportunities. There are also ongoing discussions on how to streamline and accelerate validation processes and gain scientific confidence in the use of robust NAMs, including an ongoing effort within ICCVAM to publish guidance on this topic. Updating these processes is foundational to timely uptake of fit-for-purpose, reliable, and relevant NAMs ( van der Zalm et al., 2022 ). Also key to the advancement of NAMs is the opportunity to discuss proposed NAM testing strategies with the agency. This allows for the wise use of resources and ensures that the data needs of the regulatory agencies are being addressed by the proposed approach. Regulatory agencies vary in their capacity and procedures for meeting with stakeholders to discuss proposed testing strategies, with some agencies (notably the EPA and HC’s NSP) strongly encouraging these meetings, resulting in examples of successful submissions. Additional measures to create incentives, such as expedited review, would further facilitate innovation and the use of more modern, reliable NAMs.

In addition, national and international communication and collaboration within and across sectors and geographies is of the utmost importance to minimize duplicative efforts and efficiently advance the best science. Ultimately, regulatory frameworks that allow for the timely uptake of scientifically sound toxicology testing approaches will facilitate the global acceptance of NAMs and allow the best protection of human health.

Author contributions

All authors contributed important intellectual content and helped in the conceptualization, writing and revisions of the article. All authors read and approved the final manuscript.

Acknowledgments

The authors would like to thank Dr. John Gordon from CPSC for providing text for the section on consumer products and reviewing the manuscript, and Drs. Cecilia Tan and Anna Lowit from EPA, Mike Rasenberg from ECHA, Dr. George Kass from EFSA, Dr. Alexandra Long and Joelle Pinsonnault Cooper from HC, and Dr. Gilly Stoddart, Emily McIvor, and Anna van der Zalm from PSCI for reviewing parts of the manuscript.

Conflict of interest

Author JH was employed by the company Corteva Agriscience. Authors CH and JM-H were employed by the company JT International SA. Authors EN and TS were employed by the law firm Bergeson & Campbell PC.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Author disclaimer

This report has been reviewed and cleared by the Office of Chemical Safety and Pollution Prevention of the US EPA.

The views expressed in this article are those of the authors and do not necessarily represent the views or policies of their respective employers. Mention of trade names or commercial products does not constitute endorsement or recommendation for use.

1 The distinction between an order and a rule is that the former may be issued without following the procedural requirements of notice and comment rulemaking under the Administrative Procedure Act (5 U.S.C. §§500 et seq .), whereas the latter must comply with these requirements.

Allen, D. G., Rooney, J., Kleinstreuer, N. C., Lowit, A. B., and Perron, M. (2021). Retrospective analysis of dermal absorption triple pack data. ALTEX 38, 463–476. doi:10.14573/altex.2101121

Bal-Price, A., Hogberg, H., Crofton, K. M., Daneshian, M., FitzGerald, R. E., Fritsche, E., et al. (2018a). Recommendation on test readiness criteria for new approach methods in toxicology: Exemplified for developmental neurotoxicity. ALTEX 35, 306–352. doi:10.14573/altex.1712081

Bal-Price, A., Pistollato, F., Sachana, M., Bopp, S. K., Munn, S., and Worth, A. (2018b). Strategies to improve the regulatory assessment of developmental neurotoxicity (DNT) using in vitro methods. Toxicol. Appl. Pharmacol. 354, 7–18. doi:10.1016/j.taap.2018.02.008

Benfenati, E., Carnesecchi, E., Roncaglioni, A., Baldin, R., Ceriani, L., Ciacci, A., et al. (2020). Maintenance, update and further development of EFSA’s chemical hazards: OpenFoodTox 2.0. EFSA Support. 17, 1–36. doi:10.2903/sp.efsa.2020.EN-1822

Benigni, R., Laura Battistelli, C., Bossa, C., Giuliani, A., Fioravanzo, E., Bassan, A., et al. (2019). Evaluation of the applicability of existing (Q)SAR models for predicting the genotoxicity of pesticides and similarity analysis related with genotoxicity of pesticides for facilitating of grouping and read across. EFSA Support. 16, 1–221. doi:10.2903/sp.efsa.2019.EN-1598

Bhuller, Y., Ramsingh, D., Beal, M., Kulkarni, S., Gagne, M., and Barton-Maclaren, T. S. (2021). Canadian regulatory perspective on next generation risk assessments for pest control products and industrial chemicals. Front. Toxicol. 3, 748406. doi:10.3389/FTOX.2021.748406

CCA (2012). Integrating emerging technologies into chemical safety assessment. Ottawa: Council of Canadian Academies. Available at: https://cca-reports.ca/reports/integrating-emerging-technologies-into-chemical-safety-assessment .

Clippinger, A. J., Raabe, H. A., Allen, D. G., Choksi, N. Y., van der Zalm, A. J., Kleinstreuer, N. C., et al. (2021). Human-relevant approaches to assess eye corrosion/irritation potential of agrochemical formulations. Cutan. Ocul. Toxicol. 40, 145–167. doi:10.1080/15569527.2021.1910291

Corley, R. A., Kuprat, A. P., Suffield, S. R., Kabilan, S., Hinderliter, P. M., Yugulis, K., et al. (2021). New approach methodology for assessing inhalation risks of a contact respiratory cytotoxicant: Computational fluid dynamics-based aerosol dosimetry modeling for cross-species and in vitro comparisons. Toxicol. Sci. 182, 243–259. doi:10.1093/toxsci/kfab062

CPSC (2022). Guidance for Industry and Test Method Developers: CPSC Staff Evaluation of Alternative Test Methods and Integrated Testing Approaches and Data Generated from Such Methods to Support FHSA Labeling Requirements. Rockville, MD: U.S. Consumer Product Safety Commission. Available at: https://downloads.regulations.gov/CPSC-2021-0006-0010/content.pdf .

Craig, E., Lowe, K., Akerman, G., Dawson, J., May, B., Reaves, E., et al. (2019). Reducing the need for animal testing while increasing efficiency in a pesticide regulatory setting: Lessons from the EPA office of pesticide programs’ hazard and science policy Council. Regul. Toxicol. Pharmacol. 108, 104481. doi:10.1016/j.yrtph.2019.104481

Crofton, K. M., and Mundy, W. R. (2021). External scientific report on the interpretation of data from the developmental neurotoxicity in vitro testing assays for use in integrated approaches for testing and assessment. EFSA Support. 18, 1–42. doi:10.2903/sp.efsa.2021.en-6924

Ctgb (n.d.). Request for information vertebrates testing. Available at: https://english.ctgb.nl/plant-protection/types-of-application/request-for-information-vertebrates-testing/characteristics .

Dobreniecki, S., Mendez, E., Lowit, A., Freudenrich, T. M., Wallace, K., Carpenter, A., et al. (2022). Integration of toxicodynamic and toxicokinetic new approach methods into a weight-of-evidence analysis for pesticide developmental neurotoxicity assessment: A case-study with dl- and L-glufosinate. Regul. Toxicol. Pharmacol. 131, 105167. doi:10.1016/j.yrtph.2022.105167

Dorne, J.-L. C. M., Amzal, B., Quignot, N., Wiecek, W., Grech, A., Brochot, C., et al. (2018). Reconnecting exposure, toxicokinetics and toxicity in food safety: OpenFoodTox and TKplate for human health, animal health and ecological risk assessment. Toxicol. Lett. 295, S29. doi:10.1016/j.toxlet.2018.06.1128

EC (2006). Regulation (EC) No 1907/2006 of the European Parliament and of the Council of 18 December 2006 concerning the registration, evaluation, authorisation and restriction of chemicals (REACH). OJ L 396/1.

EC (2008a). Council Regulation (EC) No 440/2008 of 30 May 2008 laying down test methods pursuant to Regulation (EC) No 1907/2006 of the European Parliament and of the Council on the registration, evaluation, authorisation and restriction of chemicals (REACH). OJ L 142/1.

EC (2008b). Regulation (EC) No 1272/2008 of the European Parliament and of the Council of 16 December 2008 on classification, labelling and packaging of substances and mixtures, amending and repealing Directives 67/548/EEC and 1999/45/EC, and amending Regulation (EC) No 1907/2006. OJ L 353/1.

EC (2009a). Regulation (EC) No 1107/2009 of the European Parliament and of the Council of 21 October 2009 concerning the placing of plant protection products on the market and repealing Council Directives 79/117/EEC and 91/414/EEC. OJ L 309/1.

EC (2009b). Regulation (EC) No 1223/2009 of the European Parliament and of the Council of 30 November 2009 on cosmetic products. OJ L 342/59.

EC (2020). Chemicals strategy for sustainability - towards a toxic-free environment. Available at: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=COM:2020:667 .

ECHA (2014). Clarity on interface between REACH and the cosmetics regulation. Available at: https://echa.europa.eu/nl/-/clarity-on-interface-between-reach-and-the-cosmetics-regulation .

ECHA (2016a). How to use alternatives to animal testing to fulfil the information requirements for REACH registration. Practical guide, 2nd ed. Helsinki: European Chemicals Agency. doi:10.2823/194297

ECHA (2016b). New approach methodologies in regulatory science. Proceedings of a scientific workshop. Helsinki: European Chemicals Agency. doi:10.2823/543644

ECHA (2017). Read-across assessment framework (RAAF).

ECHA (2019). Registration dossier - N,N,4-trimethylpiperazine-1-ethylamine. Available at: https://echa.europa.eu/nl/registration-dossier/-/registered-dossier/27533/7/7/1 .

ECHA (2020). The use of alternatives to testing on animals for REACH. Fourth report under Article 117(3) of the REACH Regulation. Helsinki: European Chemicals Agency. doi:10.2823/092305

ECHA (2021). Skin sensitisation. Helsinki: European Chemicals Agency. Available at: https://echa.europa.eu/documents/10162/1128894/oecd_test_guidelines_skin_sensitisation_en.pdf/40baa98d-fc4b-4bae-a26a-49f2b0d0cf63?t=1633687729588 .

ECHA (n.d.a). Enforcement. Available at: https://echa.europa.eu/regulations/enforcement .

ECHA (n.d.b). Information requirements. Available at: https://echa.europa.eu/regulations/reach/registration/information-requirements .

ECHA (n.d.c). Registration. Available at: https://echa.europa.eu/regulations/reach/registration .

ECHA (n.d.d). Testing proposals. Available at: https://echa.europa.eu/information-on-chemicals/testing-proposals .

ECHA (n.d.e). The role of testing in CLP. Available at: https://echa.europa.eu/testing-clp .

ECHA (n.d.f). Understanding REACH. Available at: https://echa.europa.eu/regulations/reach/understanding-reach .

EFSA, and OECD (2017). Workshop report on integrated approach for testing and assessment of developmental neurotoxicity. EFSA Support. 1191, 19. doi:10.2903/sp.efsa.2017.en-1191

EFSA, Buist, H., Craig, P., Dewhurst, I., Hougaard Bennekou, S., and Kneuer, C. (2017). Guidance on dermal absorption. EFSA J. 15, e04873. doi:10.2903/j.efsa.2017.4873

EFSA PPR Panel, Hernández-Jerez, A., Adriaanse, P., Aldrich, A., Berny, P., and Coja, T. (2021). Development of Integrated Approaches to Testing and Assessment (IATA) case studies on developmental neurotoxicity (DNT) risk assessment. EFSA J. 19. doi:10.2903/j.efsa.2021.6599

EFSA (2009). Existing approaches incorporating replacement, reduction and refinement of animal testing: Applicability in food and feed risk assessment. EFSA J. 7, 1–63. doi:10.2903/j.efsa.2009.1052

EFSA (2019). EFSA Workshop on in vitro comparative metabolism studies in regulatory pesticide risk assessment. EFSA Support. 16, 1–16. doi:10.2903/sp.efsa.2019.EN-1618

EFSA (2021). EFSA strategy 2027: Science, safe food, sustainability. Parma: Publications Office. doi:10.2805/886006

EFSA (2022). European stakeholders’ workshop on new approach methodologies (NAMs) for developmental neurotoxicity (DNT) and their use in the regulatory risk assessment of chemicals. Available at: https://www.efsa.europa.eu/en/events/european-stakeholders-workshop-new-approach-methodologies-nams-developmental-neurotoxicity .

EPA (1994). Estimating toxicity of industrial chemicals to aquatic organisms using structure activity relationships. 2nd ed. Washington, DC: U.S. Environmental Protection Agency. EPA-R93-001.

EPA (2009). The U.S. Environmental Protection Agency’s strategic plan for evaluating the toxicity of chemicals. Washington, DC: U.S. Environmental Protection Agency.

EPA (2011). Integrated approaches to testing and assessment strategy: Use of new computational and molecular tools. FIFRA Scientific Advisory Panel, Office of Pesticide Programs. Washington, DC: U.S. Environmental Protection Agency. EPA-HQ-OPP-2011-0284-0006. Available at: https://www.regulations.gov/document/EPA-HQ-OPP-2011-0284-0006

EPA (2012). Guidance for waiving or bridging of mammalian acute toxicity tests for pesticides and pesticide products (acute oral, acute dermal, acute inhalation, primary eye, primary dermal, and dermal sensitization). Office of Pesticide Programs. Washington, DC: U.S. Environmental Protection Agency. Available at: https://www.epa.gov/sites/default/files/documents/acute-data-waiver-guidance.pdf

EPA (2013a). Guiding principles for data requirements. Office of Pesticide Programs. Washington, DC: U.S. Environmental Protection Agency. Available at: https://www.epa.gov/sites/production/files/2016-01/documents/data-require-guide-principle.pdf

EPA (2013b). Part 158 toxicology data requirements: Guidance for neurotoxicity battery, subchronic inhalation, subchronic dermal and immunotoxicity studies. Office of Pesticide Programs. Washington, DC: U.S. Environmental Protection Agency. Available at: https://www.epa.gov/sites/default/files/2014-02/documents/part158-tox-data-requirement.pdf

EPA (2015). Use of an alternate testing framework for classification of eye irritation potential of EPA pesticide products. Office of Pesticide Programs. Washington, DC: U.S. Environmental Protection Agency. Available at: https://www.epa.gov/sites/production/files/2015-05/documents/eye_policy2015update.pdf

EPA (2016a). Guidance for waiving acute dermal toxicity tests for pesticide formulations & supporting retrospective analysis. Office of Pesticide Programs. Washington, DC: U.S. Environmental Protection Agency. Available at: https://www.epa.gov/sites/default/files/2016-11/documents/acute-dermal-toxicity-pesticide-formulations_0.pdf

EPA (2016b). Process for evaluating & implementing alternative approaches to traditional in vivo acute toxicity studies for FIFRA regulatory use. Available at: https://www.epa.gov/sites/default/files/2016-03/documents/final_alternative_test_method_guidance_2-4-16.pdf .

EPA (2018a). FIFRA scientific advisory panel; notice of public meeting: Evaluation of a proposed approach to refine inhalation risk assessment for point of contact toxicity. Available at: https://www.regulations.gov/docket/EPA-HQ-OPP-2018-0517 .

EPA (2018b). Interim science policy: Use of alternative approaches for skin sensitization as a replacement for laboratory animal testing. Office of Chemical Safety and Pollution Prevention. Washington, DC: U.S. Environmental Protection Agency. EPA-HQ-OPP-2016-0093-0090. Available at: https://www.epa.gov/pesticides/epa-releases-draft-policy-reduce-animal-testing-skin-sensitization (Accessed April 4, 2018).

EPA (2018c). List of alternative test methods and strategies (or new approach methodologies [NAMs]). Office of pollution prevention and toxics. Washington, DC: U.S. Environmental Protection Agency. Available at: https://www.epa.gov/sites/production/files/2018-06/documents/alternative_testing_nams_list_june22_2018.pdf .

EPA (2018d). Strategic plan to promote the development and implementation of alternative test methods within the TSCA program. U.S. Environmental protection agency. EPA-740-R1-8004. Available at: https://www.epa.gov/sites/default/files/2018-06/documents/epa_alt_strat_plan_6-20-18_clean_final.pdf .

EPA (2019a). First annual conference on the state of the science on development and use of new approach methods (NAMs) for chemical safety testing. Available at: https://www.epa.gov/chemical-research/first-annual-conference-state-science-development-and-use-new-approach-methods-0 .

EPA (2019b). List of alternative test methods and strategies (or new approach methodologies [NAMs]). First Update: December 5th, 2019. Office of Pollution Prevention and Toxics, U.S. Environmental Protection Agency. Available at: https://www.epa.gov/sites/production/files/2019-12/documents/alternative_testing_nams_list_first_update_final.pdf .

EPA (2019c). Significant new use rules on certain chemical substances. Final rule. Washington, DC: U.S. Environmental Protection Agency. 84 FR 13531 (April 5, 2019) (to be codified at 40 CFR 9 and 721).

EPA (2020a). Annual reports on PRIA implementation. Available at: https://www.epa.gov/pria-fees/annual-reports-pria-implementation .

EPA (2020b). EPA conference on the state of science on development and use of NAMs for chemical safety testing. Available at: https://www.epa.gov/chemical-research/epa-conference-state-science-development-and-use-nams-chemical-safety-testing#1 .

EPA (2020c). Guidance for waiving acute dermal toxicity tests for pesticide technical chemicals & supporting retrospective analysis. Office of Pesticide Programs. Washington, DC: U.S. Environmental Protection Agency. EPA 705-G-2020-3722. Available at: https://www.epa.gov/sites/default/files/2021-01/documents/guidance-for-waiving-acute-dermal-toxicity.pdf .

EPA (2020d). Hazard characterization of isothiazolinones in support of FIFRA registration review. Available at: https://www.regulations.gov/document/EPA-HQ-OPP-2013-0605-0051 .

EPA (2020e). New approach methodologies (NAMs) factsheet. Available at: https://www.epa.gov/sites/default/files/2020-07/documents/css_nams_factsheet_2020.pdf .

EPA (2020f). New approach methods and reducing the use of laboratory animals for chronic and carcinogenicity testing. Available at: https://yosemite.epa.gov/sab/sabproduct.nsf/LookupWebProjectsCurrentBOARD/2D3E04BC5A34DCDE8525856D00772AC1?OpenDocument .

EPA (2020g). Pesticide registration review; draft human health and ecological risk assessments for several isothiazolinones. Notice. Washington, DC: U.S. Environmental Protection Agency. 85 FR 28944 (May 14, 2020).

EPA (2020h). Revocation of significant new use rule for a certain chemical substance (P-16-581). Proposed rule. Washington, DC: U.S. Environmental Protection Agency. 85 FR 52274 (Aug. 25, 2020) (to be codified at 40 CFR 721).

EPA (2020i). The use of new approach methodologies (NAMs) to derive extrapolation factors and evaluate developmental neurotoxicity for human health risk assessment. Available at: https://www.regulations.gov/document/EPA-HQ-OPP-2020-0263-0033 .

EPA (2021a). Accelerating the pace of chemical risk assessment (APCRA). Available at: https://www.epa.gov/chemical-research/accelerating-pace-chemical-risk-assessment-apcra .

EPA (2021b). Adopting 21st-century science methodologies—metrics. Available at: https://www.epa.gov/pesticide-science-and-assessing-pesticide-risks/adopting-21st-century-science-methodologies-metrics .

EPA (2021c). Chlorothalonil: Revised human health draft risk assessment for registration review. Available at: https://www.regulations.gov/document/EPA-HQ-OPP-2011-0840-0080 .

EPA (2021d). List of alternative test methods and strategies (or new approach methodologies [NAMs]). Second Update: February 4th, 2021. Office of Pollution Prevention and Toxics, U.S. Environmental Protection Agency. Available at: https://www.epa.gov/sites/default/files/2021-02/documents/nams_list_second_update_2-4-21_final.pdf .

EPA (2021e). New approach methods work plan (v2). U.S. Environmental Protection Agency. EPA/600/X-21/209. Available at: https://www.epa.gov/system/files/documents/2021-11/nams-work-plan_11_15_21_508-tagged.pdf .

EPA (2021f). Order under section 4(a)(2) of the Toxic Substances Control Act - 1,1,2-trichloroethane. Available at: https://www.epa.gov/sites/default/files/2021-01/documents/tsca_section_4a2_order_for_112-trichloroethane_on_ecotoxicity_and_occupational_exposure.pdf .

EPA (2022a). Strategic vision for adopting new approach methodologies. Available at: https://www.epa.gov/pesticide-science-and-assessing-pesticide-risks/strategic-vision-adopting-new-approach .

EPA (2022b). TSCA new chemicals collaborative research effort 3-9-22 clean docket version. Available at: https://www.regulations.gov/document/EPA-HQ-OPPT-2022-0218-0004 .

EPA (2022c). TSCA section 4 test orders. Available at: https://www.epa.gov/assessing-and-managing-chemicals-under-tsca/tsca-section-4-test-orders .

Escher, S. E., Partosch, F., Konzok, S., Jennings, P., Luijten, M., and Kienhuis, A. (2022). Development of a roadmap for action on new approach methodologies in risk assessment. EFSA Support. 19. doi:10.2903/sp.efsa.2022.EN-7341

EU (2010). Directive 2010/63/EU of the European Parliament and of the Council of 22 September 2010 on the protection of animals used for scientific purposes. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32010L0063 .

EU (2012). Regulation (EU) No 528/2012 of the European Parliament and of the Council of 22 May 2012 concerning the making available on the market and use of biocidal products. OJ L 167/1.

EU (2013a). Commission Communication in the framework of the implementation of Commission Regulation (EU) No 283/2013 of 1 March 2013 setting out the data requirements for active substances, in accordance with Regulation (EC) No 1107/2009 of the European Parliament and of the Council. OJ C 95/1.

EU (2013b). Commission Communication in the framework of the implementation of Commission Regulation (EU) No 284/2013 of 1 March 2013 setting out the data requirements for plant protection products, in accordance with Regulation (EC) No 1107/2009 of the European Parliament and of the Council. OJ C 95/21.

EU (2013c). Commission Regulation (EU) No 283/2013 of 1 March 2013 setting out the data requirements for active substances, in accordance with Regulation (EC) No 1107/2009 of the European Parliament and of the Council concerning the placing of plant protection products on the market. OJ L 93/1.

EU (2013d). Commission Regulation (EU) No 284/2013 of 1 March 2013 setting out the data requirements for plant protection products, in accordance with Regulation (EC) No 1107/2009 of the European Parliament and of the Council concerning the placing of plant protection products on the market. OJ L 93/85.

EURL ECVAM (n.d.). TSAR - Tracking System for Alternative Methods towards regulatory acceptance. Available at: https://tsar.jrc.ec.europa.eu/ .

Fritsche, E., Crofton, K. M., Hernandez, A. F., Hougaard Bennekou, S., Leist, M., Bal-Price, A., et al. (2017). OECD/EFSA workshop on developmental neurotoxicity (DNT): The use of non-animal test methods for regulatory purposes. ALTEX 34, 311–315. doi:10.14573/altex.1701171

Hamm, J., Allen, D., Ceger, P., Flint, T., Lowit, A., O’Dell, L., et al. (2021). Performance of the GHS mixtures equation for predicting acute oral toxicity. Regul. Toxicol. Pharmacol. 125, 105007. doi:10.1016/j.yrtph.2021.105007

HC (2006). Use site category (DACO tables). Available at: https://www.canada.ca/en/health-canada/services/consumer-product-safety/pesticides-pest-management/registrants-applicants/product-application/use-site-category-daco-tables.html .

HC (2013a). Guidance for waiving or bridging of mammalian acute toxicity tests for pesticides. The Health Canada Pest Management Regulatory Agency. Available at: https://www.canada.ca/content/dam/hc-sc/migration/hc-sc/cps-spc/alt_formats/pdf/pubs/pest/pol-guide/toxicity-guide-toxicite/toxicity-guide-toxicite.eng.pdf .

HC (2013b). Use-site category (USC) definitions for conventional chemical pesticides. Available at: https://www.canada.ca/en/health-canada/services/consumer-product-safety/pesticides-pest-management/registrants-applicants/product-application/use-site-category-daco-tables/definitions-conventional-chemical-pesticides.html .

HC (2016a). “Chemicals management plan risk assessment toolbox,” in Fact sheet series: Topics in risk assessment of substances under the Canadian Environmental Protection Act, 1999 (CEPA 1999). Available at: https://www.canada.ca/en/health-canada/services/chemical-substances/fact-sheets/chemicals-management-plan-risk-assessment-toolbox.html .

HC (2016b). Fact sheet series: Topics in risk assessment of substances under the Canadian Environmental Protection Act, 1999 (CEPA 1999). Available at: https://www.canada.ca/en/health-canada/services/chemical-substances/canada-approach-chemicals/risk-assessment.html .

HC (2016c). Science approach document: Threshold of toxicological concern (TTC)-based approach for certain substances. Available at: https://www.ec.gc.ca/ese-ees/326E3E17-730A-4878-BC25-D07303A4DC13/HC TTC SciAD EN 2017-03-23.pdf .

HC (2016d). Strategic plan 2016-2021. The Health Canada Pest Management Regulatory Agency. Available at: https://www.canada.ca/content/dam/hc-sc/migration/hc-sc/cps-spc/alt_formats/pdf/pubs/pest/corp-plan/strat-plan/strat-plan-eng.pdf .

HC (2017). Acute dermal toxicity study waiver. Science policy note SPN2017-03. The Health Canada Pest Management Regulatory Agency. Available at: https://www.canada.ca/content/dam/hc-sc/documents/services/consumer-product-safety/reports-publications/pesticides-pest-management/policies-guidelines/science-policy-notes/2017/acute-dermal-toxicity-waiver-spn2017-03-eng.pdf .

HC (2018). The rapid screening approach. Available at: https://www.canada.ca/en/health-canada/services/chemical-substances/chemicals-management-plan/initiatives/rapid-screening-approach-chemicals-management-plan.html .

HC (2019). Evaluation of the use of toxicogenomics in risk assessment at Health Canada: An exploratory document on current Health Canada practices for the use of toxicogenomics in risk assessment. Toxicogenomics Working Group, Health Canada. Available at: https://www.canada.ca/en/health-canada/services/publications/science-research-data/evaluation-use-toxicogenomics-risk-assessment.html

HC (2020). 2019-2020 RCC work plan: Pesticides. Available at: https://www.canada.ca/en/health-canada/corporate/about-health-canada/legislation-guidelines/acts-regulations/canada-united-states-regulatory-cooperation-council/work-plan-crop-protection-2019-2020.html .

HC (2021a). A framework for risk assessment and risk management of pest control products. The Health Canada Pest Management Regulatory Agency. Available at: https://www.canada.ca/en/health-canada/services/consumer-product-safety/reports-publications/pesticides-pest-management/policies-guidelines/risk-management-pest-control-products.html .

HC (2021b). Background paper: Evolution of the existing substances risk assessment program under the Canadian environmental protection Act, 1999. CMP science committee. Available at: https://www.canada.ca/en/health-canada/services/chemical-substances/chemicals-management-plan/science-committee/meeting-records-reports/background-paper-evolution-existing-substances-risk-assessment-program-canadian-environmental-protection-act-1999.html .

HC (2021c). Guidance document for the notification and testing of new chemicals and polymers. Available at: https://www.canada.ca/en/environment-climate-change/services/managing-pollution/evaluating-new-substances/chemicals-polymers/guidance-documents/guidelines-notification-testing.html .

HC (2021d). Guidance for developing datasets for conventional pest control product applications. Data codes for parts 1, 2, 3, 4, 5, 6, 7 and 10. Updated 2021. The Health Canada Pest Management Regulatory Agency. Available at: https://www.canada.ca/en/health-canada/services/consumer-product-safety/reports-publications/pesticides-pest-management/policies-guidelines/guidance-developing-applications-data-codes-parts-1-2-3-4-5-6-7-10.html .

HC (2021e). Pest Management Regulatory Agency (PMRA) 2019-2020 annual report. Available at: https://www.canada.ca/en/health-canada/services/consumer-product-safety/reports-publications/pesticides-pest-management/corporate-plans-reports/annual-report-2019-2020.html .

HC (2021f). Science approach document: Bioactivity exposure ratio: Application in priority setting and risk assessment. 1–58. Available at: https://www.canada.ca/en/environment-climate-change/services/evaluating-existing-substances/science-approach-document-bioactivity-exposure-ratio-application-priority-setting-risk-assessment.html .

HC (2022a). “Approaches for addressing data needs in risk assessment,” in Fact sheet series: Topics in risk assessment of substances under the Canadian Environmental Protection Act, 1999 (CEPA 1999). Available at: https://www.canada.ca/en/health-canada/services/chemical-substances/fact-sheets/approaches-data-needs-risk-assessment.html .

HC (2022b). Pest Management Regulatory Agency (PMRA) 2020-2021 annual report. Available at: https://www.canada.ca/en/health-canada/services/consumer-product-safety/reports-publications/pesticides-pest-management/corporate-plans-reports/annual-report-2020-2021.html

HC (2022c). Science approach documents. Available at: https://www.canada.ca/en/health-canada/services/chemical-substances/science-approach-documents.html .

HC (2022d). The risk assessment process for existing substances. Available at: https://www.canada.ca/en/health-canada/services/chemical-substances/canada-approach-chemicals/risk-assessment.html#s3 .

HC (2022e). Chemicals management plan. Available at: https://www.canada.ca/en/health-canada/services/chemical-substances/chemicals-management-plan.html .

HC (2022f). New substances program. Available at: https://www.canada.ca/en/environment-climate-change/services/managing-pollution/evaluating-new-substances.html .

Hilton, G. M., Adcock, C., Akerman, G., Baldassari, J., Battalora, M., Casey, W., et al. (2022). Rethinking chronic toxicity and carcinogenicity assessment for agrochemicals project (ReCAAP): A reporting framework to support a weight of evidence safety assessment without long-term rodent bioassays. Regul. Toxicol. Pharmacol. 131, 105160. doi:10.1016/j.yrtph.2022.105160

HSE (n.d.). Vertebrate testing (toxicology). UK Health and Safety Executive. Available at: https://www.hse.gov.uk/pesticides/pesticides-registration/applicant-guide/vertebrate-testing.htm .

ICCVAM (2018). A strategic roadmap for establishing new approaches to evaluate the safety of chemicals and medical products in the United States. doi:10.22427/NTP-ICCVAM-ROADMAP2018

Ingenbleek, L., Lautz, L. S., Dervilly, G., Darney, K., Astuto, M. C., Tarazona, J., et al. (2020). “Risk assessment of chemicals in food and feed: Principles, applications and future perspectives,” in Environmental pollutant exposures and public health . Editor R. M. Harrison, 1–38. doi:10.1039/9781839160431-00001

ITA (n.d.). U.S.-Canada Regulatory Cooperation Council. International Trade Administration. Available at: https://www.trade.gov/rcc .

KEMI (2021). Data protection for test and study reports. Stockholm: Swedish Chemicals Agency. Available at: https://www.kemi.se/en/pesticides-and-biocides/plant-protection-products/apply-for-authorisation-for-plant-protection-products/data-protection .

Knight, J., Rovida, C., Kreiling, R., Zhu, C., Knudsen, M., and Hartung, T. (2021). Continuing animal tests on cosmetic ingredients for REACH in the EU. ALTEX 38, 653–668. doi:10.14573/ALTEX.2104221

Krewski, D., Andersen, M. E., Tyshenko, M. G., Krishnan, K., Hartung, T., Boekelheide, K., et al. (2020). Toxicity testing in the 21st century: Progress in the past decade and future perspectives. Arch. Toxicol. 94, 1–58. doi:10.1007/s00204-019-02613-4

Ladics, G. S., Price, O., Kelkar, S., Herkimer, S., and Anderson, S. (2021). A weight-of-the-evidence approach for evaluating, in lieu of animal studies, the potential of a novel polysaccharide polymer to produce lung overload. Chem. Res. Toxicol. 34, 1430–1444. doi:10.1021/acs.chemrestox.0c00301

Lanzoni, A., Castoldi, A. F., Kass, G. E., Terron, A., De Seze, G., Bal‐Price, A., et al. (2019). Advancing human health risk assessment. EFSA J. 17, e170712. doi:10.2903/j.efsa.2019.e170712

Linke, B., Mohr, S., Ramsingh, D., and Bhuller, Y. (2017). A retrospective analysis of the added value of 1-year dog studies in pesticide human health risk assessments. Crit. Rev. Toxicol. 47, 581–591. doi:10.1080/10408444.2017.1290044

Luechtefeld, T., Maertens, A., Russo, D. P., Rovida, C., Zhu, H., and Hartung, T. (2016a). Analysis of Draize eye irritation testing and its prediction by mining publicly available 2008-2014 REACH data. ALTEX 33, 123–134. doi:10.14573/ALTEX.1510053

Luechtefeld, T., Maertens, A., Russo, D. P., Rovida, C., Zhu, H., and Hartung, T. (2016b). Analysis of public oral toxicity data from REACH registrations 2008-2014. ALTEX 33, 111–122. doi:10.14573/ALTEX.1510054

Luechtefeld, T., Maertens, A., Russo, D. P., Rovida, C., Zhu, H., and Hartung, T. (2016c). Analysis of publically available skin sensitization data from REACH registrations 2008-2014. ALTEX 33, 135–148. doi:10.14573/altex.1510055

Luechtefeld, T., Marsh, D., Rowlands, C., and Hartung, T. (2018). Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility. Toxicol. Sci. 165, 198–212. doi:10.1093/TOXSCI/KFY152

McGee Hargrove, M., Parr-Dobrzanski, B., Li, L., Constant, S., Wallace, J., Hinderliter, P., et al. (2021). Use of the MucilAir airway assay, a new approach methodology, for evaluating the safety and inhalation risk of agrochemicals. Appl. In Vitro Toxicol. 7, 50–60. doi:10.1089/aivt.2021.0005

NAFTA TWG (2016). NAFTA TWG five-year strategy 2016 – 2021. The North American Free Trade Agreement’s (NAFTA) Technical Working Group (TWG) on pesticides. Available at: https://www.canada.ca/content/dam/hc-sc/migration/hc-sc/cps-spc/alt_formats/pdf/pubs/pest/corp-plan/nafta-alena-2016-2021/nafta-strategy-2016-2021-eng.pdf .

NICEATM (2021). Alternative methods accepted by US agencies. Available at: https://ntp.niehs.nih.gov/whatwestudy/niceatm/accept-methods/index.html?utm_source=direct&utm_medium=prod&utm_campaign=ntpgolinks&utm_term=regaccept .

NRC (2007). Toxicity testing in the 21st century: A vision and a strategy. Washington, DC: National Research Council, The National Academies Press. doi:10.17226/11970

OECD (2004). Test No. 428: Skin absorption: In vitro method. OECD Guidelines for the Testing of Chemicals, Section 4. Paris: OECD Publishing. doi:10.1787/9789264071087-en

OECD (2017). Guidance document on considerations for waiving or bridging of mammalian acute toxicity tests. OECD Series on Testing and Assessment, No. 237. Paris: OECD Publishing. doi:10.1787/9789264274754-en

OECD (2019). Decision of the Council concerning the mutual acceptance of data in the assessment of chemicals. OECD/LEGAL/0194. Available at: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0194 .

OECD (2021a). Guideline No. 497: Defined approaches on skin sensitisation. OECD Guidelines for the Testing of Chemicals, Section 4. Paris: OECD Publishing. doi:10.1787/b92879a4-en

OECD (2021b). Work plan for the test guidelines programme (TGP) - as of July 2021. Available at: https://www.oecd.org/env/ehs/testing/work-plan-test-guidelines-programme-july-2021.pdf .

Paul Friedman, K., Gagne, M., Loo, L.-H., Karamertzanis, P., Netzeva, T., Sobanski, T., et al. (2020). Utility of in vitro bioactivity as a lower bound estimate of in vivo adverse effect levels and in risk-based prioritization. Toxicol. Sci. 173, 202–225. doi:10.1093/toxsci/kfz201

PSCI (n.d.). Webinar series on the use of new approach methodologies (NAMs) in risk assessment. PETA Science Consortium International. Available at: https://www.thepsci.eu/nam-webinars/ .

Republika Slovenija (2022). Application for zonal registration of a plant protection product (FFS) [Vloga za consko registracijo fitofarmacevtskega sredstva (FFS)]. Available at: https://www.gov.si/zbirke/storitve/vloga-za-notifikacijo-vloge-za-consko-registracijo-fitofarmacevtskih-sredstev/ .

Sachana, M., Bal-Price, A., Crofton, K. M., Bennekou, S. H., Shafer, T. J., Behl, M., et al. (2019). International regulatory and scientific effort for improved developmental neurotoxicity testing. Toxicol. Sci. 167, 45–57. doi:10.1093/toxsci/kfy211

Simmons, S. O., and Scarano, L. (2020). Identification of new approach methodologies (NAMs) for placement on the TSCA 4(h)(2)(C) list: A proposed NAM nomination form. Presentation at PSCI Webinar Series on the Use of New Approach Methodologies (NAMs) in Risk Assessment. Available at: https://www.thepsci.eu/wp-content/uploads/2020/09/Simmons_Identification-of-New-Approach-Methodologies-NAMs.pdf .

SZU (n.d.). Information for applicant - vertebrate studies. Vinohrady: The Czech National Institute of Public Health. Available at: http://www.szu.cz/topics/information-for-applicant-vertebrate-studies?lang=2

US EPA, OCSPP, and OPP (2016). US EPA - guidance for waiving acute dermal toxicity tests for pesticide formulations & supporting retrospective analysis. Available at: https://www.epa.gov/pesticide-registration/bridging-or-waiving-data-requirements and https://www.epa.gov/sites/production/files/2016-11/documents/acute-dermal-toxicity-pesticide-formulations_0.pdf .

van der Zalm, A. J., Barroso, J., Browne, P., Casey, W. M., Gordon, J., Henry, T. R., et al. (2022). A framework for establishing scientific confidence in new approach methodologies. Arch. Toxicol. doi:10.1007/s00204-022-03365-4

AOP Adverse Outcome Pathways

APCRA Accelerating the Pace of Chemical Risk Assessment

BraCVAM Brazilian Centre for the Validation of Alternative Methods

CaCVAM Canadian Centre for the Validation of Alternative Methods

CATSAC Chemistry and Acute Toxicology Science Advisory Council

CCA Council of Canadian Academies

CCAAM Canadian Centre for Alternatives to Animal Methods

CEPA Canadian Environmental Protection Act

CFR Code of Federal Regulations

CLP Classification, labelling and packaging of substances and mixtures

CMP Chemicals Management Plan

CPSC US Consumer Product Safety Commission

CR Conditionally required

CSS EU Chemicals Strategy for Sustainability

DACO Data-code

DAF Dermal absorption factor

DNT Developmental neurotoxicity

DSL Domestic Substances List

EC European Commission

ECHA European Chemicals Agency

EFSA European Food Safety Authority

EPA US Environmental Protection Agency

ESRAP Existing Substances Risk Assessment Program

EU European Union

EURL ECVAM EU Reference Laboratory for alternatives to animal testing

FD&C Federal Food, Drug, and Cosmetic Act

FHSA Federal Hazardous Substances Act

FIFRA Federal Insecticide, Fungicide, and Rodenticide Act

FQPA Food Quality Protection Act

GHS Globally Harmonized System of Classification and Labelling of Chemicals

GLP Good Laboratory Practice

HASPOC Hazard and Science Policy Council

HC Health Canada

HCI High-content imaging

HECSB Healthy Environments and Consumer Safety Branch

IATA Integrated Approaches to Testing and Assessment

ICATM International Cooperation on Alternative Test Methods

ICCVAM US Interagency Coordinating Committee for the Validation of Alternative Methods

ITC Interagency Testing Committee

IVCM In vitro comparative metabolism

JaCVAM Japanese Centre for the Validation of Alternative Methods

KoCVAM Korean Centre for the Validation of Alternative Methods

MAD Mutual Acceptance of Data

MEA NFA Microelectrode array network formation array

NAFTA TWG North American Free Trade Agreement Technical Working Group

NAM New Approach Methodologies

NICEATM US National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods

NRC National Research Council

NSN New Substances Notification

NSNR New Substances Notification Regulations

NSP New Substances Program

NTP US National Toxicology Program

OCSPP Office of Chemical Safety and Pollution Prevention

OECD Organisation for Economic Co-operation and Development

OPP Office of Pesticide Programs

OPPT Office of Pollution Prevention and Toxics

ORD Office of Research and Development

PCPA Pest Control Products Act

PMN Pre-Manufacture Notice

PMRA Pest Management Regulatory Agency

PPP Plant Protection Products

PPR EFSA Panel on Plant Protection Products and their Residues

QSAR Quantitative Structure-Activity Relationship

RCC Canada-US Regulatory Co-operation Council

REACH Registration, Evaluation, Authorisation and Restriction of Chemicals

ReCAAP Rethink Chronic toxicity and carcinogenicity Assessment for Agrochemicals

SAP Scientific Advisory Panel

SC Statutes of Canada

SciAD Science Approach Documents

SIEFs Substance Information Exchange Fora

SNUN Significant New Use Notice

SNUR Significant New Use Rule

SOR Statutory Orders and Regulations

TG Test Guidelines

TSAR Tracking System for Alternative Methods

TSCA Toxic Substances Control Act

TTC Threshold of Toxicological Concern

UNECE United Nations Economic Commission for Europe

US United States

USC US Code

USMCA US-Mexico-Canada Agreement

WoE Weight of (scientific) Evidence

Keywords: new approach methodologies (NAMs), in vitro , in silico , risk assessment, toxicity testing, industrial chemicals, pesticides

Citation: Stucki AO, Barton-Maclaren TS, Bhuller Y, Henriquez JE, Henry TR, Hirn C, Miller-Holt J, Nagy EG, Perron MM, Ratzlaff DE, Stedeford TJ and Clippinger AJ (2022) Use of new approach methodologies (NAMs) to meet regulatory requirements for the assessment of industrial chemicals and pesticides for effects on human health. Front. Toxicol. 4:964553. doi: 10.3389/ftox.2022.964553

Received: 08 June 2022; Accepted: 28 July 2022; Published: 01 September 2022.

Copyright © 2022 Stucki, Barton-Maclaren, Bhuller, Henriquez, Henry, Hirn, Miller-Holt, Nagy, Perron, Ratzlaff, Stedeford and Clippinger. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Andreas O. Stucki, [email protected]

This article is part of the Research Topic

Chemical Testing Using New Approach Methodologies (NAMs)

New Approach Methods (NAMs) for Human Health Risk Assessment: Proceedings of a Workshop–in Brief (2022)


Animal testing is often used to assess the potential risks, uses, and environmental impacts of chemicals. New Approach Methods (NAMs) are technologies and approaches (including computational modeling, in vitro assays, and testing using alternative animal species) that can inform hazard and risk assessment decisions without the use of animal testing. Two landmark publications from the National Research Council (NRC) 1 and the National Academies of Sciences, Engineering, and Medicine 2 provided recommendations for developing, improving, and validating NAMs and outlined opportunities to best integrate and use the emerging results in evaluating chemical risk. An ad hoc National Academies committee 3 will build on these efforts by reviewing the variability and relevance of existing mammalian toxicity tests for human health risk assessment to inform the development of approaches for validation and establishing scientific confidence in using NAMs. As part of its work, the committee organized a 1-day virtual public workshop, held on December 9, 2021, 4 to address the potential utility and expectations for the future use of NAMs in risk assessment and to reflect on the challenges to their implementation. The workshop addressed the following critical questions:

  • How are traditional toxicity studies used in informing chemical safety decisions?
  • What do we know about the variability and concordance of traditional mammalian toxicity studies?
  • What are the needs and expectations of different stakeholders?

During the workshop, experts from academia, industry, government, and other organizations discussed current scientific knowledge with regard to traditional toxicity studies and NAMs. Recorded presentations were made available via the event webpage 5 before the workshop and were briefly summarized during each of four panel discussions.

Kate Guyton, National Academies Project Director, and Weihsueh Chiu, Committee Chair, introduced the workshop topics, broader study task, and committee members. They explained that the workshop would provide important information gathering opportunities and inform the committee’s work as it develops a consensus study report.

__________________

1 NRC (National Research Council). 2007. Toxicity testing in the 21st century: A vision and a strategy . Washington, DC: The National Academies Press. https://doi.org/10.17226/11970 .

2 NASEM (National Academies of Sciences, Engineering, and Medicine). 2017. Using 21st century science to improve risk-related evaluations . Washington, DC: The National Academies Press. https://doi.org/10.17226/24635 .

3 See https://www.nationalacademies.org/our-work/variability-and-relevance-of-current-laboratory-mammalian-toxicity-tests-and-expectations-for-new-approach-methods--nams--for-use-in-human-health-risk-assessment .

4 See https://www.nationalacademies.org/event/12-09-2021/new-approach-methods-nams-for-human-health-risk-assessment-workshop-1 .

THE USE OF TRADITIONAL MAMMALIAN TOXICITY STUDIES IN INFORMING CHEMICAL SAFETY DECISIONS

Chiu and Tracey Woodruff, University of California, San Francisco, moderated the first panel addressing the use of traditional mammalian toxicity studies in informing chemical safety decisions.

Thomas Burke, Johns Hopkins University, provided a brief overview of risk assessment, including referencing the 1983 NRC report Risk Assessment in the Federal Government: Managing the Process 6 and the 2008 NRC report Science and Decisions: Advancing Risk Assessment . 7 Burke noted that risk assessment serves as an essential public policy tool to inform decisions about human health. There are multiple types of risk assessment, ranging from screening assessments to comprehensive high-level assessments that present a synthesis of the evidence including human, animal, and other toxicity testing, including NAMs.

Laboratory mammalian toxicity testing has been the cornerstone of risk science, providing a strong correlation with human disease, including cancer. However, animal studies have limitations. For example, animal studies are typically conducted at high doses, which may not be relevant to humans. Although animal studies can examine mixtures, they do not consider other exposures present in the environment that may contribute to adverse effects.

To protect public health, Burke noted, future risk assessments will need to use the full range of available data, draw on innovative methods to integrate diverse data streams, and consider health endpoints that reflect the range of subtle effects and morbidities observed in human populations. Given these factors, there is a need to reframe chemical risk assessment to be more clearly aligned with the public health goal of minimizing exposures associated with disease, he said.

The demand for chemical-specific assessments remains and whole animal testing will likely continue to play an essential role in the support and validation of other evidence streams, in dose–response modeling, and in the development of points of departure (PODs; the dose–response point from which human health risk calculations are made). There is interest in moving away from “bright line” estimates of population risk and shifting toward a public health approach and broader consideration of environmental health risks such as environmental justice, climate change, and cumulative impacts. It is time for a unified approach, Burke added, that considers population vulnerabilities, background exposures, and impacts, allowing for a more complete understanding of the variability of the population risk, as discussed in the 2008 NRC report Science and Decisions . New methods can address gaps and strengthen the tools for mode of action, population variability, exposure assessment, and risk characterization. Burke closed by noting the committee’s tremendous opportunity to advance science and refine the practice and application of risk assessment.

Sharon Munn, Joint Research Centre, discussed hazard identification of endocrine disruptors (i.e., an exogenous substance or mixture that alters functions of the endocrine system, causing adverse health effects), focusing on efforts in the European Union. The European Union developed criteria and guidance for endocrine disruptors in 2018 and a Chemicals Strategy for Sustainability 8 in 2020, which bans endocrine disrupting chemicals from consumer products. The Chemicals Strategy also strengthens information requirements and accelerates the need for data on endocrine disrupting chemicals, including requiring assessment of more chemicals for critical hazards. Munn noted that this will be a significant challenge given the thousands of chemicals to assess within a tight timeframe. The International Programme on Chemical Safety 9 has also developed criteria around endocrine disruptors, including relevance to humans, specificity, data generated according to international standards, and systematic literature review.

To accelerate the assessment of chemicals for endocrine disrupting properties, Munn shared, there is a need for more methods to investigate mechanisms of action and NAMs can play a role in this capacity. Currently, NAMs, in combination with in vivo data, can indicate adverse effects in the intact organism; however, these approaches are not a replacement for animal studies. Munn added that there is a need for data on more sensitive endocrine endpoints. There are opportunities to move forward in advancing NAMs, including ongoing work to develop complex NAMs with 3D tissues, 10 which may contribute to the understanding of adversity in the context of in vitro effects.

Vincent Cogliano, California Environmental Protection Agency, discussed hazard identification and the dose–response of carcinogens, noting that animal studies have served as the backbone of cancer risk assessment. There is usually insufficient human evidence available to support the risk assessment of suspected carcinogens. In the vast majority of cases, animal studies have been used in cancer hazard identification and cancer dose–response assessment.

6 NRC. 1983. Risk assessment in the federal government: Managing the process . Washington, DC: National Academy Press. https://www.nap.edu/catalog/366 .

7 NRC. 2008. Science and decisions: Advancing risk assessment . Washington, DC: The National Academies Press. https://www.nap.edu/catalog/12209 .

8 EC (European Commission). 2020. Chemicals strategy for sustainability towards a toxic-free environment . https://ec.europa.eu/environment/pdf/chemicals/2020/10/Strategy.pdf .

9 See https://inchem.org .

10 NASEM. 2017. Using 21st century science to improve risk-related evaluations . Washington, DC: The National Academies Press. https://www.nap.edu/catalog/24635 .

Animal studies thus provide a basis to identify a chemical as a carcinogen and to take regulatory action to reduce human exposures. Additionally, they can be used to investigate agents not covered by human studies, identify life stage windows of sensitivity and susceptible populations, and quantify differences in risk in evaluating complex exposures.

Cogliano added that the current methods for conducting linear and nonlinear extrapolation to low doses could inform a framework for NAMs. It is also important to utilize NAMs to investigate the effects of chemicals not covered by human studies. While there are high expectations that NAMs will allow for the prediction of human environmental risk, Cogliano noted the importance of not losing the current capability afforded by animal studies, which he termed “actionable evidence.”

David Dorman, North Carolina State University, discussed two National Academies’ reports: A Class Approach to Hazard Assessment of Organohalogen Flame Retardants (2019) 11 and Application of Modern Toxicology Approaches to Predicting Acute Toxicity for Chemical Defense (2015). 12 The authoring committees faced similar risk assessment challenges in these studies: the need to evaluate a large number of chemicals with limited traditional toxicology data or human data to support hazard identification. In both cases, the committees recognized that sole reliance on mammalian studies would limit the ability to screen or classify large numbers of chemicals. The committees viewed NAMs and read-across approaches (i.e., approaches that extrapolate data from one chemical to another) as important tools for predicting toxicity. The committees both recommended a tiered approach in which mammalian studies would be included but conducted for fewer chemicals.

In the 2019 report, the committee was tasked with examining approaches to evaluating 150 different chemicals used as flame retardants; these chemicals are among >20,000 halogenated compounds. To address this challenge, the committee examined the problem through a class and subclass approach, developing 14 defined subclasses of these chemicals based on physiological properties and biology. The literature was surveyed to identify the availability and extent of relevant data to inform the hazard assessment of the subclasses. The committee recommended a tiered approach to testing to fill gaps in data, relying on NAMs (computational modeling, in vitro assays, and testing using alternative animal species) to identify endpoints of interest and anchor chemical(s) for targeted mammalian toxicity studies. In the 2015 report, the Department of Defense (DoD) asked the committee to develop a framework to examine a variety of chemicals to identify “bad actors.” DoD wanted to screen a range of potential chemical-warfare agents to identify chemicals of interest with either high or low toxicity. The committee also developed a framework to aid in this process, recognizing that a one-chemical-at-a-time approach was not feasible. The framework recommended in the 2015 report proposes a variety of data and models as tools to support a prioritization strategy to address the acute toxicity of chemicals, noting that traditional mammalian toxicity studies would be preferred when other approaches were deemed inadequate.

Dorman discussed the inherent policy decisions embedded in the conceptual frameworks offered in both reports, for example, assigning chemicals to categories, the level of evidence required to make reliable decisions, and a reliance on surrogate outcomes. 13

Panel Discussion

Several panelists noted that animal studies are the cornerstone of the current practice of risk assessment. The overwhelming predominance of evidence has come from animal studies, and the scientific community is confident in using them to support decision-making. The panelists addressed the factors and challenges reinforcing the continued use of traditional animal toxicity tests in risk assessment, as well as their limitations, including for predicting risks for all populations. Munn noted that traditional mammalian studies offer data about the intact organism and provide toxicokinetic context. In contrast, Dorman added, NAMs do not allow injury and repair to be examined in a holistic way. Adopting NAMs will require a leap of faith that their results will show more clinical relevance to humans. “It will be important to develop the confidence in NAMs that we have with animal studies, despite their limitations,” he said. Also, studies of interindividual variability in humans indicate that there may be more variability than previously understood, a complicating factor for NAMs, Munn noted.

The panelists commented that NAMs will offer different data than animal studies; researchers will both gain and lose information using these approaches. With NAMs, Dorman noted, cellular-level responses will be examined, and these data can be used to make public health decisions. As such, Dorman shared that it is also important to create policies around NAMs to support the science. Cogliano was hopeful that NAMs could provide information on human

11 NASEM. 2019. A class approach to hazard assessment of organohalogen flame retardants . Washington, DC: The National Academies Press. https://www.nap.edu/catalog/25412 .

12 NASEM. 2015. Application of modern toxicology approaches to predicting acute toxicity for chemical defense . Washington, DC: The National Academies Press. https://www.nap.edu/catalog/21775 .

13 An outcome that can be observed sooner, at lower cost, or less invasively than the true outcome, and that enables valid inferences about the effect of intervention on the true outcome. Staner, L. 2006. Surrogate outcomes in neurology, psychiatry, and psychopharmacology. Dialogues in Clinical Neuroscience 8(3):345–352. https://doi.org/10.31887/DCNS.2006.8.3/lstaner .

variation and may serve as the backbone of future regulations. As Burke stated, NAMs offer a tremendous opportunity to reduce the use of animal testing and better utilize the resources currently available.

Burke added that current approaches focus on developing a point of departure to drive a single estimate to define acceptable risk, which can ignore other aspects of co-exposures and population susceptibility. COVID-19 highlighted the importance of considering exposures in vulnerable populations, and of lowering the impact of exposures on vulnerable populations, rather than having a single acceptable risk number or point of departure. In the future, NAMs could provide an opportunity to increase the understanding of diversity, he added, “to get to the idea of complex exposures in a complex population and design experiments to get at those questions.”

To make NAMs part of actionable evidence, Cogliano noted that there is a need for policy that will reflect a consensus that certain data are indicative of potential health hazards. By integrating NAMs into the decision-making process, there is an opportunity to improve public health, Munn noted. Burke added that regarding NAMs, “once we get to that actionable evidence, we have to be prepared to defend it. And to be very transparent about the inherent limitations of all types of data.”

UNDERSTANDING THE VARIABILITY OF TRADITIONAL MAMMALIAN TOXICITY STUDIES WITH DIFFERENT LEVELS OF COMPLEXITY

Nicole Kleinstreuer, NTP Interagency Center for the Evaluation of Alternative Toxicological Methods, and Holly Davies, Washington State Department of Health, moderated a panel on the variability of traditional mammalian toxicity studies.

David Allen, Integrated Laboratory Systems, LLC, discussed ways to evaluate variability within traditional mammalian toxicity studies and how that information can be used in the development of NAMs. Historically, stakeholders placed considerable confidence in in vivo toxicity testing, and NAMs are typically compared against in vivo test methods to establish confidence in them, Allen noted. However, there is a need to characterize the usefulness and limitations of the in vivo methods themselves. Studies examining the variability of acute endpoints have used both qualitative and quantitative approaches. Allen described analyses of in vivo variability within three guideline test methods: eye and skin irritation in rabbits and acute oral toxicity in rats. Researchers examined the conditional probability of obtaining the same hazard category when the same chemical was tested multiple times. For eye and skin irritation, variability was greatest for mild and moderate irritants, and some corrosive substances were classified as non-irritants when retested. For acute oral toxicity, results varied across categories, with mild toxicity being the most reproducible.

In vivo data have been used to derive thresholds for hazard categorization, precautionary labeling, and performing quantitative risk assessments, Allen noted. Establishing confidence in NAMs could be informed by the consideration of variability in in vivo test methods. In vivo variability is another factor to consider for determining whether NAM concordance with animal data is an appropriate comparison.
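The conditional-probability analysis Allen described can be sketched in a few lines: given repeated hazard classifications of the same chemical, the chance that two independently drawn repeat tests agree on the category can be estimated from the observed category frequencies. This is an illustrative sketch only, assuming pairwise agreement as the concordance measure; the function name and the example categories are hypothetical, not taken from the analyses presented.

```python
from collections import Counter

def repeat_concordance(calls):
    """Probability that two independent repeat tests of the same chemical
    yield the same hazard category, estimated from the observed frequency
    of each category across n repeat tests (pairs drawn without
    replacement)."""
    n = len(calls)
    counts = Counter(calls).values()
    return sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Hypothetical repeat classifications of one mild irritant:
print(repeat_concordance(["Cat 2", "Cat 2", "Cat 2", "No Cat"]))  # → 0.5
```

A chemical classified the same way in only half of repeat-test pairs illustrates why mid-range categories showed the greatest variability in the analyses Allen summarized.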

Katie Paul Friedman, U.S. Environmental Protection Agency (EPA), discussed qualitative and quantitative analyses of variability among traditional mammalian toxicity studies of the same and different design. EPA, as required by the Toxic Substances Control Act and amendments, currently relies on data from animal tests but is moving to replace the current approaches with NAMs that are validated and shown to be equivalent to, or better than, the replaced animal tests. She noted that, quantitatively, variability in traditional animal toxicity tests is a measure of how far values are spread from the average. Qualitatively, variability concerns whether a specific effect is observed or not (i.e., whether false positives or false negatives occur). Friedman discussed efforts using a range of different modeling approaches and the ToxRefDB v2.0 14 dataset to approximate total variance in systemic effect levels. She noted that the estimate of variability (root mean square error) in curated lowest effect levels (LELs) and/or lowest observed adverse effect levels (LOAELs) approaches 0.5 log10-mg/kg/day. The work published by Pham et al. (2020) 15 and Pradeep et al. (2020) 16 supported Friedman’s conclusion that variability in in vivo toxicity studies limits the predictive accuracy of NAMs. The maximal R-squared for a NAM-based predictive model of systemic effect levels may be 55%–73% (i.e., as much as one-third of the variance in these data may not be explainable using study descriptors). Understanding that a prediction of an animal systemic effect level within ±1 log10-mg/kg/day demonstrates a strong NAM is important for acceptance of these approaches for chemical safety assessment. Friedman noted that the construction of NAM-based effect level estimates that offer a level of public health protection equivalent to effect levels derived from animal tests may lead to a reduction in the use of animal testing.
In addition, this may support the identification of cases in which animals may provide scientific value. Friedman commented that existing QSAR (quantitative structure-activity relationship) methods for repeat dose PODs may be informative if the intent is to evaluate a large number of chemicals in a short period of time,

14 EPA (U.S. Environmental Protection Agency). 2020. ToxRefDB version 2.0: Improved utility for predictive and retrospective toxicology analyses . https://catalog.data.gov/dataset/toxrefdb-version-2-0-improved-utility-for-predictive-and-retrospective-toxicology-analyses .

15 Pham, L. L., S. Watford, P. Pradeep, M. T. Martin, R. Thomas, R. Judson, R. W. Setzer, and K. P. Friedman. 2020. Variability in in vivo studies: Defining the upper limit of performance for predictions of systemic effect levels. Computational Toxicology 15:100126. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7787987 .

16 Pradeep, P., K. P. Friedman, and R. Judson. 2020. Structure-based QSAR models to predict repeat dose toxicity points of departure. Computational Toxicology 16:100139. https://doi.org/10.1016/j.comtox.2020.100139 .

and that work is in progress to support best practices for predicting in vivo PODs at the organ level.
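Friedman's point that in vivo variability caps model performance can be illustrated with a back-of-the-envelope calculation: if replicate studies of the same chemical disagree with a root mean square error (RMSE) of about 0.5 log10-mg/kg/day, no model predicting those effect levels can exceed R² = 1 − RMSE²/total variance. A minimal sketch, assuming an illustrative total-variance figure that was not reported at the workshop:

```python
def max_r2(residual_rmse: float, total_variance: float) -> float:
    """Upper bound on R^2 for any model predicting an endpoint whose
    replicate-to-replicate noise has the given RMSE: the irreducible
    error fraction is RMSE^2 / total variance."""
    return 1.0 - residual_rmse ** 2 / total_variance

rmse = 0.5        # ~0.5 log10-mg/kg/day residual error in replicate LELs/LOAELs
total_var = 0.93  # assumed total variance of effect levels (log10 units squared)
print(round(max_r2(rmse, total_var), 2))  # → 0.73
```

Under these assumed numbers the ceiling lands near the upper end of the 55%–73% range Friedman cited; the point of the sketch is only that the bound falls well short of 100% whenever replicate noise is a sizable fraction of the total spread.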

Suzanne Fenton, National Institute of Environmental Health Sciences, discussed variability within and across animal species in traditional mammalian toxicity studies. In addressing environmental contributors to public health issues, healthy rodent models are used in test guideline (TG) studies to identify dose–response, sex- or diet-dependence, developmental time sensitivity, and mechanistic relationships with relevant disease outcomes. Traditional TG studies that are similar in design may use different strains of rats or multiple species, use a dose range that is often much higher than what is relevant to humans, use varied diets, and may not include windows of sensitivity for most developing organ systems. Fenton discussed the sources of variability in animal tests, including those related to diet and water source; sex-specific differences; route of administration; timing of administration; collection of female tissue in various phases of the estrous cycle; strain; and species, which can be determinants of the sensitivity to the substance being tested and the propensity to manifest the disease outcome of interest.

Moving forward, Fenton discussed areas for improvement to address sources of variability, for example, controlling for possible environmental contamination, evaluating both sexes equally, and using the most appropriate species to evaluate a particular outcome. Adding endocrine sensitive endpoints to current TG studies to reduce the number of additional studies is also a potential area for improvement. The testing of unhealthy or stressed animals would be beneficial as most testing is done with healthy animals. Bioaccumulation in the offspring through placental and lactational transfer of test compounds is also a variable to be considered. TG studies currently fall short in a number of areas that are of critical public health interest, including breast development and functional assessment, obesity and metabolic diseases, assessment of placental or pregnancy complications, thyroid disease, hypertension, and allergy, asthma, and autoimmune studies. To generate actionable data specific to issues of public health concern, Fenton noted that non-traditional or non-TG toxicity studies may be necessary. Shorter, more predictive in vivo assays along with genetic data could help to predict disease outcomes. Early life exposures and early life endpoints could also be used to predict later life endpoints.

Malcolm MacLeod, University of Edinburgh, summarized the use of systematic reviews and meta-analyses in examining in vivo data. He noted that, compared with human studies, animal studies have important differences in data structure: the number of subjects is typically smaller, so variance in the sample does not always reflect variance in the population. In addition, the number of studies is usually large, so heterogeneity may be expected between studies rather than within them (e.g., because of the sex or age of the animals, the dose used, or the timing of assessments). Between-study heterogeneity can be estimated in different ways, including tau squared (a measure of the dispersion of the true effect sizes). Thus, if the effect of a chemical is adequately described in a corpus of work, further studies will increase the precision of the tau squared estimate but not change its value. MacLeod noted that these approaches can examine whether there is a biological effect and how large it may be, among other questions, including whether there is publication bias (i.e., when publication depends on the nature and direction of study results 17 ). A meta-analysis can assess issues such as whether a chemical is reliably hazardous across subjects, ages, and ethnicities, and can examine whether a subgroup of subjects is at greater risk of adverse effects from exposure.
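The between-study heterogeneity measure MacLeod described, tau squared, is commonly estimated with the DerSimonian-Laird method. A hedged sketch follows; the example effect sizes and within-study variances are invented for illustration, not drawn from any study he discussed.

```python
def dl_tau_squared(effects, variances):
    """DerSimonian-Laird estimate of tau^2, the between-study variance of
    true effect sizes, from per-study effect estimates and their
    within-study variances."""
    w = [1.0 / v for v in variances]  # inverse-variance weights
    sw = sum(w)
    mean = sum(wi * y for wi, y in zip(w, effects)) / sw
    q = sum(wi * (y - mean) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi * wi for wi in w) / sw
    return max(0.0, (q - (len(effects) - 1)) / c)

# Two invented studies with very different effects -> large tau^2:
print(dl_tau_squared([0.0, 1.0], [0.01, 0.01]))  # → 0.49
```

Consistent with MacLeod's point, adding further studies to a homogeneous corpus sharpens the precision of the tau squared estimate without changing its value, which stays near zero.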

MacLeod discussed the challenges of conducting systematic reviews of animal studies. Given the size of the available literature and data, such reviews can be time consuming and cumbersome, requiring significant effort to extract the data, conduct the analysis, and develop conclusions. However, automation tools (e.g., machine learning and artificial intelligence) can increase the speed with which these analyses can be conducted, MacLeod noted. Through these new tools, there is the possibility of creating systematic online living evidence summaries.

Kleinstreuer and Davies led a discussion around the variability of traditional mammalian toxicity studies. Allen observed that regulators have indicated that they have strong confidence in in vivo studies to inform specific hazard identification or comprehensive risk assessments. Thus, new approaches to develop these data have been held to a tremendously high standard, including the requirement to repeat the identical outcome of an in vivo method.

Friedman, agreeing with Allen’s comments, discussed her own efforts to predict the point of departure in repeat dose studies, noting that a best-case scenario for replacing repeat dose studies may be to consider what the scientific community wants to accomplish from these studies. Friedman noted that a quantitative evaluation that determines a point of departure, LD50, or dose estimate may be needed. She added that if researchers have a specific hazard to predict, there is a need to carefully consider what is identified as the reference chemical.

17 Royle, P., and N. Waugh. 2003. Literature searching for clinical and cost-effectiveness studies used in health technology assessment reports carried out for the National Institute for Clinical Excellence appraisal system. Health Technology Assessment 7(34):iii, ix–x, 1–51. https://pubmed.ncbi.nlm.nih.gov/14609481 .

MacLeod added to the discussion of challenges around variability, specifically, highlighting the experimental replication efforts of results in animal studies. In the context of NAMs, he noted that there is a need to be careful about the current state of in vitro research, as systematic reviews have highlighted significant bias and other issues in the animal studies.

Fenton added that when discussing communities of color, variability between individuals and groups of people is relevant; variability in animal studies based on genetic differences is also expected, just as would be expected in human communities. This translates to how variability is considered in NAMs, especially when results are derived from a single cell type. Fenton noted that, currently, in vitro and in vivo NAMs do a poor job of addressing the complexity of issues faced by communities of color and consideration across a broader dataset may be informative of key issues around race, ethnicity, and diversity. It will be necessary for NAMs to be as good as animal studies, Fenton added.

MacLeod noted that the consistency of effect and heterogeneity between studies are important aspects of variability. The latter is a measure of the difference in the potential biological responses; it is necessary to have a way of embracing heterogeneity, he said.

Davies asked the panelists to discuss the strengths and limitations of traditional animal toxicity tests with regard to reliability and quantitative reproducibility. Allen noted that the largest area of variability lies in the subjective qualitative endpoints scored by individual technicians, along with differences in diet, husbandry, or strain that might limit the analysis. Friedman added that repeat dose studies face similar limitations in pathological scoring and study design; in these studies, there is a need to approximate the extent of variability. Additionally, many findings concern observable adverse effects for which the underlying pathway is unclear. Fenton added that limitations of animal toxicity tests include a lack of best practices around protocols and data reporting, problems that will hinder progress in this area.

Davies asked the panelists to comment on how NAMs can help assess chemicals with relatively little or no data. Friedman said that it may be particularly useful to apply NAMs to data-poor substances with little to no in vivo data; in these cases, one can approximate the uncertainty in a NAM-based point of departure. She also noted that the estimated variability in in vivo data can help express NAM-based values as reasonable ranges, allowing a move away from point estimates. Fenton added that data from NAMs can be compared with what is already known from a rich dataset, which could, for example, enable the prioritization of emerging compounds within a class.

The moderators also asked about the extent to which using the lowest-observed-adverse-effect level (LOAEL) or lowest-effect level (LEL), rather than a measure such as the benchmark dose, contributes to observed variability in results. The panelists discussed the role of NAMs in informing uncertainty factors. As Friedman noted, NAMs can further support an examination of vulnerability in the human population, thus informing uncertainty factors. NAMs allow scientists to inform multiple parts of the uncertainty factor, which is critical, she noted.
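The benchmark dose mentioned here is obtained by fitting a dose-response model and inverting it to find the dose producing a predefined benchmark response (e.g., a BMD10 for a 10% response), rather than picking the lowest tested dose with an observed effect. The sketch below illustrates the inversion step only; the Hill model parameters are invented for illustration and are not from the workshop.

```python
def hill(dose, top=1.0, ec50=10.0, n=2.0):
    """Hypothetical Hill dose-response curve (fraction of maximal effect)."""
    return top * dose ** n / (ec50 ** n + dose ** n)

def benchmark_dose(bmr=0.10, lo=0.0, hi=1000.0, tol=1e-6):
    """Invert the monotone curve by bisection: dose at which response == bmr."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if hill(mid) < bmr:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(round(benchmark_dose(0.10), 2))  # BMD10 for these parameters → 3.33
```

In practice the model would be fit to experimental data and a lower confidence bound (BMDL) taken, but the inversion shown is the core of the calculation.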

Kleinstreuer asked the panelists to discuss the challenges of using traditional mammalian toxicity studies for evaluating NAMs. Allen noted that a significant challenge is demonstrating and providing the information from which adequate confidence in NAMs can be developed; it may be appropriate to examine a combination of NAMs instead. Friedman added that NAMs offer an opportunity to articulate variability to a much greater extent, including how much variability is driving uncertainty. Another issue is the level of confidence in the methodology. If there is strong confidence that the methodology is reproducible rather than highly variable, and that the cellular-level outcomes are relevant to humans, NAMs could help researchers shift away from animal studies. Fenton noted that integrating the results from different approaches (shorter-term animal studies, human studies, and one or more NAMs) would enhance the representation and understanding of variability and increase confidence in, and policy application of, NAMs. MacLeod noted that one approach is to conduct experiments with different NAMs across various laboratories to examine when heterogeneity is saturated (by calculating tau-squared, an estimate of between-study variance). From this effort, one can identify a core set of data from a collection of NAMs for further evaluation.
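The tau-squared that MacLeod refers to is the standard between-study variance estimate from random-effects meta-analysis. A minimal sketch using the common DerSimonian-Laird estimator, applied to made-up effect estimates from four hypothetical laboratories running the same assay, might look like this:

```python
def tau_squared_dl(effects, variances):
    """DerSimonian-Laird estimate of between-study variance (tau-squared).

    effects   : per-laboratory effect estimates for the same NAM endpoint
    variances : corresponding within-study (sampling) variances
    """
    w = [1.0 / v for v in variances]                 # inverse-variance weights
    mu = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - df) / c)                    # truncated at zero

# Hypothetical effect estimates and variances from four labs
print(round(tau_squared_dl([1.2, 0.9, 1.5, 1.1],
                           [0.04, 0.05, 0.06, 0.04]), 4))  # → 0.0063
```

A tau-squared near zero suggests the labs agree to within sampling error; watching it stabilize as laboratories are added is one way to judge when heterogeneity has "saturated."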

EXAMINING THE CONCORDANCE OF TRADITIONAL MAMMALIAN TOXICITY STUDIES WITH HUMANS

Patience Browne, Organisation for Economic Co-operation and Development, and Nancy Lane, University of California, Davis, moderated a session on understanding the concordance of traditional mammalian toxicity studies with humans.

Thomas Hartung, Johns Hopkins University, discussed the concordance of traditional mammalian toxicity studies with clinical human outcomes. He began by describing studies of pharmaceuticals, which are considered the best toxicological assessments. Yet many fail; in fact, as described in his presentation, 18 the vast majority fail at the clinical trials phase of testing. This poses a significant challenge for drug development, in terms of both time and cost.

18 See https://www.nationalacademies.org/event/12-09-2021/new-approach-methods-nams-for-human-health-risk-assessmentworkshop-1 .

Hartung discussed the comparison of human versus animal bioavailability of drugs in studies; the results vary significantly and do not seem to correlate with species, highlighting a key challenge in conducting quantitative risk assessments. Regarding the validity of animal tests, Hartung discussed how difficult it has been for researchers to face criticism about these long-standing methods, limiting the incentive to question and enhance current tools and approaches. The validation of these tests is also expensive and time consuming.

Hartung discussed the process for the formal validation of tests, which can provide regulators the evidence they need to determine whether they can trust a new or alternative method. He and other researchers developed a database of more than 10,000 chemicals and, through a search of the literature, also identified 800,000 related studies of these chemicals. The results indicated that the nine most frequent toxicity tests consume 57% of the animals used in toxicology. 19 In an examination of studies of chemicals considered reproductive toxicants, there was 60% inter-species correlation. Hartung said that he strongly feels that “by acknowledging and quantifying the limitation of animal tests, we can open up the door for new methods.” He noted, for example, that a review of side effects observed in clinical trials conducted by the pharmaceutical industry indicated that rodent studies alone predict only about 43% of human side effects. 20

Joshua Robinson, University of California, San Francisco, presented on the concordance between animal and human toxicology. Animal models provide a valuable system for studying many aspects of human development. In fact, he noted that there is high similarity between rat and human embryos during early organogenesis, whereby the morphological and underlying molecular changes are conserved across the neurulation/early organogenesis period.

Robinson also discussed the challenges in examining animal toxicity studies, including the need to carefully consider the choice of animal model, differences in sensitivity and metabolism among species, and selection of the appropriate exposure route and vehicle. Other considerations include whether the chemical target is present within the test species, high to low dose extrapolation approaches, and uncertainty factors. Regarding studies in humans, while there are many advantages, particularly relevance, there are challenges, including difficulty addressing cause and effect, assessing genetically diverse populations, ethical challenges, the need to interpret low dose effects, and the cost of these studies.

As discussed in Olson et al. (2000), 21 while there are many human toxicants with concordance between animal and human data, this is dependent on the species tested. Also, not all endpoints are equally concordant, as this can vary according to the human target organ. The best concordance identified by Olson et al. (2000) 22 was for hematological, gastrointestinal, and cardiovascular endpoints, while musculoskeletal, respiratory, and other endpoints were found to have poor concordance. 23 Environmental chemicals usually require several studies to determine concordance and causation, which can take years to evaluate.

While there are numerous examples of known toxic agents in animals and humans, direct comparisons are difficult and take years to establish causal evidence, Robinson added. Studies examining concordance between animal and human toxicity are lacking. However, the few studies that have been conducted suggest positive concordance (around 70%). Questions remain regarding overall predictive value, species selection, and appropriate endpoints.
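Concordance figures such as the roughly 70% cited above are typically derived from 2x2 agreement tables that cross-tabulate animal and human toxicity calls. The counts in this sketch are hypothetical, chosen only to show how the usual metrics fall out of such a table:

```python
def concordance_metrics(tp, fp, fn, tn):
    """2x2 agreement metrics for animal-versus-human toxicity calls.

    tp: toxic in both; fp: toxic in animals only;
    fn: toxic in humans only; tn: toxic in neither.
    """
    return {
        "sensitivity": tp / (tp + fn),       # share of human toxicants flagged
        "ppv": tp / (tp + fp),               # animal positives confirmed in humans
        "concordance": (tp + tn) / (tp + fp + fn + tn),  # overall agreement
    }

# Hypothetical counts, chosen only to illustrate the calculation
metrics = concordance_metrics(tp=43, fp=22, fn=57, tn=78)
print({k: round(v, 2) for k, v in metrics.items()})
```

Note that overall concordance can look reassuring even when sensitivity (the metric most relevant to protecting humans) is low, which is one reason the panelists stress endpoint- and species-specific comparisons.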

Dorman presented the findings from the 2017 National Academies report Application of Systematic Review Methods in an Overall Strategy for Evaluating Low-Dose Toxicity from Endocrine Active Chemicals . 24 The committee responsible for the report was tasked with developing a strategy to evaluate the evidence of adverse human health effects from low doses of exposure to chemicals that can disrupt the endocrine system. It was also tasked with conducting systematic reviews of animal and human toxicology data for phthalates and polybrominated diphenyl ethers (PBDEs).

Dorman provided a brief overview of the committee’s examination of phthalates, which are present in a wide range of consumer products and ubiquitous in the environment. The committee focused on male reproductive effects related to phthalates based on in utero exposure, including changes in anogenital distance (AGD), incidence of hypospadias, and lower fetal testosterone concentrations. A systematic review examined in utero exposure and included 13 human studies and 70 animal studies. In its examination of animal studies of di(2-ethylhexyl) phthalate (DEHP) exposure, the committee was able to identify an association between exposure in utero and changes in AGD, but noted discordant results based on the types of rats used in the study.

19 Smirnova, L., N. Kleinstreuer, R. Corvi, A. Levchenko, S. C. Fitzpatrick, and T. Hartung. 2018. 3S - Systematic, systemic, and systems biology and toxicology. ALTEX: Alternatives to Animal Experimentation 35(2):139–162. doi: 10.14573/altex.1804051.

20 Olson, H., G. Betton, D. Robinson, K. Thomas, A. Monro, G. Kolaja, P. Lilly, J. Sanders, G. Sipes, W. Bracken, M. Dorato, K. Van Deun, P. Smith, B. Berger, and A. Heller. 2000. Concordance of the toxicity of pharmaceuticals in humans and in animals. Regulatory Toxicology and Pharmacology 32(1):56–67.

23 Tamaki, C., T. Nagayama, M. Hashiba, M. Fujiyoshi, M. Hizue, H. Kodaira, M. Nishida, K. Suzuki, Y. Takashima, Y. Ogino, D. Yasugi, Y. Yoneta, S. Hisada, T. Ohkura, and K. Nakamura. 2013. Potentials and limitations of nonclinical safety assessment for predicting clinical adverse drug reactions: Correlation analysis of 142 approved drugs in Japan. The Journal of Toxicological Sciences 38:581–598.

24 NASEM. 2017. Application of systematic review methods in an overall strategy for evaluating low-dose toxicity from endocrine active chemicals . Washington, DC: The National Academies Press. https://www.nap.edu/catalog/24758 .

A wide range of endpoints for DEHP were considered. The committee had moderate to high confidence in the studies examining AGD, enabling it to make a final hazard call from this endpoint. For the other outcomes (incidence of hypospadias and lower fetal testosterone), the human data were found to be inadequate to draw conclusions. Regarding concordance between phthalates and AGD, Dorman noted that current testing methods can identify a hazard that is presumed to be of concern to humans but might not be able to accurately predict the exposures at which humans are affected.

The 2017 report committee’s evaluation of PBDEs focused on outcomes related to changes in spontaneous motor activity and impaired performance in learning and memory tests in rodents and reduced IQ in children. A systematic review of the literature examined nonhuman mammals and humans. Across studies in rats and mice, the committee noted a significant relationship between PBDE exposure and changes in the latency to complete the last trial of a Morris water maze. Human studies provide a moderate level of evidence that PBDEs are associated with decrements in human IQ, but there was limited evidence to support an association between PBDEs and attention-deficit/hyperactivity disorder (ADHD) in children.

Regarding concordance, Dorman noted that the integration of human data that evaluated measures of IQ and ADHD with animal studies that examined learning, memory, and attention was challenging given the varying endpoints. Additionally, the animal studies used different tests of learning and memory. The test methods and data analyses also often differed between studies and exposures were lower in the human studies. Dorman noted that ultimately the committee found that current mammalian-based testing paradigms could detect a hazard (change in learning and memory) that is presumed to be a concern in humans; however, a comparison of doses between the animal and human studies was challenging and imprecise.

Lane and Browne moderated a panel discussion following the presentations. Browne asked the panelists to discuss the challenges and limitations of the use of animal models. Hartung highlighted several challenges, including the fact that the animals used in these studies are often healthy and pathogen-free, and thus not representative of the human patient population. Animal studies are also often small, with 10–15 animals per study, which is another concern.

Dorman added that heterogeneity among mammals is another issue, along with genetics and the complexity of the diets fed to animals under study. From a regulatory perspective, there is a focus on using uncertainty factors to address variability. While this approach has been useful, as researchers learn more about variability there is a need to revisit how they consider uncertainty. Another challenge Dorman noted is the disconnect between regulatory studies, which follow stringent guidelines with prescribed outcomes, and studies in academic labs, where outcomes can vary significantly. Reproducing these studies in different labs is a challenge.

Browne asked the panelists to discuss differences that arise between health effects observed in animal versus human studies. There are differences that have been observed, Fenton said, citing the example of per- and polyfluoroalkyl substances (PFAS), where cholesterol and thyroid hormone outcomes associated with exposure differ between humans and animals. In fact, the observed effects may go in opposite directions in humans versus animals. Fenton noted that these differences may not necessarily reflect species differences in toxic response; rather, they may highlight that a sensitive pathway is being affected.

Several endpoints are not often observed in rodents, Dorman said, for example, more subtle neurological endpoints. However, it is important to consider how to interpret concordance in light of how model exposures differ from human exposures. Robinson echoed these challenges, noting that studies comparing rodent and human stem cells show drastic differences in sensitivity at the cellular level. If researchers consider these different factors and look at the whole animal, screen the species, and compare with humans, they are going to have difficulty making comparisons, he added.

Dorman noted that toxicologic studies were historically designed with the goal of preventing catastrophic events and have demonstrated some success. However, their use has become more challenging as researchers develop a more nuanced set of outcomes. He posed the question, are NAMs to be used to prevent a catastrophe or examine a nuanced outcome? Robinson and others discussed the opportunities to incorporate NAMs into technologies that can examine global molecular changes. Dorman and Fenton noted that NAMs could be helpful in improving predictions, especially in the context of analyses across and within large datasets.

Lane asked the panelists to reflect on what biologic factors would be helpful to consider in the qualitative or quantitative extrapolation of results from rodents and other animal models. Fenton responded that NAMs could support the analyses of specific chemicals within a larger group of chemicals (e.g., PFAS), informing health effects and risk assessments going forward. Dorman added that the current committee will likely consider the decision context in which NAMs are being used: are they meant to replace or augment mammalian data? The decision context will drive the answer to that question. The panelists suggested that there is also a need to consider new and better ways to analyze data from experiments; for example, much has been learned in the past three decades about the extent to which variance can affect different endpoints.

FINAL PANEL REFLECTIONS

Kristi Pullen Fedinick, Natural Resources Defense Council, and Corie Ellison, The Procter & Gamble Company, moderated a closing panel discussion focusing on the key reflections from the workshop.

Strengths and Limitations of Traditional Animal Studies

The panelists discussed the strengths and limitations of using animal studies as a gold standard for toxicity testing. Helen Goeden, Minnesota Department of Health, noted that animal data are essential for her work; if the animal data are not adequate to support the derivation of risk-based criteria, it is not possible to take regulatory action. She added that considering multiple durations of exposure is important, as historically there has been a focus on chronic exposure.

Reza Rasoulpour, Corteva, added that animal models offer strengths for discovering adverse effects, but his research has identified positive outcomes in animal species for a metabolite that would never be found in humans. This highlights the importance of toxicokinetic information, a need also supported by Munn. Goeden added that toxicokinetics can change by life stage, gender, and race, a significant gap in current knowledge.

Utilizing NAMs for Human Health Risk Assessment

The panelists also discussed the role NAMs can play in strengthening risk assessment and identified key challenges to advancing these approaches. For example, Goeden noted that there is a dearth of data for many chemicals, and the hope is to leverage NAMs to address this gap, including around toxicokinetic data. Munn supported this point, noting that animal data cannot be collected on all chemicals of interest, so there is a need to exploit NAMs to this end. Rashmi Joglekar, Earthjustice, added that NAMs will be a critical tool in prioritizing chemicals, moving chemicals higher on the list to be evaluated for risk later under other statutes. Goeden agreed that while NAMs will help with prioritization, a quantitative assessment is often still needed and has typically relied on animal data.

Rasoulpour discussed the use of animal studies and NAMs in novel product development, noting that his company has a predictive safety center that uses NAMs to support data on metabolomic endpoints, toxicogenomic endpoints, and toxicokinetics to identify biological points of departure. The center is also able to conduct a full battery of regulatory assays. He has observed through his work that there is significant value in being able to share data used in regulatory decisions more broadly. If NAMs are used to inform future decision-making, analyses using big data techniques would be a key focus.

The panelists also discussed concerns with the use of NAMs. Goeden noted that NAMs may not be able to assess sensitive endpoints. There are also concerns about the variability and instability of cell lines over time, which may affect the comparability of results. Considerations around model tissue sources and cultures must also be included, along with limitations in evaluating certain types of chemicals and particular outcomes (e.g., those related to the thyroid and neurological effects). Goeden added that there is a need for guidance on how to interpret the results of NAMs and how to apply that information. Joglekar noted that animal studies play a critical role in examining complex health outcomes and that it will be difficult to replace this with NAMs. Current NAM tests in the neurotoxicity space can complement current animal tests, but cannot replace gold-standard testing protocols or establish the safety of chemicals. A challenge as researchers incorporate NAMs is to determine whether they are defensible and actionable; this may be achieved through demonstration studies. Rasoulpour noted the need to consider differing levels of sensitivity, understanding, and confidence in some of the new science used to support decisions in countries around the world. Bringing different types of datasets into these decisions can be challenging and will benefit from harmonization by the scientific community.

Understanding Susceptibility and Protecting Underserved and Disadvantaged Populations

Ellison asked the panelists whether current toxicity testing and biomonitoring studies are falling short in protecting communities of color and underserved and disadvantaged populations. The panelists noted that the cumulative impact of mixtures is a key concern in understanding the risks in communities of color; however, little progress has been made in this area with respect to toxicity testing. Munn and others agreed that NAMs may be useful in strengthening approaches to the evaluation of multiple chemicals. Joglekar noted the shortcomings of current animal models in protecting susceptible populations and supported the need to broaden testing to examine mixtures of chemicals and address non-chemical stressors. There are opportunities to determine the relative contribution of each of these stressors to adverse outcomes. Addressing human variability is another critical aspect highlighted by Joglekar and others, one that cannot be addressed through cell lines derived from single individuals. Rasoulpour added that studies of exposures that affect underserved and disadvantaged populations are complicated by the environmental and socioeconomic stressors faced by these communities. Understanding this complexity is critical to inform interventions, he said. Open science, transparency, and big data can be utilized to make progress on this issue. Dorman noted that toxicology researchers currently do not design studies with biomonitoring in mind, a critical gap that can limit the applicability of findings in real-world settings. Fedinick also queried the panelists on the role of uncertainty factors in discussions around protecting susceptible populations. One opportunity noted by the panelists is to shift risk assessment to probabilistic methods to quantify risk.
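The shift to probabilistic risk methods mentioned by the panelists can be sketched as a Monte Carlo exercise in which fixed 10x uncertainty factors are replaced by distributions and a protective percentile of the resulting dose is reported. The lognormal medians and spreads below are illustrative assumptions only, not recommended regulatory values.

```python
import math
import random

random.seed(0)  # reproducible illustration

def probabilistic_rfd(pod_mg_kg=50.0, n=10_000):
    """Monte Carlo sketch of a probabilistic reference dose.

    Instead of dividing a point of departure (POD) by fixed 10x uncertainty
    factors, sample interspecies and intraspecies factors from lognormal
    distributions and take a protective percentile of the resulting doses.
    """
    doses = []
    for _ in range(n):
        interspecies = random.lognormvariate(math.log(3.0), 0.5)
        intraspecies = random.lognormvariate(math.log(3.0), 0.5)
        doses.append(pod_mg_kg / (interspecies * intraspecies))
    doses.sort()
    return doses[int(0.05 * n)]  # 5th percentile as a protective value

print(round(probabilistic_rfd(), 3))
```

The appeal of this framing is that it makes the protection goal explicit (here, the 5th percentile) rather than burying it inside default factors.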

Following the final panel discussion, the public offered its comments to the committee and workshop panelists. These comments are not summarized here, but a recording of this public comment session can be found online. 25

25 See https://www.nationalacademies.org/event/12-09-2021/new-approach-methods-nams-for-human-health-risk-assessment-workshop-1 .

DISCLAIMER: This Proceedings of a Workshop—in Brief was prepared by Jennifer Saunders as a factual summary of what occurred at the workshop. The statements made are those of the rapporteur or individual workshop participants and do not necessarily represent the views of all workshop participants; the planning committee; the workshop participants’ institutions; or the National Academies of Sciences, Engineering, and Medicine.

COMMITTEE ON VARIABILITY AND RELEVANCE OF CURRENT LABORATORY MAMMALIAN TOXICITY TESTS AND EXPECTATIONS FOR NEW APPROACH METHODS (NAMs) FOR USE IN HUMAN HEALTH RISK ASSESSMENT

WEIHSUEH A. CHIU ( Chair ), Professor, Department of Veterinary Integrative Biosciences, Texas A&M University; KIM BOEKELHEIDE, Professor (Research) and Professor (Emeritus), Department of Pathology and Laboratory Medicine, Brown University School of Medicine; PATIENCE BROWNE, Hazard Assessment and Pesticide Programmes, Environmental, Health, and Safety Division, Organisation for Economic Co-operation and Development; HOLLY DAVIES, Senior Toxicologist, Washington State Department of Health; CORIE A. ELLISON, Group Scientist, The Procter & Gamble Company; MARIE C. FORTIN, Associate Director of Toxicology, Jazz Pharmaceuticals; NICOLE C. KLEINSTREUER, Acting Director, NTP Interagency Center for the Evaluation of Alternative Toxicological Methods; NANCY E. LANE, Endowed Professor of Medicine, Rheumatology, and Aging Research, Director for the Center for Musculoskeletal Health, University of California, Davis; HEATHER B. PATISAUL, Associate Dean for Research, College of Sciences, North Carolina State University; ELIJAH J. PETERSEN, Staff Scientist, National Institute of Standards and Technology; KRISTI PULLEN FEDINICK, Chief Science Officer, Natural Resources Defense Council; MARTYN T. SMITH, Professor of Toxicology, Kaiser Professor of Cancer Epidemiology, School of Public Health, University of California, Berkeley; ROBYN L. TANGUAY, University Distinguished Professor, Oregon State University; CHRISTOPHER VULPE, Professor, University of Florida, Gainesville; TRACEY J. WOODRUFF, Alison S. Carlson Endowed Professor, Department of Obstetrics, Gynecology, and Reproductive Sciences, University of California, San Francisco; and JOSEPH C. WU, Director, Stanford Cardiovascular Institute, Simon H. Stertzer, MD, Professor of Medicine and Radiology, Stanford University.

STAFF: KATHRYN GUYTON, Study Director, Board on Environmental Studies and Toxicology (BEST); CORRINE LUTZ, Senior Program Officer, Institute for Laboratory Animal Research; and TAMARA N. DAWSON, Program Coordinator, BEST.

REVIEWERS: To ensure that it meets institutional standards for quality and objectivity, this Proceedings of a Workshop—in Brief was reviewed by Vincent Cogliano, California Environmental Protection Agency, and Suzanne Fenton, National Toxicology Program. We also thank staff members David Butler and Jennifer Cohen for reading and providing helpful comments on this manuscript.

SPONSORS: This workshop was supported by the U.S. Environmental Protection Agency.

Suggested citation: National Academies of Sciences, Engineering, and Medicine. 2022. New Approach Methods (NAMs) for Human Health Risk Assessment: Proceedings of a Workshop—in Brief . Washington, DC: The National Academies Press. https://doi.org/10.17226/26496 .

Division on Earth and Life Studies


Copyright 2022 by the National Academy of Sciences. All rights reserved.

Animal testing is often used to assess the potential risks, uses, and environmental impacts of chemicals. New Approach Methods (NAMs) are technologies and approaches (including computational modeling, in vitro assays, and testing using alternative animal species) that can inform hazard and risk assessment decisions without the use of animal testing.

The National Academies of Sciences, Engineering, and Medicine convened a 1-day virtual public workshop on December 9, 2021, to address the potential utility of and expectations for the future use of NAMs in risk assessment and to reflect on the challenges to their implementation. The workshop focused on how traditional toxicity studies are used to inform chemical safety decisions and on the variability and concordance of traditional mammalian toxicity studies. This publication summarizes the presentations and discussions of the workshop.



New Approach Methods Work Plan

EPA's New Approach Methods (NAMs) Work Plan was created to prioritize agency efforts and resources toward activities that will reduce the use of vertebrate animal testing while continuing to protect human health and the environment. The first Work Plan was released in June 2020 and an updated Work Plan was released in December 2021. 

  • EPA NAM Work Plan December 2021 (pdf) (1.7 MB, 12/02/2021, 600/X-21/209)
  • Archived EPA NAM Work Plan Released June 2020 (pdf) (576.4 KB, 6/23/2020, 615B20001)

These Work Plans describe EPA's efforts to reduce vertebrate animal testing using New Approach Methods.



New approach methodologies (NAMs) for human-relevant biokinetics predictions. Meeting the paradigm shift in toxicology towards an animal-free chemical risk assessment

Affiliations.

  • 1 WFSR Wageningen Food Safety Research, Wageningen, The Netherlands.
  • 2 Division of Toxicology, Wageningen University and Research, Wageningen, The Netherlands.
  • 3 Institute for Risk Assessment Sciences, Utrecht University, Utrecht, The Netherlands.
  • 4 European Commission Joint Research Centre, Ispra, Italy.
  • 5 RIVM - The National Institute for Public Health and the Environment, Bilthoven, The Netherlands.
  • 6 Section of Pharmacogenetics, Department of Physiology and Pharmacology, Karolinska Institutet, Stockholm, Sweden.
  • 7 Division of Molecular and Computational Toxicology, Dept. Of Chemistry and Pharmaceutical Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands.
  • 8 Certara UK Ltd, Simcyp Division, Sheffield, UK.
  • 9 Division of Pharmacology, Utrecht Institute for Pharmaceutical Sciences, Utrecht University, Utrecht, The Netherlands.
  • 10 Shell Health, Shell International B.V., The Hague, The Netherlands.
  • 11 Biomedical Engineering, Cornell University, Department of Biomedical Engineering, Ithaca, NY, USA.
  • 12 Unilever, Colworth Science Park, Bedfordshire, UK.
  • 13 Department of Clinical Sciences of Companion Animals, Faculty of Veterinary Medicine, Utrecht University, The Netherlands.
  • 14 Van Hall Larenstein University of Applied Sciences, Leeuwarden, The Netherlands.
  • 15 University of Twente, Department of Applied Stem Cell Technologies, Enschede, The Netherlands.
  • 16 Department of In vitro Toxicology and Dermato-Cosmetology, Vrije Universiteit Brussel, Brussels, Belgium.
  • 17 Division of Drug Discovery and Safety, Leiden Academic Centre for Drug Research (LACDR)/Leiden University, Leiden, The Netherlands.
  • PMID: 32521035
  • DOI: 10.14573/altex.2003242

For almost fifteen years, the limited availability and regulatory acceptance of new approach methodologies (NAMs) for assessing absorption, distribution, metabolism and excretion (ADME/biokinetics) have been a bottleneck in chemical risk evaluations. To advance the field, a team of 24 experts from science, industry, and regulatory bodies, including new-generation toxicologists, met at the Lorentz Centre in Leiden, The Netherlands. A range of possibilities for the use of NAMs for biokinetics in risk evaluations were formulated (for example, to define species differences and human variation or to perform quantitative in vitro-in vivo extrapolations). To increase the regulatory use and acceptance of NAMs for these ADME considerations within risk evaluations, the development of test guidelines (protocols) and of overarching guidance documents is considered a critical step. To this end, the participants identified a need for an expert group on biokinetics within the Organisation for Economic Co-operation and Development (OECD) to supervise this process. The workshop discussions revealed that method development is still required, particularly to adequately capture transporter-mediated processes and to obtain cell models that reflect the physiology and kinetic characteristics of relevant organs. Developments in the fields of stem cells, organoids and organ-on-a-chip models provide promising tools to meet these research needs in the future.

Keywords: PB(P)K; QIVIVE; biokinetics; in silico; in vitro; next-generation risk evaluations.

Publication types

  • Consensus Development Conference

MeSH terms

  • Animal Testing Alternatives (methods; standards)
  • Hazardous Substances (pharmacokinetics; toxicity)
  • Risk Assessment
  • Toxicology (methods; standards)
Open access | Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

  • Lawrence Mbuagbaw (ORCID: orcid.org/0000-0001-5855-5461) 1, 2, 3
  • Daeria O. Lawson 1
  • Livia Puljak 4
  • David B. Allison 5
  • Lehana Thabane 1, 2, 6, 7, 8

BMC Medical Research Methodology, volume 20, Article number: 226 (2020)


Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.


The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Figure 1: Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p -values in baseline tables in randomized trial published in high impact journals [ 26 ]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese Journals [ 27 ]; and Hopewell et al. describe the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been at the cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
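The sampling strategies above can be sketched in a few lines of code. The following is a minimal illustration, not a prescribed workflow; the strata labels and record identifiers are hypothetical, loosely modeled on the Cochrane vs. non-Cochrane comparison in the Kahale et al. example:

```python
import random

# Hypothetical sampling frame: report identifiers grouped into strata
# (e.g. Cochrane vs. non-Cochrane reviews); real frames would come from
# a database search.
frame = {
    "cochrane": [f"cochrane_{i}" for i in range(200)],
    "non_cochrane": [f"non_cochrane_{i}" for i in range(60)],
}

def stratified_sample(frame, n_per_stratum, seed=2020):
    """Draw an equal-sized simple random sample from each stratum."""
    rng = random.Random(seed)  # fixed seed makes the selection reproducible
    return {stratum: sorted(rng.sample(records, n_per_stratum))
            for stratum, records in frame.items()}

sample = stratified_sample(frame, n_per_stratum=30)
```

Note that the stratified draw yields equal group sizes (30 and 30) even though the non-Cochrane stratum is much smaller in the frame, which is precisely the imbalance problem a simple random sample would leave unaddressed.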

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time-stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and help avoid duplication of efforts [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (as of 21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in a scholarly journal could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

Comparing two groups

Determining a proportion, mean or another quantifier

Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
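A confidence-interval-based justification of the kind El Dib et al. describe can be sketched with the standard normal-approximation formula for estimating a proportion. This is a generic textbook calculation, not the authors' exact method, and the 30% expected prevalence and 5% margin of error below are assumed values for illustration:

```python
import math

def sample_size_for_proportion(p, margin, confidence=0.95):
    """Articles needed to estimate a proportion p within +/- margin,
    using the normal approximation n = z^2 * p * (1 - p) / margin^2."""
    # two-sided z critical values for common confidence levels
    z = {0.90: 1.6449, 0.95: 1.9600, 0.99: 2.5758}[confidence]
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# e.g. we expect ~30% of trials to report the item of interest and want
# to estimate that proportion to within +/- 5 percentage points
n = sample_size_for_proportion(0.30, 0.05)  # -> 323 articles
```

The worst case is p = 0.5, which for the same margin requires 385 articles; when no prior estimate of the proportion exists, that conservative value is the usual default.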

Q: What should I call my study?

A: Other terms which have been used to describe or label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review” – as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section: “ What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
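Before fitting a GEE or mixed-effects model, a quick way to see why clustering matters is the classic design-effect calculation for equal-sized clusters. The article count, cluster size and intra-class correlation below are assumed purely for illustration:

```python
def design_effect(cluster_size, icc):
    """Variance inflation factor for equal-sized clusters: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n_articles, cluster_size, icc):
    """Number of independent articles the clustered sample is 'worth'."""
    return n_articles / design_effect(cluster_size, icc)

# e.g. 400 articles drawn from journals in batches of 20, with a modest
# within-journal intra-class correlation (ICC) of 0.05
n_eff = effective_sample_size(400, 20, 0.05)  # 400 / 1.95, about 205
```

Even a small within-journal correlation nearly halves the effective sample size here, which is why ignoring clustering yields unduly narrow confidence intervals.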

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid advances with machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. Nonetheless, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].
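Operationally, reconciling two independent extractions reduces to flagging the fields where the extractors disagree so they can be resolved by discussion or a third party. A minimal sketch (the field names and values are hypothetical):

```python
def disagreements(extraction_a, extraction_b):
    """Return the fields where two independent extractions differ."""
    fields = set(extraction_a) | set(extraction_b)
    return sorted(f for f in fields
                  if extraction_a.get(f) != extraction_b.get(f))

# Two extractors' records for the same research report
extractor_1 = {"sample_size": 120, "blinding_reported": "yes", "country": "CA"}
extractor_2 = {"sample_size": 102, "blinding_reported": "yes", "country": "CA"}

to_resolve = disagreements(extractor_1, extractor_2)  # -> ["sample_size"]
```

Only the conflicting field is surfaced for adjudication; agreement on the remaining fields is taken at face value.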

Q: Should I assess the risk of bias of research reports included in my methodological study?

A : Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al., investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies report better [ 56 , 57 ], while others do not [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry funded studies were better [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, the JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make the sample differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
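The funding-and-endorsement example can be made concrete with a small stratified comparison; all counts below are invented for illustration. Pooling over the confounder (journal endorsement of guidelines) exaggerates the crude gap in complete reporting between funded and unfunded studies relative to the stratum-specific gaps:

```python
# (complete_reports, total_reports) by funding status, stratified by whether
# the journal endorses reporting guidelines -- hypothetical counts.
strata = {
    "endorsing":     {"funded": (45, 50), "unfunded": (8, 10)},
    "non_endorsing": {"funded": (10, 20), "unfunded": (30, 80)},
}

def rate(complete, total):
    return complete / total

# Crude comparison pools the strata and mixes in the confounder:
crude_gap = rate(45 + 10, 50 + 20) - rate(8 + 30, 10 + 80)

# Stratum-specific comparisons hold endorsement fixed:
gaps = {s: rate(*g["funded"]) - rate(*g["unfunded"])
        for s, g in strata.items()}
```

With these numbers the crude gap is about 0.36, while the within-stratum gaps are 0.10 and 0.125: most of the apparent funding effect is carried by endorsement, which is the signature of confounding that restriction, matching or adjustment is meant to remove.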

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. However, investigators must ensure that their sample truly represents the target population, either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate and justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

To inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or on how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Ritchie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to the choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croitoru et al. reported on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies that investigate methods

Methodological studies may also be used to describe or compare methods, and the factors associated with their use. For example, Mueller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Krnic Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

Other types of methodological studies

In addition to the types described above, there may exist other kinds of methodological studies not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].
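The descriptive summaries named above can be sketched with the Python standard library. The data here are hypothetical (invented study designs and sample sizes, not from the cited studies); the point is only the form of the output a descriptive methodological study typically reports:

```python
# Minimal sketch (hypothetical data): the descriptive summaries typically
# reported in methodological studies -- counts (percent), mean (SD) and
# median (IQR) -- computed with the Python standard library.
import statistics
from collections import Counter

# Hypothetical extraction: study design and sample size of 10 included trials
designs = ["parallel", "parallel", "crossover", "parallel", "cluster",
           "parallel", "crossover", "parallel", "parallel", "cluster"]
sample_sizes = [40, 120, 60, 250, 80, 45, 30, 500, 95, 70]

counts = Counter(designs)
percents = {k: 100 * v / len(designs) for k, v in counts.items()}

mean = statistics.mean(sample_sizes)
sd = statistics.stdev(sample_sizes)
# quantiles(n=4) returns the three quartile cut points (exclusive method)
q1, median, q3 = statistics.quantiles(sample_sizes, n=4)

print(f"parallel design: {counts['parallel']} ({percents['parallel']:.0f}%)")
print(f"sample size: mean {mean:.1f} (SD {sd:.1f}); "
      f"median {median} (IQR {q1}-{q3})")
```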

Methodological studies that are analytical

Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
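A test of the kind described for the Cochrane vs. non-Cochrane comparison can be sketched as a two-proportion z-test. This is a hedged illustration: the counts below are hypothetical, not the figures from the cited study, and the normal approximation is one of several valid choices:

```python
# Hedged sketch: a two-proportion comparison of the kind underlying
# analytical methodological studies. Hypothetical numbers, not those of
# the cited Cochrane vs. non-Cochrane study.
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Normal-approximation test of H0: p1 == p2 (two-sided)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal tail via erfc
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# e.g. 60/100 non-Cochrane vs. 40/100 Cochrane reviews with positive conclusions
z, p = two_proportion_z_test(60, 100, 40, 100)
print(f"z = {z:.2f}, p = {p:.4f}")
```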

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, or in journals with a certain ranking or on a topic. Systematic sampling can also be used when random sampling may be challenging to implement.
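The random and systematic sampling strategies mentioned above can be sketched as follows. The article identifiers are invented placeholders, and the fixed seed is only for reproducibility of the sketch:

```python
# Illustrative sketch (hypothetical article IDs): simple random sampling
# and systematic sampling of research reports for a methodological study.
import random

population = [f"PMID{i:05d}" for i in range(1, 501)]  # 500 eligible reports

# Simple random sample of 50 reports (seed fixed for reproducibility)
rng = random.Random(2020)
random_sample = rng.sample(population, 50)

# Systematic sample: every k-th report after a random start
k = len(population) // 50          # sampling interval = 10
start = rng.randrange(k)
systematic_sample = population[start::k]

print(len(random_sample), len(systematic_sample))
```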

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g. the full manuscript of a study, or the abstract portion of a study) as the unit of analysis, in which case inferences can be made at the study level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries, etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].
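When the unit of analysis is an item rather than an article, the items are nested within articles and both denominators matter. A minimal sketch, with invented review IDs and subgroup variables (not the data of the cited study), shows the tally:

```python
# Minimal sketch (hypothetical data): items (planned subgroup analyses)
# nested within articles (systematic reviews), tallied at both levels.
from collections import defaultdict

# (review_id, planned_subgroup_analysis) pairs extracted from reports
items = [
    ("rev1", "age"), ("rev1", "sex"), ("rev1", "dose"),
    ("rev2", "age"),
    ("rev3", "sex"), ("rev3", "comorbidity"),
]

per_review = defaultdict(list)
for review, analysis in items:
    per_review[review].append(analysis)

n_reviews = len(per_review)   # article-level denominator
n_items = len(items)          # item-level denominator
print(f"{n_reviews} reviews planned {n_items} subgroup analyses")
```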

This framework is outlined in Fig. 2.

Fig. 2: A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

References

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.

Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008.

Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–95.

Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.

Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.

Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.

Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.

Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.

Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.

Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.

Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.

The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.

Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.

Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.

Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.

Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.

Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.

De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.

Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.

Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.

Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.

Porta M, editor. A dictionary of epidemiology. 5th ed. Oxford: Oxford University Press; 2008.

El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.

Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.

Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.

Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.

Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.

Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.

Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.

Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.

Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.

Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.

Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.

Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.

Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.

Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.

Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.

Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.

Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.

de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.

Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.

Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.

Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.

Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.

Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.

Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.

Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.

Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.

Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.

Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.

Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.

Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.

Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.

METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.

Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.

Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.

Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.

Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.

Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.

Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.

Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.

Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.

Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA. Assessing the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals. Eur Heart J Qual Care Clin Outcomes. 2019.

Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.

Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.

Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.

Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.

Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.

Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.

Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.

Acknowledgements

This work did not receive any dedicated funding.

Author information

Authors and affiliations

Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada

Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane

Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada

Lawrence Mbuagbaw & Lehana Thabane

Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Lawrence Mbuagbaw

Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia

Livia Puljak

Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA

David B. Allison

Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada

Lehana Thabane

Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada

Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada


Contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Lawrence Mbuagbaw.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20, 226 (2020). https://doi.org/10.1186/s12874-020-01107-7


Received: 27 May 2020

Accepted: 27 August 2020

Published: 07 September 2020

DOI: https://doi.org/10.1186/s12874-020-01107-7


  • Methodological study
  • Meta-epidemiology
  • Research methods
  • Research-on-research

BMC Medical Research Methodology

ISSN: 1471-2288


New Materialist Methods and the Research Process

  • First Online: 18 December 2020

Cite this chapter


  • Holly Thorpe,
  • Julie Brice &
  • Marianne Clark

Part of the book series: New Femininities in Digital, Physical and Sporting Cultures (NFDPSC)


In this chapter we discuss the challenges, opportunities, and considerations of putting new materialist theory into practice in empirical research. Engaging with literature from across a range of fields, we provide an overview of the many ways that new materialisms are informing the research process: methodology, methods, and researcher positionality, reflexivity, and ethics. We begin by elaborating on how the onto-epistemology of new materialisms encourages alternative approaches to the research process. We then map out some of the diverse ways that scholars are re-engaging and reimagining research methods, including media analysis, interviews, participatory methods, autoethnography, arts-related research practices, embodied and movement-based methods, and transdisciplinarity and mixed methods. We conclude with a discussion of how new materialist theory encourages new questions and considerations of ethics, reflexivity, and the politics of knowledge production.





Author information

Authors and Affiliations

University of Waikato, Hamilton, New Zealand

Holly Thorpe & Julie Brice

University of New South Wales, Sydney, Australia

Marianne Clark


Corresponding author

Correspondence to Holly Thorpe.


Copyright information

© 2020 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Thorpe, H., Brice, J., Clark, M. (2020). New Materialist Methods and the Research Process. In: Feminist New Materialisms, Sport and Fitness. New Femininities in Digital, Physical and Sporting Cultures. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-56581-7_2

Download citation

DOI : https://doi.org/10.1007/978-3-030-56581-7_2

Published : 18 December 2020

Publisher Name : Palgrave Macmillan, Cham

Print ISBN : 978-3-030-56580-0

Online ISBN : 978-3-030-56581-7

eBook Packages : Social Sciences Social Sciences (R0)

Share this chapter

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Publish with us

Policies and ethics

  • Find a journal
  • Track your research

The New Methodology

In the past few years there's been a blossoming of a new style of software methodology - referred to as agile methods. Alternatively characterized as an antidote to bureaucracy or a license to hack, they've stirred up interest all over the software landscape. In this essay I explore the reasons for agile methods, focusing not so much on their weight but on their adaptive nature and their people-first orientation.

13 December 2005

process theory

From Nothing, to Monumental, to Agile

This essay covers:

  • Separation of design and construction
  • The unpredictability of requirements
  • Is predictability impossible?
  • Controlling an unpredictable process - iterations
  • The adaptive customer
  • Plug-compatible programming units
  • Programmers are responsible professionals
  • Managing a people-oriented process
  • The difficulty of measurement
  • The role of business leadership
  • The self-adaptive process
  • The agile manifesto
  • XP (Extreme Programming)
  • Context-driven testing
  • Lean development
  • (Rational) Unified Process
  • Should you go agile?

Probably the most noticeable change to software process thinking in the last few years has been the appearance of the word 'agile'. We talk of agile software methods, of how to introduce agility into a development team, or of how to resist the impending storm of agilists determined to change well-established practices.

This new movement grew out of the efforts of various people who worked with software processes in the 1990s, found them wanting, and looked for a new approach. Most of the ideas were not new; indeed, many people believed that much successful software had been built that way for a long time. There was, however, a view that these ideas had been stifled and not treated seriously enough, particularly by people interested in software process.

This essay was originally part of this movement. I originally published it in July 2000. I wrote it, like most of my essays, as part of trying to understand the topic. At that time I'd used Extreme Programming for several years, after I was lucky enough to work with Kent Beck, Ron Jeffries, Don Wells, and above all the rest of the Chrysler C3 team in 1996. I had since talked with, and read books by, other people who had similar ideas about software process but had not necessarily wanted to take the same path as Extreme Programming. So in the essay I wanted to explore the similarities and differences between these methodologies.

My conclusion then, which I still believe now, is that there were some fundamental principles that united these methodologies, and these principles were a notable contrast from the assumptions of the established methodologies.

This essay has continued to be one of the most popular essays on my website, which means I feel somewhat bidden to keep it up to date. In its original form the essay both explored these differences in principles and provided a survey of agile methods as I then understood them. Too much has happened with agile methods since for me to keep up with the survey part, although I do provide some links to continue your explorations. The differences in principles still remain, and this discussion I've kept.

Most software development is a chaotic activity, often characterized by the phrase "code and fix". The software is written without much of an underlying plan, and the design of the system is cobbled together from many short term decisions. This actually works pretty well while the system is small, but as the system grows it becomes increasingly difficult to add new features. Furthermore bugs become increasingly prevalent and increasingly difficult to fix. A typical sign of such a system is a long test phase after the system is "feature complete". Such a long test phase plays havoc with schedules, as testing and debugging are impossible to schedule.

The original movement to try to change this introduced the notion of methodology. These methodologies impose a disciplined process upon software development with the aim of making software development more predictable and more efficient. They do this by developing a detailed process with a strong emphasis on planning, inspired by other engineering disciplines - which is why they are often described as plan-driven methodologies.

Plan-driven methodologies have been around for a long time. They've not been noticeable for being terribly successful. They are even less noted for being popular. The most frequent criticism of these methodologies is that they are bureaucratic. There's so much stuff to do to follow the methodology that the whole pace of development slows down.

Agile methodologies developed as a reaction to these methodologies. For many people the appeal of these agile methodologies is their reaction to the bureaucracy of the plan-driven methodologies. These new methods attempt a useful compromise between no process and too much process, providing just enough process to gain a reasonable payoff.

The result of all of this is that agile methods have some significant changes in emphasis from plan-driven methods. The most immediate difference is that they are less document-oriented, usually emphasizing a smaller amount of documentation for a given task. In many ways they are rather code-oriented: following a route that says that the key part of documentation is source code.

However I don't think this is the key point about agile methods. Lack of documentation is a symptom of two much deeper differences:

  • Agile methods are adaptive rather than predictive. Plan-driven methods tend to try to plan out a large part of the software process in great detail for a long span of time. This works well until things change, so their nature is to resist change. The agile methods, however, welcome change. They try to be processes that adapt and thrive on change, even to the point of changing themselves.
  • Agile methods are people-oriented rather than process-oriented. The goal of plan-driven methods is to define a process that will work well whoever happens to be using it. Agile methods assert that no process will ever make up for the skill of the development team, so the role of a process is to support the development team in their work.

In the following sections I'll explore these differences in more detail, so that you can understand what an adaptive and people-centered process is like, its benefits and drawbacks, and whether it's something you should use: either as a developer or customer of software.

Predictive versus Adaptive

Separation of Design and Construction

The usual inspiration for methodologies is engineering disciplines such as civil or mechanical engineering. Such disciplines put a lot of emphasis on planning before you build. Such engineers will work on a series of drawings that precisely indicate what needs to be built and how these things need to be put together. Many design decisions, such as how to deal with the load on a bridge, are made as the drawings are produced. The drawings are then handed over to a different group, often a different company, to be built. It's assumed that the construction process will follow the drawings. In practice the constructors will run into some problems, but these are usually small.

Since the drawings specify the pieces and how they need to be put together, they act as the foundation for a detailed construction plan. Such a plan can figure out the tasks that need to be done and what dependencies exist between these tasks. This allows for a reasonably predictable schedule and budget for construction. It also says in detail how the people doing the construction should do their work. This allows the construction workers to be less skilled intellectually, although they are often very skilled manually.

So what we see here are two fundamentally different activities: design, which is difficult to predict and requires expensive and creative people, and construction, which is easier to predict. Once we have the design, we can plan the construction. Once we have the plan for the construction, we can then deal with construction in a much more predictable way. In civil engineering construction is much bigger in both cost and time than design and planning.

So the approach for software engineering methodologies looks like this: we want a predictable schedule that can use people with lower skills. To do this we must separate design from construction. Therefore we need to figure out how to do the design for software so that the construction can be straightforward once the planning is done.

So what form does this plan take? For many, this is the role of design notations such as the UML . If we can make all the significant decisions using the UML, we can build a construction plan and then hand these designs off to coders as a construction activity.

But here lies the crucial question. Can you get a design that is capable of turning the coding into a predictable construction activity? And if so, is the cost of doing this sufficiently small to make this approach worthwhile?

All of this brings a few questions to mind. The first is the matter of how difficult it is to get a UML-like design into a state where it can be handed over to programmers. The problem with a UML-like design is that it can look very good on paper, yet be seriously flawed when you actually have to program the thing. The models that civil engineers use are based on many years of practice that are enshrined in engineering codes. Furthermore the key issues, such as the way forces play in the design, are amenable to mathematical analysis. The only checking we can do of UML-like diagrams is peer review. While this is helpful it leads to errors in the design that are often only uncovered during coding and testing. Even skilled designers, such as I consider myself to be, are often surprised when we turn such a design into software.

Another issue is that of comparative cost. When you build a bridge, the cost of the design effort is about 10% of the job, with the rest being construction. In software the amount of time spent in coding is much, much less. McConnell suggests that for a large project, only 15% of the project is code and unit test, an almost perfect reversal of the bridge building ratios. Even if you lump in all testing as part of construction, then design is still 50% of the work. This raises an important question about the nature of design in software compared to its role in other branches of engineering.

These kinds of questions led Jack Reeves to suggest that in fact the source code is a design document and that the construction phase is actually the use of the compiler and linker. Indeed anything that you can treat as construction can and should be automated.

This thinking leads to some important conclusions:

  • In software, construction is so cheap as to be free.
  • In software, all the effort is design, and thus requires creative and talented people.
  • Creative processes are not easily planned, so predictability may well be an impossible target.
  • We should be very wary of the traditional engineering metaphor for building software. It's a different kind of activity and requires a different process.

The Unpredictability of Requirements

There's a refrain I've heard on every problem project I've run into. The developers come to me and say "the problem with this project is that the requirements are always changing". The thing I find surprising about this situation is that anyone is surprised by it. In building business software requirements changes are the norm, the question is what we do about it.

One route is to treat changing requirements as the result of poor requirements engineering. The idea behind requirements engineering is to get a fully understood picture of the requirements before you begin building the software, get a customer sign-off to these requirements, and then set up procedures that limit requirements changes after the sign-off.

One problem with this is that just trying to understand the options for requirements is tough. It's even tougher because the development organization usually doesn't provide cost information on the requirements. You end up in the situation where you may have some desire for a sunroof on your car, but the salesman can't tell you if it adds $10 to the cost of the car, or $10,000. Without much idea of the cost, how can you figure out whether you want to pay for that sunroof?

Estimation is hard for many reasons. Part of it is that software development is a design activity, and thus hard to plan and cost. Part of it is that the basic materials keep changing rapidly. Part of it is that so much depends on which individual people are involved, and individuals are hard to predict and quantify.

Software's intangible nature also cuts in. It's very difficult to see what value a software feature has until you use it for real. Only when you use an early version of some software do you really begin to understand what features are valuable and what parts are not.

This leads to the ironic point that people expect that requirements should be changeable. After all, software is supposed to be soft. So not only are requirements changeable, they ought to be changeable. It takes a lot of energy to get customers of software to fix requirements. It's even worse if they've ever dabbled in software development themselves, because then they "know" that software is easy to change.

But even if you could settle all that and really could get an accurate and stable set of requirements you're probably still doomed. In today's economy the fundamental business forces are changing the value of software features too rapidly. What might be a good set of requirements now is not a good set in six months' time. Even if the customers can fix their requirements, the business world isn't going to stop for them. And many changes in the business world are completely unpredictable: anyone who says otherwise is either lying, or has already made a billion on stock market trading.

Everything else in software development depends on the requirements. If you cannot get stable requirements you cannot get a predictable plan.

Is Predictability Impossible?

In general, no. There are some software developments where predictability is possible. Organizations such as NASA's space shuttle software group are a prime example of where software development can be predictable. It requires a lot of ceremony, plenty of time, a large team, and stable requirements. There are projects out there that are space shuttles. However I don't think much business software fits into that category. For this you need a different kind of process.

One of the big dangers is to pretend that you can follow a predictable process when you can't. People who work on methodology are not very good at identifying boundary conditions: the places where the methodology passes from appropriate to inappropriate. Most methodologists want their methodologies to be usable by everyone, so they neither understand nor publicize their boundary conditions. This leads to people using a methodology in the wrong circumstances, such as using a predictable methodology in an unpredictable situation.

There's a strong temptation to do that. Predictability is a very desirable property. However if you believe you can be predictable when you can't, it leads to situations where people build a plan early on, then don't properly handle the situation where the plan falls apart. You see the plan and reality slowly drifting apart. For a long time you can pretend that the plan is still valid. But at some point the drift becomes too much and the plan falls apart. Usually the fall is painful.

So if you are in a situation that isn't predictable you can't use a predictive methodology. That's a hard blow. It means that many of the models for controlling projects, many of the models for the whole customer relationship, just aren't true any more. The benefits of predictability are so great, it's difficult to let them go. Like so many problems the hardest part is simply realizing that the problem exists.

However letting go of predictability doesn't mean you have to revert to uncontrollable chaos. Instead you need a process that can give you control over unpredictability. That's what adaptivity is all about.

Controlling an Unpredictable Process - Iterations

So how do we control ourselves in an unpredictable world? The most important, and still difficult part is to know accurately where we are. We need an honest feedback mechanism which can accurately tell us what the situation is at frequent intervals.

The key to this feedback is iterative development. This is not a new idea. Iterative development has been around for a while under many names: incremental, evolutionary, staged, spiral... lots of names. The key to iterative development is to frequently produce working versions of the final system that have a subset of the required features. These working systems are short on functionality, but should otherwise be faithful to the demands of the final system. They should be fully integrated and as carefully tested as a final delivery.

The point of this is that there is nothing like a tested, integrated system for bringing a forceful dose of reality into any project. Documents can hide all sorts of flaws. Untested code can hide plenty of flaws. But when people actually sit in front of a system and work with it, then flaws become truly apparent: both in terms of bugs and in terms of misunderstood requirements.

Iterative development makes sense in predictable processes as well. But it is essential in adaptive processes because an adaptive process needs to be able to deal with changes in required features. This leads to a style of planning where long term plans are very fluid, and the only stable plans are short term plans that are made for a single iteration. Iterative development gives you a firm foundation in each iteration that you can base your later plans around.

A key question for this is how long an iteration should be. Different people give different answers. XP suggests iterations of one or two weeks. SCRUM suggests a length of a month. Crystal may stretch further. The tendency, however, is to make each iteration as short as you can get away with. This provides more frequent feedback, so you know where you are more often.

The Adaptive Customer

This kind of adaptive process requires a different kind of relationship with a customer than the ones that are often considered, particularly when development is done by a separate firm. When you hire a separate firm to do software development, most customers would prefer a fixed-price contract. Tell the developers what they want, ask for bids, accept a bid, and then the onus is on the development organization to build the software.

A fixed-price contract requires stable requirements and hence a predictive process. Adaptive processes and unstable requirements imply you cannot work with the usual notion of fixed-price. Trying to fit a fixed-price model to an adaptive process ends up in a very painful explosion. The nasty part of this explosion is that the customer gets hurt every bit as much as the software development company. After all, the customer wouldn't want the software unless their business needed it. If they don't get it their business suffers. So even if they pay the development company nothing, they still lose. Indeed they lose more than they would have paid for the software (why would they pay for the software if its business value were less?)

So there are dangers for both sides in signing a traditional fixed-price contract in conditions where a predictive process cannot be used. This means that the customer has to work differently.

This doesn't mean that you can't fix a budget for software up-front. What it does mean is that you cannot fix time, price and scope. The usual agile approach is to fix time and price, and to allow the scope to vary in a controlled manner.
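
The fix-time-and-price, vary-scope idea can be sketched as a toy planning step. This is only an illustration, not part of any named method: the feature names, point estimates, and velocity figure below are all hypothetical. Each iteration the team commits only to what its measured velocity allows, and everything else stays in the backlog for replanning.

```python
# Toy sketch of scope-varying iteration planning (hypothetical data).
# The iteration length and budget are fixed; the scope that fits is not.

def plan_iteration(backlog, velocity):
    """Pick features for the next iteration without exceeding velocity.

    backlog  -- list of (feature_name, point_estimate) pairs, in priority order
    velocity -- points the team completed per iteration, re-measured each time
    """
    planned, remaining = [], []
    capacity = velocity
    for feature, estimate in backlog:
        if estimate <= capacity:
            planned.append(feature)
            capacity -= estimate
        else:
            remaining.append((feature, estimate))
    return planned, remaining

backlog = [("login", 3), ("reports", 5), ("search", 2), ("export", 4)]
velocity = 8  # points per two-week iteration

iteration_1, rest = plan_iteration(backlog, velocity)
print(iteration_1)            # -> ['login', 'reports']
print([f for f, _ in rest])   # -> ['search', 'export']
```

The point of the sketch is only that the plan is recomputed every iteration from fresh priorities and a fresh velocity measurement, rather than fixed up front.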

In an adaptive process the customer has much finer-grained control over the software development process. At every iteration they get both to check progress and to alter the direction of the software development. This leads to a much closer relationship with the software developers, a true business partnership. This level of engagement is not for every customer organization, nor for every software developer; but it's essential to make an adaptive process work properly.

All this yields a number of advantages for the customer. For a start they get much more responsive software development. A usable, although minimal, system can go into production early on. The customer can then change its capabilities according to changes in the business, and also according to what is learned from how the system is used in reality.

Every bit as important as this is greater visibility into the true state of the project. The problem with predictive processes is that project quality is measured by conformance to plan. This makes it difficult for people to signal when reality and the plan diverge. The common result is a big slip in the schedule late in the project. In an agile project there is a constant reworking of the plan with every iteration. If bad news is lurking it tends to come earlier, when there is still time to do something about it. Indeed this risk control is a key advantage of iterative development.

Agile methods take this further by keeping the iteration lengths small, but also by seeing these variations in a different way. Mary Poppendieck summed up this difference in viewpoint best for me with her phrase "A late change in requirements is a competitive advantage". I think most people have noticed that it's very difficult for business people to really understand what they need from software at the beginning. Often we see that people learn during the process which elements are valuable and which aren't. Often the most valuable features aren't at all obvious until customers have had a chance to play with the software. Agile methods seek to take advantage of this, encouraging business people to learn about their needs as the system gets built, and to build the system in such a way that changes can be incorporated quickly.

All this has an important bearing on what constitutes a successful project. A predictive project is often measured by how well it met its plan. A project that's on-time and on-cost is considered a success. This measurement is nonsense in an agile environment. For agilists the question is business value: did the customer get software that's more valuable to them than the cost put into it? A good predictive project will go according to plan; a good agile project will build something different and better than the original plan foresaw.

Putting People First

Executing an adaptive process is not easy. In particular it requires a very effective team of developers. The team needs to be effective both in the quality of the individuals and in the way the team blends together. There's also an interesting synergy: not only does adaptivity require a strong team, most good developers prefer an adaptive process.

Plug-Compatible Programming Units

One of the aims of traditional methodologies is to develop a process where the people involved are replaceable parts. With such a process you can treat people as resources who are available in various types. You have an analyst, some coders, some testers, a manager. The individuals aren't so important, only the roles are important. That way if you plan a project it doesn't matter which analyst and which testers you get, just that you know how many you have so you know how the number of resources affects your plan.

But this raises a key question: are the people involved in software development replaceable parts? One of the key features of agile methods is that they reject this assumption.

Perhaps the most explicit rejection of people as resources is Alistair Cockburn. In his paper Characterizing People as Non-Linear, First-Order Components in Software Development, he makes the point that predictable processes require components that behave in a predictable way. However people are not predictable components. Furthermore his studies of software projects have led him to conclude that people are the most important factor in software development.

In the title, [of his article] I refer to people as "components". That is how people are treated in the process / methodology design literature. The mistake in this approach is that "people" are highly variable and non-linear, with unique success and failure modes. Those factors are first-order, not negligible factors. Failure of process and methodology designers to account for them contributes to the sorts of unplanned project trajectories we so often see. -- [cockburn-non-linear]

One wonders whether the nature of software development works against us here. When we're programming a computer, we control an inherently predictable device. Since we're in this business because we are good at doing that, we are ideally suited to messing up when faced with human beings.

Although Cockburn is the most explicit in his people-centric view of software development, the notion of people first is a common theme with many thinkers in software. The problem, too often, is that methodology has been opposed to the notion of people as the first-order factor in project success.

This creates a strong positive feedback effect. If you expect all your developers to be plug-compatible programming units, you don't try to treat them as individuals. This lowers morale (and productivity). The good people look for a better place to be, and you end up with what you desire: plug-compatible programming units.

Programmers are Responsible Professionals

Deciding that people come first is a big decision, one that requires a lot of determination to push through. The notion of people as resources is deeply ingrained in business thinking, its roots going back to the impact of Frederick Taylor's Scientific Management approach. In running a factory, this Taylorist approach may make sense. But for the highly creative and professional work, which I believe software development to be, this does not hold. (And in fact modern manufacturing is also moving away from the Taylorist model.)

A key part of the Taylorist notion is that the people doing the work are not the people who can best figure out how to do that work. In a factory this may be true for several reasons. Part of it is that many factory workers are not the most intelligent or creative people; part of it is that there is a tension between management and workers, in that management makes more money when the workers make less.

Recent history increasingly shows us how untrue this is for software development. Increasingly bright and capable people are attracted to software development, attracted by both its glitz and by potentially large rewards. (Both of which tempted me away from electronic engineering.) Despite the downturn of the early 00's, there is still a great deal of talent and creativity in software development.

(There may well be a generational effect here. Some anecdotal evidence makes me wonder if more bright people have ventured into software engineering in the last fifteen years or so. If so, this would be a reason why there is such a cult of youth in the computer business; like most cults, there needs to be a grain of truth in it.)

When you want to hire and retain good people, you have to recognize that they are competent professionals. As such they are the best people to decide how to conduct their technical work. The Taylorist notion of a separate planning department that decides how to do things only works if the planners understand how to do the job better than those doing it. If you have bright, motivated people doing the job then this does not hold.

Managing a People-Oriented Process

People orientation manifests itself in a number of different ways in agile processes. It leads to different effects, and not all of them are consistent.

One of the key elements is that of accepting the process rather than the imposition of a process. Often software processes are imposed by management figures. As such they are often resisted, particularly when the management figures have had a significant amount of time away from active development. Accepting a process requires commitment, and as such needs the active involvement of all the team.

This ends up with the interesting result that only the developers themselves can choose to follow an adaptive process. This is particularly true for XP, which requires a lot of discipline to execute. Crystal considers itself as a less disciplined approach that's appropriate for a wider audience.

Another point is that the developers must be able to make all technical decisions. XP gets to the heart of this where in its planning process it states that only developers may make estimates on how much time it will take to do some work.

Such technical leadership is a big shift for many people in management positions. Such an approach requires a sharing of responsibility where developers and management have an equal place in the leadership of the project. Notice that I say equal. Management still plays a role, but recognizes the expertise of developers.

An important reason for this is the rate of change of technology in our industry. After a few years technical knowledge becomes obsolete. This half-life of technical skills is without parallel in any other industry. Even technical people have to recognize that entering management means their technical skills will wither rapidly; ex-developers need to trust and rely on current developers.

The Difficulty of Measurement

If you have a process where the people who say how work should be done are different from the people who actually do it, the leaders need some way of measuring how effective the doers are. In Scientific Management there was a strong push to develop objective approaches to measuring the output of people.

This is particularly relevant to software because of the difficulty of applying measurement to software. Despite our best efforts we are unable to measure the most simple things about software, such as productivity. Without good measures for these things, any kind of external control is doomed.

Introducing measured management without good measures leads to its own problems. Robert Austin gives an excellent discussion of this. He points out that when measuring performance you have to get all the important factors under measurement. Anything that's missing has the inevitable result that the doers will alter what they do to produce the best measures, even if that clearly reduces the true effectiveness of what they do. This measurement dysfunction is the Achilles heel of measurement-based management.

Austin's conclusion is that you have to choose between measurement-based management and delegatory management (where the doers decide how to do the work). Measurement-based management is best suited to repetitive simple work, with low knowledge requirements and easily measured outputs - exactly the opposite of software development.

The point of all this is that traditional methods have operated under the assumption that measurement-based management is the most efficient way of managing. The agile community recognizes that the characteristics of software development are such that measurement based management leads to very high levels of measurement dysfunction. It's actually more efficient to use a delegatory style of management, which is the kind of approach that is at the center of the agilist viewpoint.

But the technical people cannot do the whole process themselves. They need guidance on the business needs. This leads to another important aspect of adaptive processes: they need very close contact with business expertise.

This goes beyond most projects' involvement of the business role. Agile teams cannot exist with occasional communication. They need continuous access to business expertise. Furthermore this access is not something that is handled at a management level; it is something that is present for every developer. Since developers are capable professionals in their own discipline, they need to be able to work as equals with other professionals in other disciplines.

A large part of this, of course, is due to the nature of adaptive development. Since the whole premise of adaptive development is that things change quickly, you need constant contact to advise everybody of the changes.

There is nothing more frustrating to developers than seeing their hard work go to waste. So it's important to ensure that business expertise is readily available to the developers and of sufficient quality that the developers can trust it.

So far I've talked about adaptivity in the context of a project adapting its software frequently to meet the changing requirements of its customers. However there's another angle to adaptivity: that of the process changing over time. A project that begins using an adaptive process won't have the same process a year later. Over time, the team will find what works for them, and alter the process to fit.

The first part of self-adaptivity is regular reviews of the process. Usually you do these with every iteration. At the end of each iteration, have a short meeting and ask yourself the following questions (culled from Norm Kerth):

  • What did we do well?
  • What have we learned?
  • What can we do better?
  • What puzzles us?

These questions will lead you to ideas to change the process for the next iteration. In this way a process that starts off with problems can improve as the project goes on, adapting better to the team that uses it.

If self-adaptivity occurs within a project, it's even more marked across an organization. A consequence of self-adaptivity is that you should never expect to find a single corporate methodology. Instead each team should not just choose their own process, but should also actively tune their process as they proceed with the project. While both published processes and the experience of other projects can act as an inspiration and a baseline, the developers' professional responsibility is to adapt the process to the task at hand.

Flavors of Agile Development

The term 'agile' refers to a philosophy of software development. Under this broad umbrella sit many more specific approaches such as Extreme Programming, Scrum, Lean Development, etc. Each of these more particular approaches has its own ideas, communities and leaders. Each community is a distinct group of its own, but to be correctly called agile it should follow the same broad principles. Each community also borrows ideas and techniques from the others. Many practitioners move between different communities spreading different ideas around - all in all it's a complicated but vibrant ecosystem.

So far I've given my take on the overall picture of my definition of agile. Now I want to introduce some of the different agile communities. I can only give a quick overview here, but I do include references so you can dig further if you like.

Since I'm about to start giving more references, this is a good point to mention some sources for general information on agile methods. The web center is the Agile Alliance, a non-profit set up to encourage and research agile software development. For books I'd suggest overviews by Alistair Cockburn and Jim Highsmith. Craig Larman's book on agile development contains a very useful history of iterative development. For more of my views on agile methods see my agile guide.

The following list is by no means complete. It reflects a personal selection of the flavors of agile that have most interested and influenced me over the last decade or so.

The term 'agile' got hijacked for this activity in early 2001 when a bunch of people who had been heavily involved in this work got together to exchange ideas and came up with the Manifesto for Agile Software Development.

Prior to this workshop a number of different groups had been developing similar ideas about software development. Most, but by no means all, of this work had come out of the Object-Oriented software community that had long advocated iterative development approaches. This essay was originally written in 2000 to try to pull together these various threads. At that time there was no common name for these approaches, but the moniker 'lightweight' had grown up around them. Many of the people involved didn't feel this was a good term as it didn't accurately convey the essence of what these approaches were about.

There had been some talk about broader issues in these approaches in 2000 at a workshop hosted by Kent Beck in Oregon. Although this workshop was focused on Extreme Programming (the community that at that time had gained the most attention), several non-XPers had attended. One of the discussions that came up was whether it was better for XP to be a broad or concrete movement. Kent preferred a more focused, cohesive community.

The workshop was organized, if I remember correctly, primarily by Jim Highsmith and Bob Martin. They contacted people who they felt were active in communities with these similar ideas and got seventeen of them together for the Snowbird workshop. The initial idea was just to get together and build better understanding of each other's approaches. Robert Martin was keen to get some statement, a manifesto that could be used to rally the industry behind these kinds of techniques. We also decided we wanted to choose a name to act as an umbrella name for the various approaches.

During the course of the workshop we decided to use 'agile' as the umbrella name, and came up with values part of the manifesto. The principles section was started at the workshop but mostly developed on a wiki afterwards.

The effort clearly struck a nerve; I think we were all very surprised at the degree of attention and appreciation the manifesto got. Although the manifesto is hardly a rigorous definition of agile, it does provide a focusing statement that helps concentrate the ideas. Shortly after we finished the manifesto Jim Highsmith and I wrote an article for SD Magazine that provided some commentary to the manifesto.

Later that year, most of the seventeen who wrote the manifesto got back together again, with quite a few others, at OOPSLA 2001. There was a suggestion that the manifesto authors should begin some on-going agile movement, but the authors agreed that they were just the people who happened to turn up for that workshop and produced that manifesto. There was no way that that group could claim leadership of the whole agile community. We had helped launch the ship and should let whoever wanted to sail in her do so. So that was the end of the seventeen manifesto authors as an organized body.

One next step that did follow, with the active involvement of many of these authors, was the formation of the Agile Alliance. This non-profit group is intended to promote and research agile methods. Amongst other things it sponsors an annual conference in the US.

During the early popularity of agile methods in the late 1990's, Extreme Programming was the one that got the lion's share of attention. In many ways it still does.

The roots of XP lie in the Smalltalk community, and in particular the close collaboration of Kent Beck and Ward Cunningham in the late 1980's. Both of them refined their practices on numerous projects during the early 90's, extending their ideas of a software development approach that was both adaptive and people-oriented.

Kent continued to develop his ideas during consulting engagements, in particular the Chrysler C3 project, which has since become known as the creation project of extreme programming. He started using the term 'extreme programming' around 1997. (C3 also marked my initial contact with Extreme Programming and the beginning of my friendship with Kent.)

During the late 1990's word of Extreme Programming spread, initially through descriptions on newsgroups and Ward Cunningham's wiki, where Kent and Ron Jeffries (a colleague at C3) spent a lot of time explaining and debating the various ideas. Finally a number of books were published towards the end of the 90's and start of 00's that went into some detail explaining the various aspects of the approach. Most of these books took Kent Beck's white book as their foundation. Kent produced a second edition of the white book in 2004 which was a significant re-articulation of the approach.

XP begins with five values (Communication, Feedback, Simplicity, Courage, and Respect). It then elaborates these into fourteen principles and again into twenty-four practices. The idea is that practices are concrete things that a team can do day-to-day, while values are the fundamental knowledge and understanding that underpins the approach. Values without practices are hard to apply and can be applied in so many ways that it's hard to know where to start. Practices without values are rote activities without a purpose. Both values and practices are needed, but there's a big gap between them - the principles help bridge that gap. Many of XP's practices are old, tried and tested techniques, yet often forgotten by many, including most planned processes. As well as resurrecting these techniques, XP weaves them into a synergistic whole where each one is reinforced by the others and given purpose by the values.

One of the most striking aspects of XP, as well as one of the most initially appealing to me, is its strong emphasis on testing. While all processes mention testing, most do so with a pretty low emphasis. However XP puts testing at the foundation of development, with every programmer writing tests as they write their production code. The tests are integrated into a continuous integration and build process which yields a highly stable platform for future development. XP's approach here, often described under the heading of Test Driven Development (TDD), has been influential even in places that haven't adopted much else of XP.
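TDD isn't tied to any particular language or framework, but a minimal sketch may make the test-first style concrete. The leap-year function and test names below are my own illustrative invention, not something from XP's literature; in a real TDD cycle each test is written, and seen to fail, before the code that satisfies it:

```python
import unittest

def leap_year(year):
    """Return True if year is a leap year in the Gregorian calendar."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    # In a test-first cycle these tests exist before leap_year does
    def test_ordinary_leap_year(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_fourth_century_is_leap(self):
        self.assertTrue(leap_year(2000))

# Run the suite programmatically (equivalent to `python -m unittest`)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LeapYearTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Run on every build, a suite like this is what gives the continuous integration process its stable platform: any change that breaks an expectation fails immediately.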

There's a great deal of publications about extreme programming. One area of confusion, however, is the shift between the first and second edition of the white book. I said above that the second edition is a 're-articulation' of extreme programming, in that the approach is still the same but it is described in a different style. The first edition (with four values, twelve practices and some important but mostly-ignored principles) had a huge influence on the software industry and most descriptions of extreme programming were written based on the first edition's description. Keep that in mind as you read material on XP, particularly if it was prepared prior to 2005. Indeed most of the common web descriptions of XP are based on the first edition.

The natural starting place to discover more is the second edition of the white book. This book explains the background and practices of XP in a short (160 page) package. Kent Beck edited a multi-colored series of books on extreme programming around the turn of the century; if forced to pick one to suggest I'd go for the purple one, but remember that, like most of this material, it's based on the first edition.

There's a lot of material on the web about XP, but most of it is based on the first edition. One of the few descriptions I know of that takes account of the second edition is a paper on The New XP (PDF) by Michele Marchesi, who hosted the original XP conferences in Sardinia. For discussion on XP there is a Yahoo mailing list.

My involvement in the early days and friendships within the XP community mean that I have a distinct familiarity, fondness and bias towards XP. I think its influence is due to its marrying the principles of agile development with a solid set of techniques for actually carrying them out. Much of the early writing on agile neglected the latter, raising questions about whether the agile ideas were really possible. XP provided the tools by which the hopes of agility could be realized.

Scrum also developed in the 80's and 90's, primarily within OO development circles, as a highly iterative development methodology. Its best-known developers were Ken Schwaber, Jeff Sutherland, and Mike Beedle.

Scrum concentrates on the management aspects of software development, dividing development into thirty-day iterations (called 'sprints') and applying closer monitoring and control with daily scrum meetings. It places much less emphasis on engineering practices, and many people combine its project management approach with extreme programming's engineering practices. (XP's management practices aren't really very different.)

Ken Schwaber is one of the most active proponents of Scrum; his website is a good place to start looking for more information, and his book is probably the best first reference.

Alistair Cockburn has long been one of the principal voices in the agile community. He developed the Crystal family of software development methods as a group of approaches tailored to different size teams. Crystal is seen as a family because Alistair believes that different approaches are required as teams vary in size and the criticality of errors changes.

Despite their variations, all Crystal approaches share common features. All Crystal methods have three priorities: safety (in project outcome), efficiency, and habitability (developers can live with Crystal). They also share common properties, of which the most important three are Frequent Delivery, Reflective Improvement, and Close Communication.

The habitability priority is an important part of the Crystal mind-set. Alistair's quest (as I see it) is to find the least amount of process you can do and still succeed, with an underlying assumption that low discipline is inevitable with humans. As a result Alistair sees Crystal as requiring less discipline than extreme programming, trading off some efficiency for greater habitability and reduced chances of failure.

Despite this outline, there isn't a comprehensive description of all of Crystal's manifestations. The most well described is Crystal Clear, which has a modern book description. There is also a wiki for further material and discussion of Crystal.

From the beginning it's been software developers who have been driving the agile community. However many other people are involved in software development and are affected by this new movement. One obvious such group is testers, who often live in a world very much contained by waterfall thinking. With common guidelines that state that the role of testing is to ensure conformance of software to up-front written specifications, the role of testers in an agile world is far from clear.

As it turns out, several people in the testing community have been questioning much of mainstream testing thinking for quite a while. This has led to a group known as context-driven testing. The best description of this is the book Lessons Learned in Software Testing. This community is also very active on the web; take a look at sites hosted by Brian Marick (one of the authors of the agile manifesto), Brett Pettichord, James Bach, and Cem Kaner.

I remember a few years ago giving a talk about agile methods at the Software Development conference and talking to an eager woman about the parallels between the agile ideas and the lean movement in manufacturing. Mary Poppendieck (and her husband Tom) have gone on to be active supporters of the agile community, in particular looking at the overlaps and inspirations between lean production and software development.

The lean movement in manufacturing was pioneered by Taiichi Ohno at Toyota and is often known as the Toyota Production System. Lean production was an inspiration to many of the early agilists - the Poppendiecks are most notable for describing how these ideas interact. In general I'm very wary of this kind of reasoning by analogy; indeed the engineering separation between design and construction got us into this mess in the first place. However analogies can lead to good ideas, and I think the lean ideas have introduced many useful ideas and tools into the agile movement.

The Poppendiecks' book and website are the obvious starting points for more information.

Another well-known process to have come out of the object-oriented community is the Rational Unified Process (sometimes just referred to as the Unified Process). The original idea was that, just as the UML unified modeling languages, the UP could unify software processes. Since RUP appeared at about the same time as the agile methods, there's a lot of discussion about whether the two are compatible.

RUP is a very large collection of practices and is really a process framework rather than a process. Rather than give a single process for software development it seeks to provide a common set of practices for teams to choose from for an individual project. As a result a team's first step using RUP should be to define their individual process, or as RUP calls it, a development case.

The key common aspects of RUP are that it is Use Case Driven (development is driven through user-visible features), iterative, and architecture centric (there's a priority to building an architecture early on that will last the project through).

My experience with RUP is that its problem is its infinite variability. I've seen descriptions of RUP usage that range from rigid waterfall with 'analysis iterations' to picture-perfect agile. It's struck me that people's desire to market RUP as the single process has led to a situation where people can do just about anything and call it RUP, making RUP a nearly meaningless phrase.

Despite all this there are some very strong people in the RUP community that are very much aligned with agile thinking. I've been impressed in all my meetings with Philippe Kruchten, and his book is the best starting point for RUP. Craig Larman has also developed descriptions of working with RUP in an agile style in his popular introductory book on OO design.

Using an agile method is not for everyone. There are a number of things to bear in mind if you decide to follow this path. However I certainly believe that these methodologies are widely applicable and should be used by more people than currently consider them.

In today's environment, the most common methodology is code and fix. Applying more discipline than chaos will almost certainly help, and the agile approach has the advantage that it is much less of a step than using a heavyweight method. Here the light weight of agile methods is an advantage. Simpler processes are more likely to be followed when you are used to no process at all.

For someone new to agile methods, the question is where to start. As with any new technology or process, you need to make your own evaluation of it, so you can see how it fits into your environment. As a result much of my advice here follows the advice I've given for other new approaches, bringing back memories of when I was first talking about Object-Oriented techniques.

The first step is to find suitable projects to try agile methods out with. Since agile methods are so fundamentally people-oriented, it's essential that you start with a team that wants to try to work in an agile way. Not only is a reluctant team more difficult to work with; imposing agile methods on reluctant people is fundamentally at odds with the whole notion of agile development.

It's valuable to also have customers (those who need the software) who want to work in this kind of collaborative way. If customers don't collaborate, then you won't see the full advantages of an adaptive process. Having said that, we've found on several occasions that customers who didn't want to collaborate changed their minds over the first few months as they began to understand the agile approach.

A lot of people claim that agile methods can't be used on large projects. We (Thoughtworks) have had good success with agile projects of around 100 people spread across multiple continents. Despite this I would suggest picking something smaller to start with. Large projects are inherently more difficult anyway, so it's better to start learning on a project of a more manageable size.

Some people advise picking a project with little business impact to start with, that way if anything goes wrong then there's less damage. However an unimportant project often makes a poor test since nobody cares much about the outcome. I prefer to advise people to take a project that's a little bit more critical than you are comfortable with.

Perhaps the most important thing you can do is find someone more experienced in agile methods to help you learn. Whenever anyone does anything new they inevitably make mistakes. Find someone who has already made lots of mistakes so you can avoid making those yourself. Again this is true for any new technology or technique: a good mentor is worth her weight in gold. Of course this advice is self-serving, since Thoughtworks and many of my friends in the industry do mentoring on agile methods. That doesn't alter the fact that I strongly believe in the importance of finding a good mentor.

And once you've found a good mentor, follow their advice. It's very easy to second guess much of this and I've learned from experience that many techniques can't really be understood until you've made a reasonable attempt to try them out. One of the best examples I heard was a client of ours who decided to trial extreme programming for a couple of months. During that period they made it clear that they would do whatever the mentor said - even if they thought it was a bad idea. At the end of that trial period they would stop and decide if they wanted to carry on with any of the ideas or revert to the previous way of working. (In case you were wondering they decided to carry on with XP.)

One of the open questions about agile methods is where the boundary conditions lie. One of the problems with any new technique is that you aren't really aware of where the boundary conditions are until you cross over them and fail. Agile methods are still too young for us to have seen enough action to get a sense of where those boundaries are. This is further compounded by the fact that it's hard to decide what success and failure mean in software development, and there are too many varying factors to easily pin down the source of problems.

So where should you not use an agile method? I think it primarily comes down to the people. If the people involved aren't interested in the kind of intense collaboration that agile working requires, then it's going to be a big struggle to get them to work with it. In particular I think that this means you should never try to impose agile working on a team that doesn't want to try it.

There's been lots of experience with agile methods over the last ten years. At Thoughtworks we always use an agile approach if our clients are willing, which most of the time they are. I (and we) continue to be big fans of this way of working.

13 December 2005: General overhaul of the paper. Changed list of methodologies to a survey of flavors of agile.

April 2003: Revised several sections. Added section on difficulty of measurement and context driven testing.

June 2002: Updated references

November 2001: Updated some recent references

March 2001: Updated to reflect the appearance of the Agile Alliance

December 2000: Abridged version published in Software Development magazine under the title of "Put Your Process on a Diet"

November 2000: Updated section on ASD and added sections on DSDM and RUP

July 2000: Original Publication on martinfowler.com


Published by Nicolas on March 21st, 2024; revised on March 12, 2024

The Ultimate Guide To Research Methodology

Research methodology is a crucial aspect of any investigative process, serving as the blueprint for the entire research journey. If you are stuck on the methodology section of your research paper, this blog will explain what a research methodology is, what types there are, and how to carry one out successfully.


What Is Research Methodology?

Research methodology can be defined as the systematic framework that guides researchers in designing, conducting, and analyzing their investigations. It encompasses a structured set of processes, techniques, and tools employed to gather and interpret data, ensuring the reliability and validity of the research findings. 

Research methodology is not confined to a singular approach; rather, it encapsulates a diverse range of methods tailored to the specific requirements of the research objectives.

Here is why research methodology is important in academic and professional settings.

Facilitating Rigorous Inquiry

Research methodology forms the backbone of rigorous inquiry. It provides a structured approach that aids researchers in formulating precise thesis statements , selecting appropriate methodologies, and executing systematic investigations. This, in turn, enhances the quality and credibility of the research outcomes.

Ensuring Reproducibility And Reliability

In both academic and professional contexts, the ability to reproduce research outcomes is paramount. A well-defined research methodology establishes clear procedures, making it possible for others to replicate the study. This not only validates the findings but also contributes to the cumulative nature of knowledge.

Guiding Decision-Making Processes

In professional settings, decisions often hinge on reliable data and insights. Research methodology equips professionals with the tools to gather pertinent information, analyze it rigorously, and derive meaningful conclusions.

This informed decision-making is instrumental in achieving organizational goals and staying ahead in competitive environments.

Contributing To Academic Excellence

For academic researchers, adherence to robust research methodology is a hallmark of excellence. Institutions value research that adheres to high standards of methodology, fostering a culture of academic rigour and intellectual integrity. Furthermore, it equips students with critical skills applicable beyond academia.

Enhancing Problem-Solving Abilities

Research methodology instills a problem-solving mindset by encouraging researchers to approach challenges systematically. It equips individuals with the skills to dissect complex issues, formulate hypotheses , and devise effective strategies for investigation.

Understanding Research Methodology

In the pursuit of knowledge and discovery, understanding the fundamentals of research methodology is paramount. 

Basics Of Research

Research, in its essence, is a systematic and organized process of inquiry aimed at expanding our understanding of a particular subject or phenomenon. It involves the exploration of existing knowledge, the formulation of hypotheses, and the collection and analysis of data to draw meaningful conclusions. 

Research is a dynamic and iterative process that contributes to the continuous evolution of knowledge in various disciplines.

Types of Research

Research takes on various forms, each tailored to the nature of the inquiry. Broadly classified, research can be categorized into two main types:

  • Quantitative Research: This type involves the collection and analysis of numerical data to identify patterns, relationships, and statistical significance. It is particularly useful for testing hypotheses and making predictions.
  • Qualitative Research: Qualitative research focuses on understanding the depth and details of a phenomenon through non-numerical data. It often involves methods such as interviews, focus groups, and content analysis, providing rich insights into complex issues.

Components Of Research Methodology

To conduct effective research, one must understand the different components of research methodology. These components form the scaffolding that supports the entire research process, ensuring its coherence and validity.

Research Design

Research design serves as the blueprint for the entire research project. It outlines the overall structure and strategy for conducting the study. The three primary types of research design are:

  • Exploratory Research: Aimed at gaining insights and familiarity with the topic, often used in the early stages of research.
  • Descriptive Research: Involves portraying an accurate profile of a situation or phenomenon, answering the ‘what,’ ‘who,’ ‘where,’ and ‘when’ questions.
  • Explanatory Research: Seeks to identify the causes and effects of a phenomenon, explaining the ‘why’ and ‘how.’

Data Collection Methods

Choosing the right data collection methods is crucial for obtaining reliable and relevant information. Common methods include:

  • Surveys and Questionnaires: Employed to gather information from a large number of respondents through standardized questions.
  • Interviews: In-depth conversations with participants, offering qualitative insights.
  • Observation: Systematic watching and recording of behaviour, events, or processes in their natural setting.

Data Analysis Techniques

Once data is collected, analysis becomes imperative to derive meaningful conclusions. Different methodologies exist for quantitative and qualitative data:

  • Quantitative Data Analysis: Involves statistical techniques such as descriptive statistics, inferential statistics, and regression analysis to interpret numerical data.
  • Qualitative Data Analysis: Methods like content analysis, thematic analysis, and grounded theory are employed to extract patterns, themes, and meanings from non-numerical data.
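As a small illustration of the quantitative side (the numbers below are invented), Python's standard-library statistics module covers the basic descriptive statistics mentioned above:

```python
import statistics

# Hypothetical quantitative data: weekly study hours from ten survey respondents
responses = [5, 8, 12, 7, 9, 10, 6, 11, 8, 9]

mean = statistics.mean(responses)      # central tendency
median = statistics.median(responses)  # robust central tendency
stdev = statistics.stdev(responses)    # sample spread

print(f"mean={mean}, median={median}, stdev={stdev:.2f}")
```

Inferential techniques such as regression require more than the standard library, but the pattern is the same: numerical responses in, summary measures out.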


Choosing a Research Method

Selecting an appropriate research method is a critical decision in the research process. It determines the approach, tools, and techniques that will be used to answer the research questions. 

Quantitative Research Methods

Quantitative research involves the collection and analysis of numerical data, providing a structured and objective approach to understanding and explaining phenomena.

Experimental Research

Experimental research involves manipulating variables to observe the effect on another variable under controlled conditions. It aims to establish cause-and-effect relationships.

Key Characteristics:

  • Controlled Environment: Experiments are conducted in a controlled setting to minimize external influences.
  • Random Assignment: Participants are randomly assigned to different experimental conditions.
  • Quantitative Data: Data collected is numerical, allowing for statistical analysis.

Applications: Commonly used in scientific studies and psychology to test hypotheses and identify causal relationships.
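Random assignment, one of the key characteristics listed above, can be done mechanically. A small sketch (the participant IDs and group sizes are hypothetical):

```python
import random

# Hypothetical pool of twenty participants for a two-condition experiment
participants = [f"P{i:02d}" for i in range(1, 21)]

rng = random.Random(42)  # fixed seed makes the assignment reproducible
rng.shuffle(participants)

# Split the shuffled pool evenly into treatment and control conditions
half = len(participants) // 2
treatment, control = participants[:half], participants[half:]
```

Because every ordering of the shuffled pool is equally likely, each participant has the same chance of landing in either condition, which is the point of random assignment.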

Survey Research

Survey research gathers information from a sample of individuals through standardized questionnaires or interviews. It aims to collect data on opinions, attitudes, and behaviours.

  • Structured Instruments: Surveys use structured instruments, such as questionnaires, to collect data.
  • Large Sample Size: Surveys often target a large and diverse group of participants.
  • Quantitative Data Analysis: Responses are quantified for statistical analysis.

Applications: Widely employed in social sciences, marketing, and public opinion research to understand trends and preferences.
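
As a small illustration of quantifying survey responses for statistical analysis, Likert-scale answers can be mapped onto a numeric scale; the labels and responses below are invented.

```python
# Sketch of quantifying survey data: map Likert-scale answers onto a
# 1-5 scale so responses can be analyzed statistically.
# The labels and responses below are invented.
from statistics import mean

LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]
scores = [LIKERT[r] for r in responses]

print(f"mean agreement: {mean(scores)}")  # 3.6
```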

Descriptive Research

Descriptive research seeks to portray an accurate profile of a situation or phenomenon. It focuses on answering the ‘what,’ ‘who,’ ‘where,’ and ‘when’ questions.

  • Observation and Data Collection: Involves observing and documenting without manipulating variables.
  • Objective Description: Aims to provide an unbiased and factual account of the subject.
  • Quantitative or Qualitative Data: Can include both types of data, depending on the research focus.

Applications: Useful in situations where researchers want to understand and describe a phenomenon without altering it, common in social sciences and education.

Qualitative Research Methods

Qualitative research emphasizes exploring and understanding the depth and complexity of phenomena through non-numerical data.

Case Study

A case study is an in-depth exploration of a particular person, group, event, or situation. It involves detailed, context-rich analysis.

  • Rich Data Collection: Uses various data sources, such as interviews, observations, and documents.
  • Contextual Understanding: Aims to understand the context and unique characteristics of the case.
  • Holistic Approach: Examines the case in its entirety.

Applications: Common in social sciences, psychology, and business to investigate complex and specific instances.

Ethnography

Ethnography involves immersing the researcher in the culture or community being studied to gain a deep understanding of their behaviours, beliefs, and practices.

  • Participant Observation: Researchers actively participate in the community or setting.
  • Holistic Perspective: Focuses on the interconnectedness of cultural elements.
  • Qualitative Data: In-depth narratives and descriptions are central to ethnographic studies.

Applications: Widely used in anthropology, sociology, and cultural studies to explore and document cultural practices.

Grounded Theory

Grounded theory aims to develop theories grounded in the data itself. It involves systematic data collection and analysis to construct theories from the ground up.

  • Constant Comparison: Data is continually compared and analyzed during the research process.
  • Inductive Reasoning: Theories emerge from the data rather than being imposed on it.
  • Iterative Process: The research design evolves as the study progresses.

Applications: Commonly applied in sociology, nursing, and management studies to generate theories from empirical data.

Research Design

Research design is the structural framework that outlines the systematic process and plan for conducting a study. It serves as the blueprint, guiding researchers on how to collect, analyze, and interpret data.

Exploratory, Descriptive, And Explanatory Designs

Exploratory Design

Exploratory research design is employed when a researcher aims to explore a relatively unknown subject or gain insights into a complex phenomenon.

  • Flexibility: Allows for flexibility in data collection and analysis.
  • Open-Ended Questions: Uses open-ended questions to gather a broad range of information.
  • Preliminary Nature: Often used in the initial stages of research to formulate hypotheses.

Applications: Valuable in the early stages of investigation, especially when the researcher seeks a deeper understanding of a subject before formalizing research questions.

Descriptive Design

Descriptive research design focuses on portraying an accurate profile of a situation, group, or phenomenon.

  • Structured Data Collection: Involves systematic and structured data collection methods.
  • Objective Presentation: Aims to provide an unbiased and factual account of the subject.
  • Quantitative or Qualitative Data: Can incorporate both types of data, depending on the research objectives.

Applications: Widely used in social sciences, marketing, and educational research to provide detailed and objective descriptions.

Explanatory Design

Explanatory research design aims to identify the causes and effects of a phenomenon, explaining the ‘why’ and ‘how’ behind observed relationships.

  • Causal Relationships: Seeks to establish causal relationships between variables.
  • Controlled Variables: Often involves controlling certain variables to isolate causal factors.
  • Quantitative Analysis: Primarily relies on quantitative data analysis techniques.

Applications: Commonly employed in scientific studies and social sciences to delve into the underlying reasons behind observed patterns.

Cross-Sectional Vs. Longitudinal Designs

Cross-Sectional Design

Cross-sectional designs collect data from participants at a single point in time.

  • Snapshot View: Provides a snapshot of a population at a specific moment.
  • Efficiency: More efficient in terms of time and resources.
  • Limited Temporal Insights: Offers limited insights into changes over time.

Applications: Suitable for studying characteristics or behaviours that are stable or not expected to change rapidly.

Longitudinal Design

Longitudinal designs involve the collection of data from the same participants over an extended period.

  • Temporal Sequence: Allows for the examination of changes over time.
  • Causality Assessment: Facilitates the assessment of cause-and-effect relationships.
  • Resource-Intensive: Requires more time and resources compared to cross-sectional designs.

Applications: Ideal for studying developmental processes, trends, or the impact of interventions over time.

Experimental Vs. Non-Experimental Designs

Experimental Design

Experimental designs involve manipulating variables under controlled conditions to observe the effect on another variable.

  • Causality Inference: Enables the inference of cause-and-effect relationships.
  • Quantitative Data: Primarily involves the collection and analysis of numerical data.

Applications: Commonly used in scientific studies, psychology, and medical research to establish causal relationships.

Non-Experimental Design

Non-experimental designs observe and describe phenomena without manipulating variables.

  • Natural Settings: Data is often collected in natural settings without intervention.
  • Descriptive or Correlational: Focuses on describing relationships or correlations between variables.
  • Quantitative or Qualitative Data: This can involve either type of data, depending on the research approach.

Applications: Suitable for studying complex phenomena in real-world settings where manipulation may not be ethical or feasible.

Effective data collection is fundamental to the success of any research endeavour. 

Designing Effective Surveys

Objective Design:

  • Clearly define the research objectives to guide the survey design.
  • Craft questions that align with the study’s goals and avoid ambiguity.

Structured Format:

  • Use a structured format with standardized questions for consistency.
  • Include a mix of closed-ended and open-ended questions for detailed insights.

Pilot Testing:

  • Conduct pilot tests to identify and rectify potential issues with survey design.
  • Ensure clarity, relevance, and appropriateness of questions.

Sampling Strategy:

  • Develop a robust sampling strategy to ensure a representative participant group.
  • Consider random sampling or stratified sampling based on the research goals.
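
As an illustration of the stratified option, the sketch below draws a proportional random sample from each stratum so every subgroup is represented; the strata names and sizes are made up.

```python
# Illustrative stratified sampling: draw from each stratum in proportion
# to its share of the population. Strata names and sizes are made up.
import random

population = (
    [("undergrad", i) for i in range(600)]
    + [("grad", i) for i in range(300)]
    + [("faculty", i) for i in range(100)]
)
sample_size = 50
rng = random.Random(0)

# Group members by stratum.
strata = {}
for stratum, member in population:
    strata.setdefault(stratum, []).append(member)

# Draw a proportional simple random sample within each stratum.
sample = {
    stratum: rng.sample(members, round(sample_size * len(members) / len(population)))
    for stratum, members in strata.items()
}

print({stratum: len(members) for stratum, members in sample.items()})
# {'undergrad': 30, 'grad': 15, 'faculty': 5}
```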

Conducting Interviews

Establishing Rapport:

  • Build rapport with participants to create a comfortable and open environment.
  • Clearly communicate the purpose of the interview and the value of participants’ input.

Open-Ended Questions:

  • Frame open-ended questions to encourage detailed responses.
  • Allow participants to express their thoughts and perspectives freely.

Active Listening:

  • Practice active listening to fully understand responses and gather rich data.
  • Avoid interrupting and maintain a non-judgmental stance during the interview.

Ethical Considerations:

  • Obtain informed consent and assure participants of confidentiality.
  • Be transparent about the study’s purpose and potential implications.

Observation

1. Participant Observation

Immersive Participation:

  • Actively immerse yourself in the setting or group being observed.
  • Develop a deep understanding of behaviours, interactions, and context.

Field Notes:

  • Maintain detailed and reflective field notes during observations.
  • Document observed patterns, unexpected events, and participant reactions.

Ethical Awareness:

  • Be conscious of ethical considerations, ensuring respect for participants.
  • Balance the role of observer and participant to minimize bias.

2. Non-participant Observation

Objective Observation:

  • Maintain a more detached and objective stance during non-participant observation.
  • Focus on recording behaviours, events, and patterns without direct involvement.

Data Reliability:

  • Enhance the reliability of data by reducing observer bias.
  • Develop clear observation protocols and guidelines.

Contextual Understanding:

  • Strive for a thorough understanding of the observed context.
  • Consider combining non-participant observation with other methods for triangulation.

Archival Research

1. Using Existing Data

Identifying Relevant Archives:

  • Locate and access archives relevant to the research topic.
  • Collaborate with institutions or repositories holding valuable data.

Data Verification:

  • Verify the accuracy and reliability of archived data.
  • Cross-reference with other sources to ensure data integrity.

Ethical Use:

  • Adhere to ethical guidelines when using existing data.
  • Respect copyright and intellectual property rights.

2. Challenges and Considerations

Incomplete or Inaccurate Archives:

  • Address the possibility of incomplete or inaccurate archival records.
  • Acknowledge limitations and uncertainties in the data.

Temporal Bias:

  • Recognize potential temporal biases in archived data.
  • Consider the historical context and changes that may impact interpretation.

Access Limitations:

  • Address potential limitations in accessing certain archives.
  • Seek alternative sources or collaborate with institutions to overcome barriers.

Common Challenges in Research Methodology

Conducting research is a complex and dynamic process, often accompanied by a myriad of challenges. Addressing these challenges is crucial to ensure the reliability and validity of research findings.

Sampling Issues

Sampling Bias:

  • The presence of sampling bias can lead to an unrepresentative sample, affecting the generalizability of findings.
  • Employ random sampling methods and ensure the inclusion of diverse participants to reduce bias.

Sample Size Determination:

  • Determining an appropriate sample size is a delicate balance. Too small a sample may lack statistical power, while an excessively large sample may strain resources.
  • Conduct a power analysis to determine the optimal sample size based on the research objectives and expected effect size.
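
A rough power analysis can be sketched with the common normal-approximation formula for a two-sample t-test, n per group = 2 * ((z_alpha + z_beta) / d)^2, where d is the standardized effect size. The default alpha and power values below are conventional choices, used here purely for illustration.

```python
# Rough power-analysis sketch using the normal-approximation formula
# for a two-sample t-test. Defaults are conventional, not prescriptive.
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)      # two-sided test
    z_beta = z(power)
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)             # round up to whole participants

print(sample_size_per_group(0.5))   # medium effect -> 63 per group
```

The exact t-based calculation gives a slightly larger answer; dedicated power-analysis software should be used for a real study.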

Data Quality And Validity

Measurement Error:

  • Inaccuracies in measurement tools or data collection methods can introduce measurement errors, impacting the validity of results.
  • Pilot test instruments, calibrate equipment, and use standardized measures to enhance the reliability of data.

Construct Validity:

  • Ensuring that the chosen measures accurately capture the intended constructs is a persistent challenge.
  • Use established measurement instruments and employ multiple measures to assess the same construct for triangulation.

Time And Resource Constraints

Timeline Pressures:

  • Limited timeframes can compromise the depth and thoroughness of the research process.
  • Develop a realistic timeline, prioritize tasks, and communicate expectations with stakeholders to manage time constraints effectively.

Resource Availability:

  • Inadequate resources, whether financial or human, can impede the execution of research activities.
  • Seek external funding, collaborate with other researchers, and explore alternative methods that require fewer resources.

Managing Bias in Research

Selection Bias:

  • Selecting participants in a way that systematically skews the sample can introduce selection bias.
  • Employ randomization techniques, use stratified sampling, and transparently report participant recruitment methods.

Confirmation Bias:

  • Researchers may unintentionally favour information that confirms their preconceived beliefs or hypotheses.
  • Adopt a systematic and open-minded approach, use blinded study designs, and engage in peer review to mitigate confirmation bias.

Tips On How To Write A Research Methodology

Conducting successful research relies not only on the application of sound methodologies but also on strategic planning and effective collaboration. Here are some tips to enhance the success of your research methodology:

Tip 1. Clear Research Objectives

Well-defined research objectives guide the entire research process. Clearly articulate the purpose of your study, outlining specific research questions or hypotheses.

Tip 2. Comprehensive Literature Review

A thorough literature review provides a foundation for understanding existing knowledge and identifying gaps. Invest time in reviewing relevant literature to inform your research design and methodology.

Tip 3. Detailed Research Plan

A detailed plan serves as a roadmap, ensuring all aspects of the research are systematically addressed. Develop a detailed research plan outlining timelines, milestones, and tasks.

Tip 4. Ethical Considerations

Ethical practices are fundamental to maintaining the integrity of research. Address ethical considerations early, obtain necessary approvals, and ensure participant rights are safeguarded.

Tip 5. Stay Updated On Methodologies

Research methodologies evolve, and staying updated is essential for employing the most effective techniques. Engage in continuous learning by attending workshops, conferences, and reading recent publications.

Tip 6. Adaptability In Methods

Unforeseen challenges may arise during research, necessitating adaptability in methods. Be flexible and willing to modify your approach when needed, ensuring the integrity of the study.

Tip 7. Iterative Approach

Research is often an iterative process, and refining methods based on ongoing findings enhances the study’s robustness. Regularly review and refine your research design and methods as the study progresses.

Frequently Asked Questions

What is research methodology?

Research methodology is the systematic process of planning, executing, and evaluating scientific investigation. It encompasses the techniques, tools, and procedures used to collect, analyze, and interpret data, ensuring the reliability and validity of research findings.

What are the methodologies in research?

Research methodologies include qualitative and quantitative approaches. Qualitative methods involve in-depth exploration of non-numerical data, while quantitative methods use statistical analysis to examine numerical data. Mixed methods combine both approaches for a comprehensive understanding of research questions.

How to write research methodology?

To write a research methodology, clearly outline the study’s design, data collection, and analysis procedures. Specify research tools, participants, and sampling methods. Justify choices and discuss limitations. Ensure clarity, coherence, and alignment with research objectives for a robust methodology section.

How to write the methodology section of a research paper?

In the methodology section of a research paper, describe the study’s design, data collection, and analysis methods. Detail procedures, tools, participants, and sampling. Justify choices, address ethical considerations, and explain how the methodology aligns with research objectives, ensuring clarity and rigour.

What is mixed research methodology?

Mixed research methodology combines both qualitative and quantitative research approaches within a single study. This approach aims to enhance the details and depth of research findings by providing a more comprehensive understanding of the research problem or question.


Project management methodologies: 12 popular frameworks


Project management is an ever-evolving field that requires a number of approaches to be successful. Learning the most popular project management methodologies can help you become an industry expert. 

In order to be the best possible project manager, learn about each of these 12 frameworks to find the one that best fits your team’s needs. 

12 project management frameworks

1. Agile

What it is: The Agile project management methodology is one of the most common project management processes. But the reality is that Agile isn’t technically a methodology. Instead, it’s best defined as a project management principle. 

The basis of an Agile approach is:

Collaborative

Fast and effective

Iterative and data-backed

Values individuals over processes

When it comes to putting the Agile manifesto into practice, teams often choose specific methodologies to use alongside Agile. These could include Scrum, Kanban, extreme programming, Crystal, or even Scrumban. That’s because pairing the Agile methodology with a more detailed approach produces a well-rounded project management philosophy and a tangible plan for delivering great work. 

Who should use it: The Agile framework can be used for just about any team. This is because the principle behind it is rather universal. The real trick is deciding which methodology to use with it.

2. Waterfall

What it is: The waterfall model is also a very popular framework. But unlike Agile, waterfall is an actual methodology, and a rather straightforward one. The waterfall methodology is a linear software development life cycle (SDLC) process in which work cascades down (similar to a waterfall) and is organized in sequential order. 


To achieve this approach, each work task is connected by a dependency. This means each task must be completed before the next task can be started. Not only does this ensure that work stays on track, but it also fosters clear communication throughout the process. 

While viewed as a traditional approach by some modern organizations, this method is good for creating a predictable and thoroughly planned-out project plan. 

Who should use it: Since the waterfall project management methodology is so detailed, it’s great for working on large projects with multiple different stakeholders. This is because there are clear steps throughout the project and dependencies that help track the work needed to reach goals. 

3. Scrum

What it is: The Scrum methodology involves short “sprints” that are used to create a project cycle. These cycles span one to two weeks at a time and are organized into teams of 10 or fewer. This is different from the waterfall approach, where individual tasks are broken down into dependencies.

Scrum is unique for a variety of reasons, one being the use of a Scrum master: a project manager who leads daily Scrum meetings, demos, sprints, and sprint retrospectives after each sprint is completed. These meetings aim to connect project stakeholders and ensure tasks are completed on time. 

While Scrum is technically a project management methodology in its own right, it’s most commonly associated with an Agile framework. This is because they share similar principles, such as collaboration and valuing individuals over processes. 

Who should use it: Teams that use an Agile approach should use, or at least try, the Scrum methodology as well. Since sprints are divided into small teams, this approach can work for both small and large teams. 

4. Kanban

What it is: The Kanban methodology represents project backlogs using visual elements, specifically boards. This approach is used by Agile teams to better visualize workflows and project progress while decreasing the likelihood of bottlenecks. It is often implemented in a software tool that lets you drag cards between boards seamlessly within projects, though that is not a requirement. 

Since this method doesn’t have a defined process like others, many teams use it differently. The main concept to keep in mind is that Kanban aims to focus on the most important project tasks, keeping the overall framework simple.

Who should use it: Kanban boards are great for teams of all sizes and specifically remote-first teams. This is because the visual capabilities of Kanban boards help team members stay on track no matter where they are. 
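
As a sketch of the underlying idea, a Kanban board can be modeled as columns of tasks with a work-in-progress (WIP) limit on the active column; the column names and limit here are illustrative and not tied to any particular tool.

```python
# Sketch of a Kanban board as a plain data structure with a
# work-in-progress (WIP) limit on the active column.
# Column names and the limit are illustrative.
class KanbanBoard:
    def __init__(self, wip_limit=3):
        self.columns = {"todo": [], "doing": [], "done": []}
        self.wip_limit = wip_limit

    def add(self, task):
        """Add a new task to the backlog."""
        self.columns["todo"].append(task)

    def pull(self):
        """Pull the next task into 'doing', but only under the WIP limit."""
        if self.columns["todo"] and len(self.columns["doing"]) < self.wip_limit:
            self.columns["doing"].append(self.columns["todo"].pop(0))
            return True
        return False  # blocked: limit reached or backlog empty

    def finish(self, task):
        """Move a completed task from 'doing' to 'done'."""
        self.columns["doing"].remove(task)
        self.columns["done"].append(task)
```

With a limit of 2, a third `pull()` returns `False` until something in 'doing' is finished, which is the mechanism Kanban uses to surface bottlenecks.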

5. Scrumban

What it is: As you may have guessed, Scrumban is a methodology that draws inspiration from both Scrum and Kanban frameworks. Some think of this as a hybrid approach that incorporates the best of each. 


Scrumban uses a similar sprint cycle as Scrum but allows individual tasks to be pulled into the plan like Kanban. This allows the most important work to be completed and keeps project plans simple. Scrumban also uses Scrum meetings to enhance collaboration and keep goals top of mind. 

Who should use it: If you like the idea of breaking down a project into smaller tasks, but likewise want to keep it visually simple, Scrumban might be for you. It’s the perfect intersection of simplicity and clarity.  

6. PRINCE2

What it is: PRINCE2, otherwise known as PRojects IN Controlled Environments, uses the overarching waterfall methodology to define stages within a project. It was initially created by the UK government for IT projects and still primarily suits large IT initiatives over traditional product- or market-focused projects. 

There are seven main processes in PRINCE2, which include:

Starting a project

Directing a project

Initiating a project

Controlling a project

Managing product delivery

Managing a stage boundary

Closing a project

These seven processes create a thorough project workflow and make for an effective enterprise project methodology. PRINCE2 aims to clearly define roles and responsibilities, and it can be used to streamline many individual project management tasks, such as controlling a stage, managing product delivery, and initiating and closing a project. 

Who should use it: Due to the particular nature of the PRINCE2 project management methodology, it’s best suited for large enterprise projects with a number of project stakeholders. Using it for small projects may create a longer and more complicated process than necessary. 

7. Six Sigma

What it is: Unlike the other PM methodologies, Six Sigma is used for quality management and is frequently described as a philosophy rather than a traditional methodology. It is often paired with either a lean methodology or Agile framework, otherwise known as lean Six Sigma and Agile Six Sigma. 

The main purpose of Six Sigma is to continuously improve processes and eliminate defects. This is achieved through continuous improvements by field experts to sustain, define, and control processes. 

To take this method one step further, you can use a Six Sigma DMAIC process, which creates a phased approach. These phases include:

Define: Create a project scope, business case, and initial stand-up meeting.

Measure: Collect data that helps inform improvement needs.

Analyze: Identify the root causes of problems. 

Improve: Solve the root causes found.

Control: Work to sustain the solutions for future projects. 

Who should use it: Six Sigma is best for large organizations, usually those with a few hundred employees or more. This is when the need to eliminate project waste starts to have a larger impact on your organization. 
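
One widely used Six Sigma metric, defects per million opportunities (DPMO), can be sketched as follows; the defect counts and volumes below are hypothetical.

```python
# Sketch of a core Six Sigma metric: defects per million opportunities
# (DPMO). The counts below are hypothetical.
def dpmo(defects, units, opportunities_per_unit):
    return defects / (units * opportunities_per_unit) * 1_000_000

# 25 defects across 1,000 units, each inspected at 10 points.
print(dpmo(25, 1_000, 10))  # 2500.0
# For reference, "six sigma" quality corresponds to about 3.4 DPMO.
```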

8. Critical path method (CPM)

What it is: The critical path method works to identify and schedule critical tasks within a project. This includes creating task dependencies, tracking project goals and progress, prioritizing deliverables, and managing due dates—all of which are similar to a work breakdown structure.

The objective of this methodology is to properly manage successful projects at scale so that milestones and deliverables are mapped correctly. 

Who should use it: The critical path method is best for small and mid-size projects and teams. This is because large projects require many deliverables with multiple stakeholders and the CPM isn’t built to manage complex projects. 
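
The core of the critical path method can be sketched as a longest-path computation over a task dependency graph: the project length is the duration of the longest chain of dependent tasks. The task names and durations below are hypothetical.

```python
# Minimal critical-path sketch: tasks with durations and prerequisites
# form a DAG, and the project length is the longest path through it.
# Task names and durations are hypothetical.
tasks = {
    # task: (duration in days, prerequisites)
    "design":  (3, []),
    "build":   (5, ["design"]),
    "test":    (2, ["build"]),
    "docs":    (4, ["design"]),
    "release": (1, ["test", "docs"]),
}

finish = {}  # memoized earliest finish time per task

def earliest_finish(task):
    if task not in finish:
        duration, deps = tasks[task]
        finish[task] = duration + max(
            (earliest_finish(dep) for dep in deps), default=0
        )
    return finish[task]

project_length = max(earliest_finish(task) for task in tasks)
print(project_length)  # 11 days: design -> build -> test -> release
```

Tasks on the critical path (here design, build, test, release) have no slack: delaying any of them delays the whole project.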

9. Critical chain project management (CCPM)

What it is: The critical chain project management framework is closely related to the critical path methodology but is even more detailed, making it one of the most comprehensive options. 


Along with implementing a work breakdown structure like CPM, CCPM includes specific time requirements for each task. This takes task tracking one step further, making it clear when tasks are going over their allotted time. It also uses resource leveling, which aims to resolve large workloads by distributing work across available resources. 

Not only do these help both productivity and efficiency, but they also help connect the work needed to be completed with project goals. Many project management tools even have visual elements to better visualize these goals, creating an organized road map for team members. 

Who should use it: CCPM is a great method for both small and large teams, but it mostly helps solve project efficiency problems. It can also be a great way to report work in progress to leadership. 

10. Lean

What it is: The lean project management methodology aims to cut waste and create a simple framework for project needs. This ultimately means doing more with less in order to maximize efficiency and teamwork. 

While reducing waste originally referred to a physical product (which dates back to the method used by Henry Ford and later by Toyota and Motorola), it now refers to wasteful practices. There are three Ms that represent this:

Muda (wastefulness): Practices that consume resources but don’t add value  

Mura (unevenness): Occurs through overproduction and leaves behind waste 

Muri (overburden): Occurs when there is too much strain on resources  

As a project manager, your job is to prevent the three Ms in order to better execute projects and streamline processes. This is similar to the approach of rational unified process (RUP), which also aims to reduce waste. The difference is that RUP aims to reduce development costs instead of wasteful practices. 

Who should use it: Since lean is all about reducing waste, it’s best suited for teams struggling with efficiency issues. While this will have a greater impact on large organizations, it can be helpful for project teams of all sizes. 

11. Project Management Institute’s PMBOK® Guide

What it is: While the PMI’s Project Management Body of Knowledge is often treated as a project management methodology, it is more accurately a set of best practices that takes various development processes into account. 

This framework focuses on implementing the five project management phases, all of which help manage a project from start to finish in a structured, phased approach. The five phases include:

Project initiation

Project planning

Project execution

Project performance

Project closure

While this is a good foundation to keep in mind, the PMBOK® Guide isn’t necessarily as specific as other approaches. This means you’ll need to decide which tasks to complete in each phase. 

Who should use it: The PMBOK® Guide can be used on its own for small teams on standard projects, though it’s a good idea to pair it with a more detailed methodology (like CPM) for large teams handling complex projects. 

12. Extreme programming (XP)

What it is: As the name suggests, extreme programming is used for fast-paced projects with tight deadlines. The approach works by creating short development cycles with many releases. This makes for quick turnaround times and increased productivity. 


Extreme programming has a few core values, which include simplicity, communication, feedback, respect, and courage. It also includes a specific set of XP rules which includes all phases from planning to testing. 

Who should use it: Extreme programming can be used for individual projects with tight deadlines, most commonly with small to midsize teams. Since XP is a fast-paced method, it should be used lightly in order to prevent burnout. 

Choosing the right project management methodology for your team

There is no one-size-fits-all approach when it comes to project management methodologies. Each one offers unique principles to take a development project from an initial plan to final execution. 

The main aspects to keep in mind are the size of your team and how your team prefers to work. Here are some additional tips to consider:

Your industry: Consider whether you’re in an industry that changes frequently. A technology company, for example, is ever-evolving. This will affect project consistency and should be paired with either a flexible or a more rigid methodology. 

Your project focus: Consider the objectives of your projects. Do you value people over efficiency? This will help pair you with a methodology that matches a similar objective. 

The complexity of projects : Are your projects on the more complex side, or are they usually straightforward? Some methods aren’t as good as others at organizing complex tasks, such a CCPM.

The specialization of roles : Consider how niche the roles within your team are. Can multiple team members alternate the same type of work, or do you need a method that focuses on specialization?

Your organization’s size : The size of your organization and team should be weighed heavily when deciding on a methodology. Methods like Kanban are universal for team size, while options like CPM are better suited for small teams. 

Whether your team members prefer a visual process like Kanban or a more traditional project management approach like the waterfall method, there’s an option for every type of team. To take a project management methodology one step further, consider a work management tool to better track and execute development projects. 


Methods to manage your projects mindfully

With the right project management methodology in place, you’ll be able to take your projects to new levels of efficiency and implement processes that are right for your team, your organization, and yourself.


Research Methodology – Types, Examples and Writing Guide

Research Methodology

Definition:

Research Methodology refers to the systematic and scientific approach used to conduct research, investigate problems, and gather data and information for a specific purpose. It involves the techniques and procedures used to identify, collect, analyze, and interpret data to answer research questions or solve research problems. It also encompasses the philosophical and theoretical frameworks that guide the research process.

Structure of Research Methodology

Research methodology formats can vary depending on the specific requirements of the research project, but the following is a basic example of a structure for a research methodology section:

I. Introduction

  • Provide an overview of the research problem and the need for a research methodology section
  • Outline the main research questions and objectives

II. Research Design

  • Explain the research design chosen and why it is appropriate for the research question(s) and objectives
  • Discuss any alternative research designs considered and why they were not chosen
  • Describe the research setting and participants (if applicable)

III. Data Collection Methods

  • Describe the methods used to collect data (e.g., surveys, interviews, observations)
  • Explain how the data collection methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or instruments used for data collection

IV. Data Analysis Methods

  • Describe the methods used to analyze the data (e.g., statistical analysis, content analysis)
  • Explain how the data analysis methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or software used for data analysis

V. Ethical Considerations

  • Discuss any ethical issues that may arise from the research and how they were addressed
  • Explain how informed consent was obtained (if applicable)
  • Detail any measures taken to ensure confidentiality and anonymity

VI. Limitations

  • Identify any potential limitations of the research methodology and how they may impact the results and conclusions

VII. Conclusion

  • Summarize the key aspects of the research methodology section
  • Explain how the research methodology addresses the research question(s) and objectives

Research Methodology Types

Types of Research Methodology are as follows:

Quantitative Research Methodology

This is a research methodology that involves the collection and analysis of numerical data using statistical methods. This type of research is often used to study cause-and-effect relationships and to make predictions.

Qualitative Research Methodology

This is a research methodology that involves the collection and analysis of non-numerical data such as words, images, and observations. This type of research is often used to explore complex phenomena, to gain an in-depth understanding of a particular topic, and to generate hypotheses.

Mixed-Methods Research Methodology

This is a research methodology that combines elements of both quantitative and qualitative research. This approach can be particularly useful for studies that aim to explore complex phenomena and to provide a more comprehensive understanding of a particular topic.

Case Study Research Methodology

This is a research methodology that involves in-depth examination of a single case or a small number of cases. Case studies are often used in psychology, sociology, and anthropology to gain a detailed understanding of a particular individual or group.

Action Research Methodology

This is a research methodology that involves a collaborative process between researchers and practitioners to identify and solve real-world problems. Action research is often used in education, healthcare, and social work.

Experimental Research Methodology

This is a research methodology that involves the manipulation of one or more independent variables to observe their effects on a dependent variable. Experimental research is often used to study cause-and-effect relationships and to make predictions.

Survey Research Methodology

This is a research methodology that involves the collection of data from a sample of individuals using questionnaires or interviews. Survey research is often used to study attitudes, opinions, and behaviors.

Grounded Theory Research Methodology

This is a research methodology that involves the development of theories based on the data collected during the research process. Grounded theory is often used in sociology and anthropology to generate theories about social phenomena.

Research Methodology Example

An example of a research methodology is the following:

Research Methodology for Investigating the Effectiveness of Cognitive Behavioral Therapy in Reducing Symptoms of Depression in Adults

Introduction:

The aim of this research is to investigate the effectiveness of cognitive-behavioral therapy (CBT) in reducing symptoms of depression in adults. To achieve this objective, a randomized controlled trial (RCT) will be conducted using a mixed-methods approach.

Research Design:

The study will follow a pre-test and post-test design with two groups: an experimental group receiving CBT and a control group receiving no intervention. The study will also include a qualitative component, in which semi-structured interviews will be conducted with a subset of participants to explore their experiences of receiving CBT.

Participants:

Participants will be recruited from community mental health clinics in the local area. The sample will consist of 100 adults aged 18–65 years who meet the diagnostic criteria for major depressive disorder. Participants will be randomly assigned to either the experimental group or the control group.
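The 1:1 random assignment described above can be sketched in a few lines of Python. This is a hypothetical illustration only, not part of the study protocol; the function name and the even split are assumptions:

```python
import random

def randomize_participants(participant_ids, seed=None):
    """Shuffle participant IDs and split them 1:1 into
    experimental and control arms."""
    rng = random.Random(seed)      # seeded so the allocation is reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]  # (experimental, control)

# Example: 100 hypothetical participant IDs
experimental, control = randomize_participants(range(100), seed=42)
```

In practice, trials often use stratified or blocked randomization managed by dedicated software, but the core idea of a seeded shuffle is the same.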

Intervention:

The experimental group will receive 12 weekly sessions of CBT, each lasting 60 minutes. The intervention will be delivered by licensed mental health professionals who have been trained in CBT. The control group will receive no intervention during the study period.

Data Collection:

Quantitative data will be collected through the use of standardized measures such as the Beck Depression Inventory-II (BDI-II) and the Generalized Anxiety Disorder-7 (GAD-7). Data will be collected at baseline, immediately after the intervention, and at a 3-month follow-up. Qualitative data will be collected through semi-structured interviews with a subset of participants from the experimental group. The interviews will be conducted at the end of the intervention period, and will explore participants’ experiences of receiving CBT.

Data Analysis:

Quantitative data will be analyzed using descriptive statistics, t-tests, and mixed-model analyses of variance (ANOVA) to assess the effectiveness of the intervention. Qualitative data will be analyzed using thematic analysis to identify common themes and patterns in participants’ experiences of receiving CBT.
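As a rough illustration of the group comparison described above, the pooled-variance independent-samples t statistic can be computed with Python's standard library. The scores below are invented, and a real analysis would use a statistics package and the mixed-model ANOVA named in the protocol:

```python
import statistics

def independent_t(sample_a, sample_b):
    """Pooled-variance (equal-variance) independent-samples t statistic
    and its degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    mean_a, mean_b = statistics.fmean(sample_a), statistics.fmean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    # Pooled variance weights each group's variance by its degrees of freedom
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    se = (pooled * (1 / na + 1 / nb)) ** 0.5
    return (mean_a - mean_b) / se, na + nb - 2

# Invented post-treatment BDI-II scores for two small groups
cbt_group = [10, 12, 14, 16]
control_group = [20, 22, 24, 26]
t_stat, df = independent_t(cbt_group, control_group)  # t ~ -5.48, df = 6
```

A large negative t here would indicate lower depression scores in the CBT group; the corresponding p-value would come from the t distribution with the returned degrees of freedom.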

Ethical Considerations:

This study will comply with ethical guidelines for research involving human subjects. Participants will provide informed consent before participating in the study, and their privacy and confidentiality will be protected throughout the study. Any adverse events or reactions will be reported and managed appropriately.

Data Management:

All data collected will be kept confidential and stored securely using password-protected databases. Identifying information will be removed from qualitative data transcripts to ensure participants’ anonymity.

Limitations:

One potential limitation of this study is that it only focuses on one type of psychotherapy, CBT, and may not generalize to other types of therapy or interventions. Another limitation is that the study will only include participants from community mental health clinics, which may not be representative of the general population.

Conclusion:

This research aims to investigate the effectiveness of CBT in reducing symptoms of depression in adults. By using a randomized controlled trial and a mixed-methods approach, the study will provide valuable insights into the mechanisms underlying the relationship between CBT and depression. The results of this study will have important implications for the development of effective treatments for depression in clinical settings.

How to Write Research Methodology

Writing a research methodology involves explaining the methods and techniques you used to conduct research, collect data, and analyze results. It’s an essential section of any research paper or thesis, as it helps readers understand the validity and reliability of your findings. Here are the steps to write a research methodology:

  • Start by explaining your research question: Begin the methodology section by restating your research question and explaining why it’s important. This helps readers understand the purpose of your research and the rationale behind your methods.
  • Describe your research design: Explain the overall approach you used to conduct research. This could be a qualitative or quantitative research design, experimental or non-experimental, case study or survey, etc. Discuss the advantages and limitations of the chosen design.
  • Discuss your sample: Describe the participants or subjects you included in your study. Include details such as their demographics, sampling method, sample size, and any exclusion criteria used.
  • Describe your data collection methods: Explain how you collected data from your participants. This could include surveys, interviews, observations, questionnaires, or experiments. Include details on how you obtained informed consent, how you administered the tools, and how you minimized the risk of bias.
  • Explain your data analysis techniques: Describe the methods you used to analyze the data you collected. This could include statistical analysis, content analysis, thematic analysis, or discourse analysis. Explain how you dealt with missing data, outliers, and any other issues that arose during the analysis.
  • Discuss the validity and reliability of your research: Explain how you ensured the validity and reliability of your study. This could include measures such as triangulation, member checking, peer review, or inter-coder reliability.
  • Acknowledge any limitations of your research: Discuss any limitations of your study, including any potential threats to validity or generalizability. This helps readers understand the scope of your findings and how they might apply to other contexts.
  • Provide a summary: End the methodology section by summarizing the methods and techniques you used to conduct your research. This provides a clear overview of your research methodology and helps readers understand the process you followed to arrive at your findings.

When to Write Research Methodology

Research methodology is typically written after the research proposal has been approved and before the actual research is conducted. It should be written prior to data collection and analysis, as it provides a clear roadmap for the research project.

The research methodology is an important section of any research paper or thesis, as it describes the methods and procedures that will be used to conduct the research. It should include details about the research design, data collection methods, data analysis techniques, and any ethical considerations.

The methodology should be written in a clear and concise manner, and it should be based on established research practices and standards. It is important to provide enough detail so that the reader can understand how the research was conducted and evaluate the validity of the results.

Applications of Research Methodology

Here are some of the applications of research methodology:

  • To identify the research problem: Research methodology is used to identify the research problem, which is the first step in conducting any research.
  • To design the research: Research methodology helps in designing the research by selecting the appropriate research method, research design, and sampling technique.
  • To collect data: Research methodology provides a systematic approach to collect data from primary and secondary sources.
  • To analyze data: Research methodology helps in analyzing the collected data using various statistical and non-statistical techniques.
  • To test hypotheses: Research methodology provides a framework for testing hypotheses and drawing conclusions based on the analysis of data.
  • To generalize findings: Research methodology helps in generalizing the findings of the research to the target population.
  • To develop theories: Research methodology is used to develop new theories and modify existing theories based on the findings of the research.
  • To evaluate programs and policies: Research methodology is used to evaluate the effectiveness of programs and policies by collecting data and analyzing it.
  • To improve decision-making: Research methodology helps in making informed decisions by providing reliable and valid data.

Purpose of Research Methodology

Research methodology serves several important purposes, including:

  • To guide the research process: Research methodology provides a systematic framework for conducting research. It helps researchers to plan their research, define their research questions, and select appropriate methods and techniques for collecting and analyzing data.
  • To ensure research quality: Research methodology helps researchers to ensure that their research is rigorous, reliable, and valid. It provides guidelines for minimizing bias and error in data collection and analysis, and for ensuring that research findings are accurate and trustworthy.
  • To replicate research: Research methodology provides a clear and detailed account of the research process, making it possible for other researchers to replicate the study and verify its findings.
  • To advance knowledge: Research methodology enables researchers to generate new knowledge and to contribute to the body of knowledge in their field. It provides a means for testing hypotheses, exploring new ideas, and discovering new insights.
  • To inform decision-making: Research methodology provides evidence-based information that can inform policy and decision-making in a variety of fields, including medicine, public health, education, and business.

Advantages of Research Methodology

Research methodology has several advantages that make it a valuable tool for conducting research in various fields. Here are some of the key advantages of research methodology:

  • Systematic and structured approach: Research methodology provides a systematic and structured approach to conducting research, which ensures that the research is conducted in a rigorous and comprehensive manner.
  • Objectivity: Research methodology aims to ensure objectivity in the research process, which means that the research findings are based on evidence and not influenced by personal bias or subjective opinions.
  • Replicability: Research methodology ensures that research can be replicated by other researchers, which is essential for validating research findings and ensuring their accuracy.
  • Reliability: Research methodology aims to ensure that the research findings are reliable, which means that they are consistent and can be depended upon.
  • Validity: Research methodology ensures that the research findings are valid, which means that they accurately reflect the research question or hypothesis being tested.
  • Efficiency: Research methodology provides a structured and efficient way of conducting research, which helps to save time and resources.
  • Flexibility: Research methodology allows researchers to choose the most appropriate research methods and techniques based on the research question, data availability, and other relevant factors.
  • Scope for innovation: Research methodology provides scope for innovation and creativity in designing research studies and developing new research techniques.

Research Methodology vs. Research Methods

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer


New approach enhances accelerator's capability to uncover clues from supernovae in lunar dust

Researchers at the China Institute of Atomic Energy (CIAE) have significantly enhanced the method of detecting iron-60 (60Fe), a rare isotope found in lunar samples, using the HI-13 tandem accelerator. This achievement paves the way for detecting 60Fe in returned lunar samples, deepening our understanding of cosmic events such as supernovae that occurred millions of years ago.

The findings are published in the journal Nuclear Science and Techniques.

The study, led by Bing Guo, utilized a refined accelerator mass spectrometry (AMS) technique to detect 60Fe, a rare isotope produced by supernovae and found in samples returned from the moon. The enhanced AMS system, equipped with a Wien filter, successfully identified 60Fe in simulation samples with sensitivity levels previously unachievable. This demonstrates a detection sensitivity better than 4.3 × 10^−14, potentially reaching 2.5 × 10^−15 under optimal conditions.

For decades, the challenge of detecting low-abundance isotopes like 60Fe in lunar samples has stumped scientists due to the isotope's scarcity and the presence of interfering elements. The traditional methods fell short in sensitivity. The latest modifications at the CIAE's HI-13 tandem accelerator facility represent a significant step forward.

Guo said, "Our team agreed that the only way to track historical supernovae events accurately was by pushing the boundaries of what our equipment could do. The installation of the Wien filter could be a game-changer for us."

The findings of this research extend beyond the academic realm, offering insights into the processes that shape our universe. The ability to measure minute quantities of 60Fe on the moon provides a direct link to studying past supernova events that occurred nearby. These discoveries have implications for astrophysics, offering a new lens through which to view the history and evolution of stars.

Looking ahead, the CIAE research team plans to refine their techniques further to improve the sensitivity of their measurements. Enhancements in ion source and beam transmission efficiencies are expected to push detection capabilities even further.

"Our next goal is to optimize our entire AMS system to reach even lower detection limits. Every bit of increased sensitivity opens up a universe of possibilities," explained Guo.

The successful development of this enhanced AMS method contributes to both lunar research and the study of interstellar phenomena. As researchers continue to refine this technology, our understanding of the universe's history grows deeper, proving once again that our journey through the cosmos is far from over.

More information: Yang Zhang et al, Stepped-up development of accelerator mass spectrometry method for the detection of 60Fe with the HI-13 tandem accelerator, Nuclear Science and Techniques (2024). DOI: 10.1007/s41365-024-01453-x

Provided by Nuclear Science and Techniques

The target chamber was equipped with a collimator, a target holder, and a Faraday cup. Si3N4 foil degraders were installed on the target holder. The Q3D magnetic spectrograph is able to rotate around the target chamber. Credit: China Institute of Atomic Energy

Front Toxicol

Use of new approach methodologies (NAMs) to meet regulatory requirements for the assessment of industrial chemicals and pesticides for effects on human health

Andreas O. Stucki

1 PETA Science Consortium International e.V., Stuttgart, Germany

Tara S. Barton-Maclaren

2 Safe Environments Directorate, Healthy Environments and Consumer Safety Branch, Health Canada, Ottawa, ON, Canada

Yadvinder Bhuller

3 Pest Management Regulatory Agency, Health Canada, Ottawa, ON, Canada

Joseph E. Henriquez

4 Corteva Agriscience, Indianapolis, IN, United States

Tala R. Henry

5 Office of Pollution Prevention and Toxics, US Environmental Protection Agency, Washington, DC, United States

Carole Hirn

6 Scientific and Regulatory Affairs, JT International SA, Geneva, Switzerland

Jacqueline Miller-Holt

Edith G. Nagy

7 Bergeson & Campbell PC, Washington, DC, United States

Monique M. Perron

8 Office of Pesticide Programs, US Environmental Protection Agency, Washington, DC, United States

Deborah E. Ratzlaff

Todd J. Stedeford, Amy J. Clippinger

Erin H. Hill, Institute for In Vitro Sciences, Inc. (IIVS), United States

Natalie Burden, National Centre for the Replacement, Refinement and Reduction of Animals in Research, United Kingdom

New approach methodologies (NAMs) are increasingly being used for regulatory decision making by agencies worldwide because of their potential to reliably and efficiently produce information that is fit for purpose while reducing animal use. This article summarizes the ability to use NAMs for the assessment of human health effects of industrial chemicals and pesticides within the United States, Canada, and European Union regulatory frameworks. While all regulations include some flexibility to allow for the use of NAMs, the implementation of this flexibility varies across product type and regulatory scheme. This article provides an overview of various agencies’ guidelines and strategic plans on the use of NAMs, and specific examples of the successful application of NAMs to meet regulatory requirements. It also summarizes intra- and inter-agency collaborations that strengthen scientific, regulatory, and public confidence in NAMs, thereby fostering their global use as reliable and relevant tools for toxicological evaluations. Ultimately, understanding the current regulatory landscape helps inform the scientific community on the steps needed to further advance timely uptake of approaches that best protect human health and the environment.

1 Introduction

Regulatory agencies are tasked with ensuring protection of human health and the environment, and implementing various processes for achieving this goal. Legal frameworks that do not require upfront toxicological testing have relied heavily on chemical evaluations using analogue read-across and grouping based on chemical categories, while others with upfront testing requirements have relied on prescribed checklists of toxicity tests, often using animals to fulfill the required testing. However, scientific advancements have led to investments in the development, implementation, and acceptance of reliable and relevant new approach methodologies (NAMs). NAMs are defined as any technology, methodology, approach, or combination thereof that can provide information on chemical hazard and risk assessment without the use of animals, including in silico, in chemico, in vitro, and ex vivo approaches (ECHA, 2016b; EPA, 2018d). NAMs are not necessarily newly developed methods; rather, it is their application to regulatory decision making, or their replacement of a conventional testing requirement, that is new.

Regulatory agencies worldwide have recognized the importance of the timely uptake of fit-for-purpose NAMs for hazard and risk assessment and are introducing flexible, efficient, and scientifically sound processes to establish confidence in the use of NAMs for regulatory decision-making (van der Zalm et al., 2022; Ingenbleek et al., 2020). The use of NAMs has been prioritized because of their ability to efficiently generate information that, once established to be at least as reliable and relevant as the conventional testing requirement, may be used to make regulatory decisions that protect human health. NAMs can mimic human biology and provide mechanistic information about how a chemical may cause toxicity in humans. They can also be used to inform population variability, for example, by rapidly identifying susceptible subpopulations from potential exposures in fence-line communities or workers, and by allowing for the consideration of individualized health risks and the generation of data tailored to people with pre-existing conditions or those more sensitive to certain chemicals (EPA, 2020e).

This article describes opportunities for and examples of the use of NAMs in regulatory submissions for industrial chemicals and pesticides in the United States (US), Canada, and the European Union (EU). For industrial chemicals, it includes the US Environmental Protection Agency (EPA)’s Office of Pollution Prevention and Toxics (OPPT), the US Consumer Products Safety Commission (CPSC), Health Canada (HC)’s Healthy Environments and Consumer Safety Branch (HECSB), and the European Chemicals Agency (ECHA). For pesticides and plant protection products (PPP), it highlights the EPA’s Office of Pesticide Programs (OPP), HC’s Pest Management Regulatory Agency (PMRA), and the European Food Safety Authority (EFSA). This article also provides examples of collaborations, across sectors and borders, to build scientific, regulatory, and public confidence in the use of NAMs for the protection of human health, and to reach the ultimate goal of global acceptance. Tables 1 and 2 summarize some of the guidance, strategic plans, and other helpful documentation related to the implementation of NAMs. While this article addresses the assessment of human health effects of industrial chemicals and pesticides in the US, Canada, and the EU, similar collaborative efforts and opportunities to use NAMs in regulatory submissions exist in other sectors and countries. Furthermore, many of the discussed actions and efforts also likely apply to other types of chemicals and to ecotoxicological effects.

US, Canada, and EU: industrial chemicals and household products.

CEPA, Canadian Environmental Protection Act; CFR, Code of Federal Regulations; CPSC, Consumer Products Safety Commission; ECHA, European Chemicals Agency; EPA OPPT, Environmental Protection Agency Office of Pollution Prevention and Toxics; HC HECSB, Health Canada Healthy Environments and Consumer Safety Branch; NAM, new approach methodologies; TSCA, Toxic Substances Control Act; WoE, weight-of-evidence.

US, Canada, and EU: pesticides and plant protection products.

EFSA, European Food Safety Authority; EPA OPP, Environmental Protection Agency Office of Pesticide Programs; FIFRA, Federal Insecticide, Fungicide, and Rodenticide Act; HC PMRA, Health Canada Pest Management Regulatory Agency; NAM, new approach methodologies; ReCAAP, Rethinking Chronic toxicity and Carcinogenicity Assessment for Agrochemicals Project.

2 Overarching activities to advance the implementation of NAMs

2.1 International collaboration

The Organisation for Economic Co-operation and Development (OECD) publishes guidelines for the assessment of chemical effects on human health and the environment. Under the mutual acceptance of data (MAD) agreement among the 38 OECD member countries, which aims to reduce duplicate testing, “…data generated in the testing of chemicals in an OECD member country in accordance with OECD Test Guidelines (TG) and OECD Principles of Good Laboratory Practice (GLP), shall be accepted in other member countries” (OECD, 2019). A portion of the nearly 100 OECD test guidelines describe in chemico, in vitro, or ex vivo methods that are accepted by certain regulatory agencies for the testing of various types of chemicals. At their discretion, agencies can decide which OECD test guidelines to require and whether to accept non-OECD guideline methods (OECD, 2019). Building toward regulatory implementation of non-guideline methods, parallel OECD efforts are advancing best practices, guidance, and data integration and evaluation frameworks such as Integrated Approaches to Testing and Assessment (IATA) and Adverse Outcome Pathways (AOPs).

The International Cooperation on Alternative Test Methods (ICATM) was originally established in 2009 by the US Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), HC, the EU Reference Laboratory for alternatives to animal testing (EURL ECVAM), and the Japanese Center for the Validation of Alternative Methods (JaCVAM) to facilitate cooperation among national validation organizations. Since its establishment, Korea (KoCVAM) has signed the agreement, and China, Brazil (BraCVAM), and Taiwan participate in ICATM activities. In 2019, Canada established the Canadian Centre for the Validation of Alternative Methods (CaCVAM). Each group works in-country and collaboratively to advance NAMs. For example, the Tracking System for Alternative Methods (TSAR), an overview of non-animal methods that have been proposed for regulatory safety or efficacy testing of chemicals or biological agents, was established and is maintained by EURL ECVAM (EURL ECVAM, n.d.).

In 2016, ECHA organized a workshop on NAMs in Regulatory Science, attended by 300 stakeholders, to discuss the use of NAMs for regulatory decision making (ECHA, 2016b). Since 2016, EPA, HC, and ECHA have held workshops to discuss the development and application of NAMs for chemical assessment as part of an international government-to-government initiative titled “Accelerating the Pace of Chemical Risk Assessment” (APCRA) (EPA, 2021a). EPA and HC further collaborated through the North American Free Trade Agreement (NAFTA) Technical Working Group (TWG) on Pesticides (in 2020, NAFTA was replaced by the US-Mexico-Canada Agreement (USMCA)) and through the Canada-US Regulatory Co-operation Council (RCC). The RCC was a regulatory partnership between the pesticide-regulating department and offices of HC and EPA that facilitated the alignment of both countries’ regulatory approaches while advancing efforts to reduce and replace animal tests (ITA, n.d.; HC, 2020). The NAFTA TWG on Pesticides and the RCC included specific work plans and priority areas along with accountability for deliverables (NAFTA TWG, 2016).

The development and implementation of NAMs within regulatory agencies relies heavily on collaboration with a variety of stakeholders, including other offices and departments within the same agency, other national and international agencies, as well as industry representatives, method developers, academics, and non-profit/non-governmental organizations. For example, within EPA, there is substantial cross-talk between OPP and OPPT (both of which are a part of the Office of Chemical Safety and Pollution Prevention (OCSPP)) as well as the Office of Research and Development (ORD). Agencies also consult with external peer-review panels, such as science advisory boards or committees, which provide independent scientific expertise on various topics. The exchange with external stakeholders provides diverse perspectives and experiences with different NAMs. Several of these collaborations have led to journal publications, presentations at national and international meetings, and webinars. For example, since 2018, EPA has partnered with PETA Science Consortium International e.V. and the Physicians Committee for Responsible Medicine to host a webinar series on the “Use of New Approach Methodologies (NAMs) in Risk Assessment” which brings together expert speakers and attendees from around the world to discuss the implementation of NAMs ( PSCI, n.d. ). EPA’s OCSPP and ORD also held conferences on the state of the science for using NAMs in 2019 and 2020 and are currently planning the next conference for October 2022 ( EPA, 2019a ; EPA, 2020b ).

2.2 National roadmaps or work plans to guide and facilitate the implementation of NAMs

2.2.1 United States

Several US agencies have roadmaps or work plans to guide and facilitate the implementation of NAMs for testing industrial chemicals or pesticides. For example, following publication of the EPA-commissioned National Research Council (NRC) report titled “Toxicity Testing in the 21st Century: A Vision and A Strategy” ( NRC, 2007 ), EPA released a strategic plan that provided a framework for implementing the NRC’s vision, which incorporates new approaches into toxicity testing and risk assessment practices with less reliance on conventional apical approaches ( EPA, 2009 ). Furthermore, in June 2020, EPA’s OCSPP and ORD published a NAM Work Plan (updated in December 2021) that describes primary objectives and strategies for reducing animal testing through the use of NAMs while ensuring protection of human health and the environment ( EPA, 2021e ). It highlights the importance of communicating, collaborating, providing training on NAMs, establishing confidence in NAMs, and developing metrics for assessing progress.

In 2018, the 16 US federal agencies that comprised ICCVAM (including EPA and CPSC) published a strategic roadmap to serve as a guide for agencies and stakeholders seeking to adopt NAMs for chemical safety and risk assessments ( ICCVAM, 2018 ). The ICCVAM strategic roadmap emphasizes three main components: 1) connecting agency and industry end users with NAM developers to ensure the needs of the end user will be met; 2) using efficient, flexible, and robust practices to establish confidence in NAMs and reducing reliance on using animal data to define NAM performance; and 3) encouraging the adoption and use of NAMs by federal agencies and regulated industries. A list of NAMs accepted by US agencies can be found on the website of the US National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM), which supports ICCVAM’s work ( NICEATM, 2021 ).

2.2.2 Canada

The PMRA’s 2016–2021 strategic plan notes how rapidly the regulatory environment is evolving through innovations in science and puts an onus on the Agency to evolve accordingly ( HC, 2016d ). The strategic plan describes the drivers for this evolution, the importance of public confidence, and the Agency’s vision, mission, and key principles of scientific excellence, innovation, openness and transparency, and organizational and workforce excellence. The plan further mentions strategic enablers, which include building upon PMRA’s success in establishing and maintaining effective partnerships with provinces, territories, and other stakeholders both domestically and internationally.

2.2.3 European Union

The EU is a political and economic union of 27 European countries (Member States) and its operation is guaranteed through various legal instruments. Unlike regulations and decisions that apply automatically and uniformly to all countries as soon as they enter into force, directives require Member States to achieve a certain result by transposing them into national law. In 2010, Directive 2010/63/EU on the protection of animals used for scientific purposes ( EU, 2010 ) was adopted to eliminate disparities between laws, regulations, and administrative provisions of the Member States regarding the protection of animals used for experimental and other scientific purposes. Article 4 states that “wherever possible, a scientifically satisfactory method or testing strategy, not entailing the use of live animals, shall be used instead of a procedure,” which applies to all research purposes including regulatory toxicity testing ( EU, 2010 ). Further, the directive lays the foundation for retrospective analyses of animal experiments, mutual acceptance of data, as well as the European Commission and Member States’ contribution to the development and validation of NAMs.

In October 2020, the EU Chemicals Strategy for Sustainability (CSS) Towards a Toxic-Free Environment was published ( EC, 2020 ). It identified a need to innovate safety testing and chemical risk assessment to reduce dependency on animal testing while improving the quality, efficiency, and speed of chemical hazard and risk assessments. However, fulfilling its additional information requirements is more likely to lead to an increase in the number of animals used, and it remains unknown whether the implementation of the CSS will open opportunities for the application of more NAMs.

3 Industrial chemicals

3.1 United States

In the US, industrial chemicals are subject to regulation under the Toxic Substances Control Act (TSCA). TSCA was originally signed into law (15 US Code [USC] §2601 et seq. ) on 11 October 1976 with the intent “[t]o regulate commerce and protect human health and the environment by requiring testing and necessary use restrictions on certain chemical substances, and for other purposes” (Pub. L. 94-469, Oct. 11, 1976). TSCA was significantly amended in 2016 (Pub. L. 114-182, 22 June 2016). EPA is responsible for implementing and administering TSCA (see 15 USC §2601(c)) and OPPT, within EPA’s OCSPP, carries out much of that work.

TSCA provides EPA the authority to regulate new and existing chemical substances under Sections 5 and 6 of TSCA, respectively. Existing chemical substances are those on the TSCA Inventory, either those that were in commerce prior to the enactment of TSCA and grandfathered in, or those that OPPT evaluated as new chemical substances and were subsequently introduced into commerce. Entities that wish to introduce into commerce a new chemical substance, or an existing chemical substance with a significant new use, must submit a notification to OPPT (i.e., a pre-manufacture notice (PMN) or significant new use notice (SNUN)) or, where the exemption requires one (e.g., the low volume exemption), an exemption application, prior to manufacturing (including importing) the chemical substance.

Prior to the 2016 Amendments, when entities submitted a new chemical notification, no specific action by EPA was required. If EPA did not take regulatory action on the new chemical substance, the entity was allowed to manufacture the chemical substance at the expiration of the applicable review period (e.g., 90 days for a PMN). For existing chemicals, much of EPA’s TSCA activity was focused on data collection, including through Section 8 rules and test rules on chemical substances, such as those identified by EPA’s Interagency Testing Committee (ITC). The ITC was established under Section 4(e) of TSCA and was charged with identifying and recommending to the EPA Administrator chemical substances or mixtures that should be tested pursuant to Section 4(a) of TSCA to determine their hazard to human health or the environment. Although allowed by TSCA, EPA’s ability to regulate and restrict the use of existing chemical substances under Section 6 of TSCA was significantly impaired following a 1991 ruling by the US Court of Appeals for the Fifth Circuit ( Corrosion Proof Fittings vs. EPA , 947 F.2d 1201), which vacated much of EPA’s TSCA Section 6 rule to ban asbestos, a rule EPA had first announced as an advance notice of proposed rulemaking in 1979.

The above issues with TSCA—namely new chemical substances being automatically introduced into commerce if the “clock ran out” and EPA’s limited regulation of existing chemical substances under Section 6 of TSCA—garnered Congressional attention, which culminated on 22 June 2016. On that date, then-President Obama signed the Frank R. Lautenberg Chemical Safety for the 21st Century Act into law, thereby amending TSCA (Pub.L. 114-182, 2016). The TSCA amendments placed new requirements on EPA, including requirements to review and publish risk determinations on new chemical substances, prioritize existing chemical substances as either high- or low-priority substances, and perform risk evaluations on those chemical substances identified as high-priority substances. The TSCA amendments also included new requirements for EPA to comply with specific scientific standards for best available science and weight of the scientific evidence (WoE) under Sections 26(h)-(i) of TSCA when carrying out Sections 4, 5, and 6; a new requirement to reduce testing on vertebrate animals under Section 4(h) of TSCA; and a provision giving EPA the authority to require testing on existing chemical substances by order, rather than by rule, 1 under Section 4(a)(1) and (2) of TSCA.

The discussion that follows is focused on EPA’s authority under Section 4(h) to reduce testing on vertebrate animals, EPA’s use of this authority for new and existing chemical substances, and voluntary initiatives by the regulated community that have advanced the understanding and use of NAMs.

3.1.1 General requirements

TSCA does not contain upfront vertebrate toxicity testing requirements, which allows flexibility for the adoption of NAMs. Since the enactment of the TSCA amendments, EPA has used its authority to order testing on existing chemical substances, while meeting its requirements under Section 4(h) of TSCA ( EPA, 2022c ). Section 4(h) includes three primary provisions: (1) the aforementioned general requirements placed on EPA for reducing and replacing the use of vertebrate animals; (2) the requirements on EPA to promote the development and incorporation of alternative testing methods, including through the development of a strategic plan and a (non-exhaustive) list of NAMs identified by the EPA Administrator; and (3) the requirements on the regulated community to consider non-vertebrate testing methods when performing voluntary testing when EPA has identified an alternative test method or strategy to develop such information.

3.1.2 Regulatory flexibility

There are several sections of TSCA and its implementing regulations under which EPA may use NAMs to inform its science and risk management decisions. Data generated using NAMs may trigger reporting requirements on the regulated community. For example, under Section 8(e) of TSCA, results generated using NAMs could trigger a reporting obligation for substantial risk, for instance if the data meet the requirements under one of EPA’s policies, such as its policy on in vitro skin sensitization data. In its “Strategic Plan to Promote the Development and Implementation of Alternative Test Methods Within the TSCA Program,” OPPT lists criteria that provide a starting point for considering the scientific reliability and relevance of NAMs ( EPA, 2018d ); however, it has yet to issue official guidance to the regulated community on its interpretation of the criteria for accepting NAMs as meeting the scientific standards under Section 26(h) of TSCA. In addition, while OPPT has yet to issue official guidance on the criteria it uses to identify NAMs for inclusion on the list of methods approved by the EPA Administrator, the agency has presented a proposed nomination form, which provides some insight into EPA’s considerations ( Simmons and Scarano, 2020 ).

3.1.3 Implementation of NAMs

OPPT’s activities to implement NAMs have included issuing a “Strategic Plan to Promote the Development and Implementation of Alternative Test Methods Within the TSCA Program” ( EPA, 2018d ), establishing a list of approved NAMs ( EPA, 2018c ; EPA, 2019b ; EPA, 2021d ), and developing a draft policy allowing the use of NAMs for evaluating skin sensitization ( EPA, 2018b ). The latter is based on EPA’s participation in the development of the OECD guideline for Defined Approaches on Skin Sensitisation ( OECD, 2021a ). EPA has also performed significant outreach and collaboration to advance its understanding of NAMs, as well as educate the interested community about these technologies.

In March 2022, OPPT and ORD presented the TSCA new chemicals collaborative research effort for public comments ( EPA, 2022b ). This multi-year research action plan to bring innovative science to the review of new chemicals under TSCA includes: 1) refining chemical categories for read-across; 2) developing and expanding databases containing TSCA chemical information; 3) developing and refining Quantitative Structure-Activity Relationship (QSAR) and other predictive models; 4) exploring ways to apply NAMs in risk assessment; and 5) developing a decision support tool that will transparently integrate all data streams into a final risk assessment.

3.1.3.1 Examples of NAM application

Even before the 2016 amendments to TSCA, EPA had established numerous methods for assessing chemical substances. For example, since the 1980s, EPA has used structure-activity relationships (SAR) to assess the potential of new chemical substances to harm aquatic organisms, and an expert system to estimate the potential for carcinogenicity ( EPA, 1994 ).

In early 2021, OPPT issued test orders on nine existing chemical substances ( EPA, 2022c ). For each of the substances, OPPT ordered dermal absorption testing using an in vitro method validated by the OECD ( OECD, 2004 ) instead of animal testing. After consideration of existing scientific information, EPA determined that the in vitro method, which is included on its list of NAMs, could be used. While EPA required the in vitro testing on both human and animal skin, a report has since been published analyzing 30 agrochemical formulations, which supports the use of in vitro assays using human skin for human health risk assessment because they are as or more protective and are directly relevant to the species of interest ( Allen et al., 2021 ; EPA, 2021f ). In reviewing test plans or test data provided to be considered in lieu of the ordered testing, EPA consulted with the authors of Allen et al. (2021) and subsequently determined that it would be acceptable for the in vitro testing to be conducted on human skin only for the chemicals subject to these particular orders.

The interested community has also been actively developing robust NAMs that can be used for regulatory decision making. For example, an entity performed voluntary in chemico testing on a polymeric substance that OPPT had identified as a potential hazard. The substance was classified as a poorly soluble, low-toxicity substance that, if inhaled, may lead to adverse effects stemming from lung overload. OPPT issued a significant new use rule (SNUR) on this substance, which required any entity to notify EPA (submission of a SNUN) if the polymer is manufactured, processed, or used as a respirable particle (i.e., <10 μm) ( EPA, 2019c ). The SNUR listed potentially useful information for inclusion in a SNUN, which consisted of a 90-day subchronic inhalation toxicity study in rats. However, the entity voluntarily undertook an in chemico test in lieu of the in vivo toxicity study. The in chemico test showed that the daily dissolution rate of the polymer in simulated epithelial lung fluid exceeded the anticipated daily exposure concentrations, indicating that the polymer was not a hazard concern for lung overload. After evaluating these data, OPPT agreed with the results and issued a final rule revoking the SNUR ( EPA, 2020h ). These data were subsequently published in the peer-reviewed literature ( Ladics et al., 2021 ).
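The screening logic behind this determination can be sketched as a simple comparison of the rate at which deposited particles dissolve out of lung fluid against the rate at which they are deposited. The sketch below is illustrative only; the function name and all values are hypothetical placeholders, not figures from the actual submission.

```python
def lung_overload_screen(dissolution_ug_per_day: float,
                         deposition_ug_per_day: float) -> bool:
    """Return True when the mass of particles dissolving out of
    simulated epithelial lung fluid each day meets or exceeds the
    anticipated daily deposited mass, i.e. no accumulation (and
    hence no lung overload) is expected."""
    return dissolution_ug_per_day >= deposition_ug_per_day

# Hypothetical values for illustration only (µg/day)
if lung_overload_screen(dissolution_ug_per_day=50.0,
                        deposition_ug_per_day=5.0):
    print("Dissolution outpaces deposition: low lung-overload concern")
else:
    print("Potential accumulation: further testing may be warranted")
```

In practice the comparison would rest on measured dissolution kinetics and modelled deposited doses, but the decision rule reduces to this inequality.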

3.1.4 Consumer products

In addition to the regulation of individual chemical ingredients of household products under TSCA, the Federal Hazardous Substances Act (FHSA) requires appropriate cautionary labeling on certain hazardous household products to alert consumers to the potential hazard(s) that the products may present (15 USC §1261 et seq. ). However, the FHSA does not require manufacturers to perform any specific toxicological tests to assess potential hazards (e.g., systemic toxicity, corrosivity, sensitization, or irritation). CPSC is charged with administering the FHSA and issued guidance on the use of NAMs in 2021 ( CPSC, 2022 ). This document lays out the factors CPSC staff will use when evaluating NAMs, IATA, and any submitted data being used to support FHSA labeling determinations. CPSC’s 2012 Animal Testing Policy (16 Code of Federal Regulations [CFR] Part 1500) strongly encourages manufacturers to find alternatives to animal testing for assessing household products.

3.2 Canada

The Canadian Environmental Protection Act (CEPA, Statutes of Canada [SC] 1999, c.33) provides the legislative framework for industrial substances, including new chemical substances and those that are currently on the Canadian market (i.e., existing substances on the Domestic Substances List [DSL]), for the protection of the environment, for the well-being of Canadians, and to contribute to sustainable development. The Safe Environments Directorate in the HECSB of Health Canada and Environment and Climate Change Canada are jointly responsible for the regulation of industrial substances under the authority of CEPA.

Existing and new substances have different legal requirements under CEPA. Accordingly, depending on the program area, the requirements for and use of traditional data and NAM data are considered in various decision contexts, including screening, prioritization, and informing risk assessment decisions. Risk assessments consider various types and sources of information, as required or available for new or existing substances respectively, including physico-chemical properties, inherent hazard, biological characteristics, release scenarios, and routes of exposure, to determine whether a substance is or may become harmful according to the criteria set out in Section 64 of CEPA.

The Chemicals Management Plan (CMP) was introduced in 2006 to, in part, strengthen the integration of chemicals management programs across the Government of Canada ( HC, 2022e ). Key elements of the CMP have been the assessment of priority existing chemicals from the DSL, identified through categorization pursuant to obligations under CEPA, and the parallel pre-market assessment of new substances that are not on the DSL and are notified under the New Substances Notification Regulations made under CEPA.

Under the Existing Substances Risk Assessment Program (ESRAP), approximately 4,300 priority substances were assessed over three phases (2006–2021), requiring the development of novel methodologies and assessment strategies to address data needs as the program evolved from a chemical-by-chemical approach to the assessment of groups and classes of chemicals ( HC, 2021b ). The limited empirical toxicity data available for many of the priority substances necessitated the implementation of fit-for-purpose approaches, including the use of computational tools and read-across. Further, the use of streamlined approaches ( HC, 2018 ) helped the program address substances considered to be of low concern more efficiently. Building on experiences and achievements from the CMP to date, the Government of Canada continues to expand on the vision for modernization. This shift takes into consideration new scientific information regarding chemicals to support innovative strategies for priority setting and to maintain a flexible, adaptive, and fit-for-purpose approach to risk assessment to manage increasingly diverse and complex substances and mixtures ( HC, 2021b ; Bhuller et al., 2021 ).

The New Substances Program (NSP) is responsible for administering the New Substances Notification Regulations (NSNR, Statutory Orders and Regulations [SOR]/2005-247 and SOR/2005-248) of CEPA ( HC, 2022f ). These regulations ensure that new substances (chemicals, polymers, biochemicals, biopolymers, or living organisms) are not introduced into Canada before undergoing ecological and human health risk assessments, and that any appropriate or required control measures have been taken.

3.2.1 General requirements

Risk assessments conducted under CEPA use a WoE approach while also applying the precautionary principle. For existing substances on the DSL, there are no prescribed data requirements to inform the assessment of a substance to determine whether it is toxic or capable of becoming toxic as defined under Section 64 of CEPA. As such, an essential first step in the risk assessment process is the collection and review of a wide range of hazard and exposure information on each substance or group of substances from a variety of published and unpublished sources, stakeholders, and various databases ( HC, 2022d ).

The NSNR (Chemicals and Polymers) require that information be submitted in a New Substances Notification (NSN) prior to the import or manufacture of a new chemical, polymer, biochemical, or biopolymer in Canada. The NSNR (Chemicals and Polymers) also require that a notifier submit any other data in their possession relevant to the assessment. Subsection 15(1) of the NSNR (Chemicals and Polymers) states that the conditions and procedures used must be consistent with those set out in the OECD TGs that are current at the time the test data are developed, and should comply with GLP.

Information in support of a NSN may be obtained from alternative test protocols, WoE, read-across, as well as from (Q)SARs [calculation or estimation methods (e.g., EPI Suite)]. The NSP may use various NAMs in their risk assessment, and may accept (and has accepted) test data which use NAMs, as discussed in further detail below.

3.2.2 Regulatory flexibility

For existing substances on the DSL under CEPA, there are no set submission requirements prior to an assessment, which inherently presents the need for flexibility and the opportunity to integrate novel approaches. NAM data are often used to support the assessment of potential risk from data-poor substances. Since these data-poor substances are unlikely to have required or available guideline studies, NAMs, including computational modelling, in vitro assays, QSAR, and read-across, are used to address data needs, offering an opportunity for a risk-based assessment where this may have been challenging in the past ( HC, 2022a ). For new substances, the NSP supports ongoing NAM development, as well as monitoring studies, to provide information on levels of substances of interest in the environment; both are used to fill risk assessment data gaps. In 2021, the NSP published a draft updated Guidance Document for the Notification and Testing of New Substances: Chemicals and Polymers ( HC, 2021c ). Section 8.4 of this Guidance Document lists examples of accepted test methods, which could in the future include NAMs as they are shown to be scientifically valid. Under the NSNR, alternative approaches will be acceptable when, in the opinion of the NSP, they are determined to provide a scientifically valid measure of the endpoint under investigation that is deemed sufficient for the purposes of the risk assessment. NAM data are evaluated on a case-by-case basis and can form part of the WoE of an assessment.

3.2.3 Implementation of NAMs

Given the paucity of data available for many substances on the market, as well as for new substances, there is a long history of using alternative approaches for hazard identification and characterization in support of new and existing substances risk assessment decisions. Over the last 2 decades, a variety of NAMs have been used by different program areas to address information gaps for risk assessment. The approaches implemented have been fit-for-purpose and largely determined by the data need, the timeline, the type of chemical(s), and the level of complexity associated with the assessment ( HC, 2016a ). Most notably for existing substances, in silico models, (Q)SAR, and read-across have been the most widely used methods with the progressive adoption and expanded use of computational toxicology and automated approaches ongoing for both ESRAP and the NSP. More specific details on the evolution of the ESRAP under CEPA are highlighted in the CMP Science Committee meeting report ( HC, 2021b ).

No formal criteria for achieving regulatory acceptance of NAMs for existing substances have yet been published in Canada. However, experience and efficiencies have been gained through the strategic development and implementation of streamlined risk-based approaches that support rapid and robust decision-making. To this end, a number of science approach documents (SciADs) have been published describing and demonstrating the implementation of NAMs to evaluate the potential for environmental or human health risk from industrial substances ( HC, 2022c ). SciADs are published under Section 68 of CEPA and do not include regulatory conclusions; however, the approach and results described within a SciAD may form the basis for a risk assessment conclusion when used in conjunction with any other relevant and available information. Furthermore, the implementation of NAMs as described in SciADs can also be used to support the identification of priorities for data gathering, data generation, further scoping, and risk assessment ( HC, 2022c ).

In advancing the vision for progressive chemicals management programs, which includes reduced use of animals and integration of NAMs, it is recognized that there is an ongoing need to develop flexible, adaptive, and innovative approaches. Accordingly, the ESRAP continues to expand the use of computational and in vitro models as well as evidence integration strategies to identify and address emerging priority substances. Key to successful implementation moving forward are the productive partnerships with the international regulatory and research communities to continue to build confidence and harmonization for the use of alternative test methods and strategies in chemical risk assessment ( Krewski et al., 2020 ; Bhuller et al., 2021 ).

Data generated using NAMs may be accepted to fulfil any of the NSNR’s test data requirements for an NSN when, in the opinion of the NSP, such data are determined to provide a scientifically valid measure of the endpoint under investigation that is deemed sufficient for the purposes of the risk assessment. The NSP will assess if the method has been satisfactorily validated in terms of scientific rigor, reproducibility, and predictability. Guidance is provided to notifiers who wish to submit information using NAMs during Pre-Notification Consultation meetings with NSP staff, or notifiers can consult Sections 5.4 and 8.4 of the respective Guidance Document ( HC, 2021c ). Alternative methods that may be accepted by the NSP to meet NSNR requirements include any internationally recognized and accepted test methods (e.g., in vitro skin irritation, gene mutation, and chromosomal aberration). Data such as (Q)SAR, read-across (greater than 80% structural similarity), and WoE may be accepted on a case-by-case basis.
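The “greater than 80% structural similarity” criterion for read-across is commonly quantified with a fingerprint-based similarity metric; the Tanimoto (Jaccard) coefficient is one widely used choice, although the guidance does not prescribe a specific algorithm. The sketch below uses toy fingerprints represented as sets of “on” bit positions; all names and values are illustrative.

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto (Jaccard) coefficient between two structural
    fingerprints, each given as the set of 'on' bit positions."""
    if not fp_a and not fp_b:
        return 1.0  # two empty fingerprints are trivially identical
    shared = len(fp_a & fp_b)
    return shared / (len(fp_a) + len(fp_b) - shared)

# Toy fingerprints for a data-poor candidate and a data-rich analogue
candidate = {1, 4, 7, 9, 12, 15, 21, 30, 33, 40}
analogue = {1, 4, 7, 9, 12, 15, 21, 30, 33, 41}

similarity = tanimoto(candidate, analogue)  # 9 shared bits of 11 total
print(f"similarity = {similarity:.2f}, "
      f"above 0.8 threshold: {similarity > 0.8}")
```

In real submissions the fingerprints would come from cheminformatics software rather than hand-coded sets, and similarity is only one line of evidence alongside mechanistic and metabolic considerations.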

3.2.3.1 Examples of NAM applications

As noted above, beyond the use of in silico models and read-across, examples of NAM applications for existing substances have been published as SciADs outlining NAM-based methods for prioritization and assessment ( HC, 2022c ). Specifically, the SciAD “Threshold of Toxicological Concern (TTC)-based Approach for Certain Substances” has been applied to evaluate a subset of existing substances on the DSL that were identified as priorities for assessment under subsection 73(1) of CEPA and/or considered priorities based on human health concerns ( HC, 2016c ). More recently, the SciAD “Bioactivity exposure ratio: Application in priority setting and risk assessment approach” was developed, outlining a quantitative risk-based approach to identify substances of greater potential concern or substances of low concern for human health ( HC, 2021f ). This proposed approach for NAM application builds on a broad retrospective analysis under the APCRA ( Paul Friedman et al., 2020 ) and considers high-throughput in vitro bioactivity together with high-throughput toxicokinetic modelling to derive an in vitro-based point of departure (POD). As technologies continue to advance and additional sources of NAM data emerge, these may also be considered in the ongoing expansion of the approach to support the derivation of molecular-based PODs as part of a tiered testing scheme. Further work is underway to build approaches for the interpretation of transcriptomics data and to enhance the use of QSAR and machine learning to enrich evidence integration and WoE evaluation using IATA frameworks across toxicological endpoints of regulatory relevance.
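In outline, a bioactivity exposure ratio compares an in vitro point of departure, converted to an external dose by reverse dosimetry, against an exposure estimate; a large ratio suggests lower priority for further assessment. The sketch below is a simplified illustration with hypothetical values and a toy steady-state conversion, not the SciAD's actual models or data.

```python
def administered_equivalent_dose(pod_in_vitro_uM: float,
                                 css_uM_per_mg_kg_day: float) -> float:
    """Reverse dosimetry: convert an in vitro POD (µM) to an external
    dose (mg/kg-bw/day) using a modelled steady-state plasma
    concentration per unit daily dose."""
    return pod_in_vitro_uM / css_uM_per_mg_kg_day

def bioactivity_exposure_ratio(aed_mg_kg_day: float,
                               exposure_mg_kg_day: float) -> float:
    """Higher values indicate a larger margin between the dose
    expected to produce bioactivity and the estimated exposure."""
    return aed_mg_kg_day / exposure_mg_kg_day

# Hypothetical inputs for a single substance
pod_uM = 1.5      # lower-bound in vitro bioactive concentration
css = 2.0         # µM in plasma per 1 mg/kg-bw/day (from a TK model)
exposure = 1e-4   # estimated exposure, mg/kg-bw/day

aed = administered_equivalent_dose(pod_uM, css)   # 0.75 mg/kg-bw/day
ber = bioactivity_exposure_ratio(aed, exposure)   # 7500
print(f"AED = {aed:.3g} mg/kg-bw/day, BER = {ber:.3g}")
```

Under this scheme, substances with small ratios would be flagged as higher priority for data gathering or assessment, consistent with the SciAD's stated aim of separating substances of greater and lower potential concern.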

New substances are inherently data-poor substances and, as a result, the NSP typically accepts a variety of alternative approaches and NAM data to meet data requirements under the NSNR. QSAR data and read-across data using analogues have historically been used to meet data requirements under the NSNR, particularly for physico-chemical data requirements or in combination with other data to provide a WoE for toxicity data. More recently, newly validated in vitro methods for skin irritation and skin sensitization ( OECD, 2021a ) have been accepted to meet data requirements under the NSNR. The NSP participates in active research programs to develop NAMs for complex endpoints, such as genotoxicity and systemic toxicity. Although not a regulatory requirement, in vitro eye irritation tests are also frequently received by the NSP.

3.3 European Union

In 2006, EU chemicals policy was substantially revised through the adoption of Regulation (EC) No 1907/2006 concerning the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) ( EC, 2006 ). REACH entered into force on 1 June 2007 and introduced a single system for the regulation of chemicals, transferring the burden of proof concerning the risk assessment of substances from public authorities to companies. The purpose of REACH, according to Article 1(1), is to “ensure a high level of protection of human health and the environment, including the promotion of alternative methods for assessment of hazards of substances, as well as the free circulation of substances on the internal market while enhancing competitiveness and innovation” ( EC, 2006 ). The Regulation established ECHA to manage and implement the technical, scientific, and administrative aspects of REACH. Enforcement of REACH is each EU Member State’s responsibility; ECHA therefore has no direct enforcement responsibilities ( ECHA, n.d.a ). In addition to REACH, Regulation (EC) 1272/2008 on classification, labelling and packaging of substances and mixtures (CLP Regulation) ( EC, 2008b ) was introduced to align the EU chemical hazard labeling system with the United Nations Economic Commission for Europe (UNECE)’s Globally Harmonised System of Classification and Labelling of Chemicals (GHS). At the time of submission of this manuscript, both REACH and the CLP Regulation were undergoing extensive revision.

3.3.1 General requirements

REACH applies to all chemical substances; however, certain substances that are regulated by other legislation ( e.g. , biocides, PPPs, or medical drugs) may be (partially) exempted from specific requirements ( ECHA, n.d.f ). Substances used in cosmetic products remain a contentious issue: they are subject to an animal testing ban under the EU regulation on cosmetic products ( EC, 2009b ), yet ECHA continues to request new in vivo testing under certain circumstances, such as for risk assessment of worker exposure ( ECHA, 2014 ). The interplay between the two regulations is under review by the European Court of Justice (‘Symrise v ECHA’ (2021), T-655/20, ECLI:EU:T:2021:98 and ‘Symrise v ECHA’ (2021), T-656/20, ECLI:EU:T:2021:99).

Whilst REACH is not a pre-marketing approval process in the strictest sense, it works on the principle of no data, no market, with responsibility placed on registrants to manage the risks from chemicals and to provide safety information on the substances. Thus, companies bear the burden of proof to identify and manage the risks linked to the substances they manufacture or import and place on the market in the EU. They must demonstrate how the substance can be safely used and must communicate the risk management measures to the users. Companies must register with ECHA the chemical substances they manufacture or import into the EU at more than one tonne per year. The registration requirement under REACH “applies to substances on their own, in mixtures, or, in certain cases, in articles” ( ECHA, n.d.c ). Registration is governed by the “one substance, one registration” principle, where manufacturers and importers of the same substance must submit their registration jointly. Companies must collect information on the properties and uses of their substances and must assess both the hazards and potential risks presented by these substances. The companies compile all of this information in a registration dossier and submit it to ECHA. The standard information requirements for the registration dossier depend on the tonnage band of the chemical substance ( ECHA, n.d.b ). The information required is specified in Annexes VI to X of REACH and includes physico-chemical, toxicological, and ecotoxicological data.

ECHA receives and evaluates individual registrations for their compliance ( ECHA, n.d.f ). EU Member States evaluate certain substances to clarify initial concerns for human health or for the environment. ECHA’s scientific committees assess whether any identified risks from a hazardous substance are manageable, or whether that substance must be banned. Before imposing a ban, authorities can also decide to restrict the use of a substance or make it subject to a prior authorization.

The CLP regulation requires that relevant information on the characteristics of a substance, classification of toxicity endpoints, and pertinent labelling of a substance or substances in mixtures be notified to ECHA when placed on the EU market ( EC, 2008b ). In this way, the toxicity classification and labeling of the substance are harmonized both for chemical hazard assessment and consumer risk. In cases where there are significant divergences of scientific opinion, further review of scientific data can proceed ( EC, 2008b ). New testing is normally not requested for CLP purposes alone unless all other means of generating information have been exhausted and data of adequate reliability and quality are not available ( ECHA, n.d.e ).

The discussion that follows is focused on the EU’s efforts under REACH to reduce testing on vertebrate animals to assess human health effects. This concept lies at the very foundation of REACH, which states in the second sentence of the Preamble that it should “promote the development of alternative methods for the assessment of hazards of substances” ( EC, 2006 ).

3.3.2 Regulatory flexibility

According to Article 13(1) of REACH, “for human toxicity, information shall be generated whenever possible by means other than vertebrate animal tests, through the use of alternative methods, for example, in vitro methods or qualitative or quantitative structure-activity relationship models or from information from structurally related substances (grouping or read-across)” ( EC, 2006 ). Further, according to Article 13(2), the European Commission may propose amendments to the REACH Annexes and the Commission Regulation, which lists approved test methods ( EC, 2008a ), to “replace, reduce or refine animal testing.” Under Title III of REACH, on Data Sharing and Avoidance of Unnecessary Testing, Article 25(1) requires that testing on vertebrate animals must be undertaken only as a last resort; however, the interpretation of Articles 13 and 25 of REACH is often a matter of dispute in cases before the European Court of Justice (‘Federal Republic of Germany v Esso Raffinage’ (2021), C-471/18 P, ECLI:EU:C:2021:48), the ECHA Board of Appeal ( e.g. , cases A-005-2011 and A-001-2014), and the European Ombudsman (cases 1568/2012/(FOR)AN, 1606/2013/AN and 1130/2016/JAS).

In addition, to reduce animal testing and duplication of tests, study results from tests involving vertebrate animals should be shared between registrants ( EC, 2006 ). Furthermore, where a substance has been registered within the last 12 years, a potential new registrant must, according to Article 27, request from the previous registrant all information relating to vertebrate animal testing that is required for registration of the substance. Before the deadline to register all existing chemicals by 31 May 2018, companies (i.e., manufacturers, importers, or data owners) registering the same substance were legally required to form substance information exchange fora (SIEFs) to help exchange data and avoid duplication of testing for existing chemicals ( EC, 2006 ).

REACH standard information requirements for registration dossiers contain upfront testing requirements on vertebrate animals, with some flexibility to allow the use of NAMs. Registrants are encouraged to collect all relevant available information on the substance, including any existing data (human, animal, or NAMs), (Q)SAR predictions, information generated with analogue chemicals (read-across), and in chemico and in vitro tests. In addition, REACH foresees that generating the information required in Annexes VII-X may sometimes not be necessary or possible. In such cases, the standard information for the endpoint may be waived. Criteria for waiving are outlined in Column 2 of Annexes VII-X, while criteria for adapting standard information requirements are described in Annex XI of REACH ( ECHA, 2016a ). In addition to the use of OECD test guidelines, data from in vitro methods that meet internationally agreed pre-validation criteria as defined in OECD GD 34 are considered suitable for use under REACH when the results from these tests indicate a certain dangerous property. However, negative results obtained with pre-validated methods have to be confirmed with the relevant in vivo tests specified in the Annexes. Whether the ongoing revision of the REACH and CLP regulations will bring about opportunities to include more NAMs in the assessment of industrial chemicals or lead to an increase in animal testing remains to be seen.

3.3.3 Implementation of NAMs

The REACH annexes were amended in 2016 and 2017 to require companies to use NAMs for certain endpoints under certain conditions. Following these amendments, the use of non-animal tests has tripled for skin corrosion/irritation, quadrupled for serious eye damage/eye irritation, and increased more than 20-fold for skin sensitization ( ECHA, 2020 ).

REACH requires that robust study summaries be published on the ECHA website. This helps registrants identify additional data for their registrations and facilitates the identification of similar or identical substances ( ECHA, 2020 ). ECHA’s public chemical database may also be used to conduct retrospective data analyses and other research efforts, when the level of detailed data needed are present in such reports ( Luechtefeld et al., 2016a ; Luechtefeld et al., 2016b ; Luechtefeld et al., 2016c ; Luechtefeld et al., 2018 ; Knight et al., 2021 ).

ECHA engages in OECD expert groups and reviews test guidelines for both animal and non-animal methods. For example, ECHA contributed to the in vitro OECD test guidelines for skin and eye irritation in 2016 and skin sensitization in 2017. In addition, ECHA was involved in the finalization of the OECD “Defined Approaches on Skin Sensitisation Test Guideline” ( OECD, 2021a ). In October 2021, ECHA published advice on how REACH registrants can use the defined approaches guideline; this was the first official guidance outlining how to use in silico tools, such as the QSAR Toolbox, to assess skin sensitization ( ECHA, 2021 ). ECHA also engages in large-scale European research projects (e.g., EU-ToxRisk), reviewing mock dossiers based on NAMs developed in these projects.

Before registrants conduct higher-tier tests for assessing the safety of chemicals they import or manufacture, Article 40 of REACH requires that they submit details on their testing plans to ECHA ( ECHA, n.d.d ). In that submission, companies must detail how they considered NAMs before proposing an animal test. ECHA must agree on these proposals before a company can conduct a new animal test under Annex IX or X. ECHA may reject, accept, or modify the proposed test. As required by REACH, all testing proposals involving testing on vertebrate animals are published on ECHA’s website to allow citizens and organizations the opportunity to provide information and studies about the substance in question (ECHA, n.d.d). ECHA will inform the company that submitted the testing proposal of the Member State Committee’s decision and is required to take into account all studies and scientifically valid information submitted as part of the third-party consultation when making its decision.

3.3.3.1 Examples of NAM application

The most commonly used NAM under REACH is the read-across approach, where relevant information from analogous substances is used to predict the properties of target substances ( ECHA, 2020 ). Before read-across is accepted by ECHA, it must be justified by the registrant; to facilitate its use, ECHA developed a read-across assessment framework ( ECHA, 2017 ). Additionally, ECHA holds expert meetings with stakeholders, including industry representatives and NGOs, to enhance and combine knowledge and to avoid overlap and duplication. Thus, ECHA encourages companies to avoid duplicate animal tests and to share any data they have on their substance if requested by a registrant of an analogous substance. For example, based on in vitro ToxTracker assay results and read-across data from the analogue substance aminoethylpiperazine, ECHA has not requested in vivo genotoxicity data for N,N,4-trimethylpiperazine-1-ethylamine, which was registered by two companies in a joint submission ( ECHA, 2019 ).

4 Pesticides and plant protection products

4.1 United States

The Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA; 7 USC §136) requires all pesticides sold or distributed in the US to be registered with the EPA, unless otherwise exempted. EPA then has authority under the Federal Food, Drug, and Cosmetic Act (FD&C Act; 21 USC §301 et seq. ) to set the maximum amount of pesticide residues permitted to remain in/on food commodities or animal feed, which are referred to as tolerances. In 1996, both of these statutes were amended by the Food Quality Protection Act (FQPA), which placed new requirements on EPA, including making safety findings (i.e., “a reasonable certainty of no harm”) when setting tolerances (Pub.L. 104-170, 1996).

OPP, within EPA’s OCSPP, is delegated the authority to administer the above laws and is responsible for pesticide evaluation and registration. This includes registration of new pesticide active ingredients and products, as well as new uses for currently registered pesticides. Additionally, OPP reviews each registered pesticide at least every 15 years as part of the Registration Review process to determine whether it continues to meet registration standards. A pesticide product may not be registered unless the EPA determines that the pesticide product will not cause unreasonable adverse effects on the environment (as defined by 7 USC §136(bb)).

4.1.1 General requirements

Data requirements for pesticide registration are dependent on the type of pesticide (i.e., conventional, biopesticide, or antimicrobial) and use pattern (e.g., food versus non-food, or anticipated routes of exposure) and are laid out in 40 CFR Part 158. Unlike TSCA, FIFRA and its implementing regulations require substantial upfront testing to register a pesticide in the US, such as product chemistry data to assess labeling, product performance data to support claims of efficacy, studies to evaluate potential hazards to humans, studies to evaluate potential hazards to non-target organisms, environmental fate data, and residue chemistry and exposure studies to determine the nature and magnitude of residues. The data are used to conduct comprehensive risk assessments to determine whether a pesticide meets the standard for registration.

4.1.2 Regulatory flexibility

US regulations give EPA substantial discretion to make registration decisions based on data that the Agency deems most relevant and important for each action. As stated in the CFR, under Section 158.30, the studies and data required may be modified on an individual basis to fully characterize the use and properties of specific pesticide products under review. Also, the data requirements may not always be considered appropriate. For instance, the properties of a chemical or an atypical use pattern could make it impossible to generate the required data or the data may not be considered useful for the evaluation. As a result, Section 158.45 permits OPP to waive data requirements as long as there are sufficient data to make the determinations required by the applicable statutory standards.

To assist staff in focusing on the most relevant information and data for assessment of individual pesticides, OPP published “Guiding Principles for Data Requirements” ( EPA, 2013a ). The document describes how to use existing information about a pesticide to identify critical data needs for the risk assessment, while avoiding generation of data that will not materially influence a pesticide’s risk profile and ensuring there is sufficient information to support scientifically sound decisions. When data from animal testing will not contribute to decision making, OPP has developed processes to waive guideline studies and/or apply existing toxicological data for similar substances (i.e., bridging). Detailed guidance on the scientific information needed to support a waiver or bridging justification has been developed by OPP for acute ( EPA, 2012 ; EPA, 2016a ; EPA, 2020c ) and repeat dose ( EPA, 2013b ) mammalian studies.

Interdivisional expert committees within OPP are tasked with considering waiver requests on a case-by-case basis. The Hazard and Science Policy Council (HASPOC) is tasked with evaluating requests to waive most guideline mammalian toxicity studies, except acute systemic lethality and irritation/sensitization studies (which are referred to as the acute six-pack). HASPOC is composed of toxicologists and exposure scientists from divisions across OPP focused on conducting human health risk assessments, and it utilizes a WoE approach described in its guidance on “Part 158 Toxicology Data Requirements: Guidance for Neurotoxicity Battery, Subchronic Inhalation, Subchronic Dermal and Immunotoxicity Studies” ( EPA, 2013b ). This includes consideration of multiple lines of evidence, such as physico-chemical properties, information on exposure and use pattern, toxicological profiles, pesticidal and mammalian mode of action information, and risk assessment implications. Although this guidance was developed to address particular toxicity studies, the same general WoE approach is applied by HASPOC when considering the need for other toxicity studies for pesticide regulatory purposes. Between 2012 and 2018, the most common studies requested to be waived were acute and subchronic neurotoxicity, subchronic inhalation, and immunotoxicity studies ( Craig et al., 2019 ). For the acute six-pack studies, the Chemistry and Acute Toxicology Science Advisory Council (CATSAC) was formed to consider bridging proposals and/or waivers using the aforementioned waiving and bridging guidance documents. For example, following a retrospective analysis, the agency released guidance for waiving acute dermal toxicity tests ( US EPA, OCSPP, and OPP, 2016 ). The progress of HASPOC and CATSAC is continuously tracked and reported on an annual basis ( Craig et al., 2019 ; EPA, 2020a ; EPA, 2021b ).

Beyond waiving studies that do not contribute to regulatory decision making, OPP has the ability to use relevant NAMs to replace, reduce, and refine animal studies. The CFR provides OPP with considerable flexibility under Section 158.75 to request additional data beyond the Part 158 data requirements that may be important to the risk management decision. NAMs can be considered and accepted for these additional data, when appropriate.

4.1.3 Implementation of NAMs

Several documents describe OPP’s strategies to reduce reliance on animal testing and incorporate relevant NAMs. For example, in addition to overarching EPA strategic plans (see Section 2.1.2.1.), OPP consulted the FIFRA Scientific Advisory Panel (SAP) on strategies and initial efforts to incorporate molecular science and emerging in silico and in vitro technologies into an enhanced IATA ( EPA, 2011 ). The long-term goal identified for this consultation was a transition from a paradigm that requires extensive in vivo testing to a hypothesis-driven paradigm where NAMs play a larger role.

Unlike OPPT, which is required under TSCA to maintain a (non-exhaustive) list of accepted NAMs, OPP has no similar statutory requirement. However, OPP does maintain a website with strategies for reducing and replacing animal testing based on studies and approaches that are scientifically sound and supportable ( EPA, 2022a ). For many of these strategies, OPP has worked closely with other EPA offices, including OPPT and ORD, to develop and implement plans and tools that advance NAMs. Additionally, OPP works with a wide range of external organizations and stakeholders, including other US federal agencies, international regulatory agencies, animal protection groups, and pesticide registrants.

These collaborations have resulted in several agency documents for specific NAM applications. As mentioned in previous sections, there have been national and international efforts to develop defined approaches for skin sensitization, in which OPP participated along with OPPT, PMRA, ECHA, and other stakeholders. In 2018, OPP and OPPT jointly published a draft policy on the use of alternative approaches ( in silico , in chemico, and in vitro ) to evaluate skin sensitization in lieu of animal testing; the approaches outlined in the draft policy were accepted upon its release ( EPA, 2018b ). As international work develops through the OECD, this policy will be updated to accept additional defined approaches as appropriate. OPP also has a policy on the “Use of an Alternate Testing Framework for Classification of Eye Irritation Potential of EPA Pesticide Products,” which focuses on the testing of antimicrobial cleaning products but can be applied to conventional pesticides on a case-by-case basis ( EPA, 2015 ).

Collaborative efforts have also resulted in numerous publications in scientific journals that allow for communication of scientific advancements and analyses, while building confidence in NAM approaches that can support regulatory decisions. For example, analyses have been published demonstrating that many of the in vitro or ex vivo methods available for eye irritation are equivalent or scientifically superior to the rabbit in vivo test ( Clippinger et al., 2021 ). Additionally, OPP established a pilot program to evaluate a mathematical tool (the GHS Mixtures Equation) as an alternative to animal acute oral and inhalation toxicity studies for pesticide formulations. After closing the submission period in 2019, OPP worked with NICEATM to conduct retrospective analyses, which demonstrated the utility of the GHS Mixtures Equation to predict oral toxicity, particularly for formulations with lower toxicity ( Hamm et al., 2021 ). Furthermore, OPP participated in a project to rethink chronic toxicity and carcinogenicity assessment for agrochemicals (called “ReCAAP”). The workgroup, consisting of scientists from government, academia, non-governmental organizations, and industry stakeholders, aimed to develop a reporting framework to support a WoE safety assessment without conducting long-term rodent bioassays. In 2020, an EPA Science Advisory Board meeting was held to discuss reducing the use of animals for chronic and carcinogenicity testing, which included comment on the ReCAAP project ( EPA, 2020f ), and feedback from the consultation was incorporated into a published framework ( Hilton et al., 2022 ).
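The GHS Mixtures Equation referenced above is an additivity formula: the acute toxicity estimate (ATE) of a mixture is calculated from the concentrations and ATEs of its ingredients via 100/ATE_mix = Σ(C_i/ATE_i). A minimal sketch of that calculation follows; the function name and example values are illustrative only, and the GHS correction for ingredients of unknown toxicity is omitted for simplicity.

```python
def ate_mix(components):
    """Acute toxicity estimate (ATE) of a mixture via the GHS
    additivity equation: 100 / ATE_mix = sum(C_i / ATE_i).

    components: iterable of (concentration_percent, ate) pairs, where
    concentration_percent is the ingredient's share of the mixture (%)
    and ate is its acute toxicity estimate (e.g., oral LD50 in mg/kg).
    Ingredients of unknown toxicity, which GHS handles with a separate
    correction term, are simply omitted in this sketch.
    """
    denominator = sum(c / ate for c, ate in components)
    if denominator == 0:
        raise ValueError("need at least one component with a known ATE")
    return 100.0 / denominator

# Hypothetical formulation: 10% of an ingredient with oral LD50 300 mg/kg
# and 5% with LD50 2000 mg/kg; the remainder is treated as non-toxic.
print(round(ate_mix([(10, 300), (5, 2000)]), 1))
```

Because the equation is driven by the most toxic ingredients at the highest concentrations, low-toxicity formulations tend to be predicted well, consistent with the retrospective findings noted above.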

4.1.3.1 Examples of NAM application

OPP has recently used NAMs to derive points of departure for human health risk assessment. For isothiazolinones, which are material preservatives that are known dermal sensitizers, NAMs were utilized to support a quantitative assessment ( EPA, 2020g ). In chemico and in vitro assays were performed on each chemical to derive concentrations that can cause induction of skin sensitization, and these were used as the basis of the quantitative dermal sensitization evaluation. The NAM approaches used in the assessment have been shown to be more reliable, more human-relevant, and more mechanistically driven than the reference test method, the mouse local lymph node assay, and to better predict human sensitizing potency ( EPA, 2020d ).

In addition, as part of a registration review, a NAM approach was used to evaluate inhalation exposures for the fungicide chlorothalonil, which is a respiratory contact irritant ( EPA, 2021c ). The approach utilizes an in vitro assay to derive an inhalation point of departure in conjunction with in silico dosimetry modeling to calculate human equivalent concentrations for risk assessment ( Corley et al., 2021 ; McGee Hargrove et al., 2021 ). The approach, which was reviewed and supported by a FIFRA Scientific Advisory Panel ( EPA, 2018a ), provided an opportunity to overcome challenges associated with testing respiratory irritants, while also incorporating human relevant information.

Further, OPP has been shifting its testing focus from developmental neurotoxicity (DNT) guideline studies to more targeted testing approaches. In addition to evaluating life stage sensitivity with studies based on commonly accepted modes of action, such as comparative cholinesterase assays and comparative thyroid assays, researchers from ORD have participated in an international effort over the past decade to develop a battery of NAMs for fit-for-purpose evaluation of DNT ( Fritsche et al., 2017 ; Bal-Price et al., 2018a ; Bal-Price et al., 2018b ; Sachana et al., 2019 ). As part of this effort, ORD researchers developed in vitro assays using microelectrode array network formation (MEA NFA) and high-content imaging (HCI) platforms to evaluate critical neurodevelopmental processes. Additional in vitro assays have been developed by researchers funded by EFSA and, together with the ORD assays, form the current DNT NAM battery. The FIFRA SAP supported the use of the data generated by the DNT NAM battery as part of a WoE for evaluating DNT potential and recognized the potential for the battery to continuously evolve as the science advances ( EPA, 2020i ). The OECD DNT expert group, which includes staff from OPP and ORD as well as representatives from other US agencies (e.g., NTP, FDA), is also considering several case studies on integrating the DNT battery into an IATA. Furthermore, data from the battery, along with toxicokinetic assessment and available in vivo data, were recently used in a WoE to support a DNT guideline study waiver ( Dobreniecki et al., 2022 ).

OPP also collaborated with NICEATM to complete retrospective analyses of dermal penetration triple pack studies ( Allen et al., 2021 ). A triple pack consists of an in vivo animal study and in vitro assays using human and animal skin; it is used to derive dermal absorption factors (DAFs), which convert oral doses to dermal-equivalent doses when assessing the potential risk associated with dermal exposures. The retrospective analyses demonstrated that, with limited exceptions, in vitro studies alone provide similar or more protective estimates of dermal absorption. The use of human skin for human health risk assessment has the added advantage of being directly relevant to the species of interest and avoiding the overestimation of dermal absorption seen with rat models. These analyses are being used by OPP to support its consideration of results from acceptable in vitro studies in its WoE evaluations to determine an appropriate DAF for human health risk assessment on a chemical-by-chemical basis.
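The role of the DAF described above is simple arithmetic: the external dermal exposure is multiplied by the DAF to obtain an absorbed dose, which can then be compared against an oral point of departure to yield a margin of exposure. A minimal sketch with hypothetical values follows; the function name and numbers are illustrative and not drawn from any specific OPP assessment.

```python
def dermal_moe(oral_pod, dermal_exposure, daf):
    """Margin of exposure (MOE) for a dermal scenario using an oral
    point of departure (POD) and a dermal absorption factor (DAF).

    oral_pod: oral POD (mg/kg bw/day)
    dermal_exposure: external dermal exposure estimate (mg/kg bw/day)
    daf: fraction of the external dermal dose absorbed (0-1)

    The DAF converts external dermal exposure into an internal
    (absorbed) dose that can be compared against the oral POD.
    """
    absorbed_dose = dermal_exposure * daf
    return oral_pod / absorbed_dose

# Illustrative only: oral POD of 10 mg/kg bw/day, dermal exposure of
# 2 mg/kg bw/day, and 15% dermal absorption.
print(round(dermal_moe(10, 2, 0.15), 1))
```

Because the MOE scales inversely with the DAF, an overestimated absorption fraction (as seen with rat skin models) shrinks the calculated margin, which is why a lower, human-skin-derived DAF can still be the more scientifically defensible choice.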

4.2 Canada

In Canada, pest control products and the corresponding technical grade active ingredients are regulated under the Pest Control Products Act (PCPA; SC 2002, c.28). The PCPA and its associated Regulations govern the manufacture, possession, handling, storage, transport, importation, distribution, and use of pesticides in Canada. Pesticides, as defined in the PCPA, are designed to control, destroy, attract, or repel pests, or to mitigate or prevent their injurious, noxious, or troublesome effects. However, the very properties and characteristics that make pesticides effective for their intended purposes may also pose risks to people and the environment.

PMRA is the branch of Health Canada responsible for regulating pesticides under the authority of the PCPA. Created in 1995, PMRA consolidates the resources and responsibilities for pest management regulation in Canada. PMRA’s primary mandate is to prevent unacceptable risks to Canadians and the environment from the use of these products. Section 7 of the PCPA provides the authority for PMRA to apply modern, evidence-based scientific approaches to assess whether the health and environmental risks of pesticides proposed for registration (or amendment) are acceptable, and that the products have value. Section 16 of the PCPA provides the legislative oversight for PMRA to take the same approach when regularly and systematically reviewing whether pesticides already on the Canadian market continue to meet modern scientific standards. PMRA’s guidance document “A Framework for Risk Assessments and Risk Management of Pest Control Products” provides the well-defined and internationally recognized approach to risk assessment, management, and decision-making. This framework includes insights on how interested and affected parties are involved in the decision-making process. It also describes the components of the risk (health and environment) and value assessments. For example, the value assessment’s primary consideration is whether the product is efficacious. In addition, this assessment contributes to the establishment of the use conditions required to assess human health and environment risks ( HC, 2021a ).

4.2.1 General requirements

In Canada, many pest control products are categorized as conventional chemicals, and include insecticides, fungicides, herbicides, antimicrobials, personal insect repellents, and certain companion animal products such as spot-on pesticides for flea and tick control. Non-conventional chemicals, such as biopesticides (e.g., microbial pest control agents) and essential oil-based personal insect repellents, are also regulated under the PCPA.

The scope of the information provided in this section is most applicable to the health component of the risk assessment for domestic registrations of conventional chemicals (the end-use product and active ingredient). The information provided hereafter excludes the value and environment components, along with products such as food items (e.g., table salt) that are of interest to organic growers in Canada. Biopesticides and non-conventionals are also outside the scope of this paper.

PMRA relies on a system that links the data requirements (data-code or DACO tables) to proposed use-sites, which are organized using three categories: Agriculture, Industry, and Society ( HC, 2006 ). Given that pest control products can be used on more than one use-site, these sites are further sub-categorized. For example, PMRA’s use-site category 14 is for “Terrestrial Food Crops” and includes crops grown outdoors as a source for human consumption ( HC, 2013b ). The system of linking DACOs with use-site categories is similar to what is used by the US EPA and internationally by the OECD ( HC, 2006 ). The PMRA DACO tables include required (R) and conditionally required (CR) data that are tailored for each use site and take into consideration potential routes, durations, and sources of exposure to humans and the environment. It is important to note that the CR data are only required under specified conditions. In addition, PMRA will consider a request to waive any data requirement, but such waiver requests must be supported by a scientific rationale demonstrating that the data are not required to ensure the protection of human health. In particular, PMRA published a guidance document for waiving or bridging of mammalian acute toxicity tests for pesticides in 2013 ( HC, 2013a ). This document served as the starting point for the development and subsequent release of the 2017 OECD technical document on the same subject ( OECD, 2017 ).

4.2.2 Regulatory flexibility

The specific data requirements for the registration of pest control products in Canada are not prescribed in legislation under the PCPA. PMRA, therefore, has greater flexibility in either adopting or adapting methods under the PCPA in comparison to other jurisdictions where these data requirements are established in law. Thus, while the PCPA provides the overarching components for the assessments (i.e., health, environment, and value), it also provides the flexibility to use policy instruments and guidance documents to detail the data requirements that satisfy these legislative components. This approach also provides the opportunity for PMRA to engage all stakeholders through webinars, meetings, and public consultations when developing or making major changes to these documents. This open and transparent approach is aligned with PMRA’s strategic plan ( HC, 2016d ), which includes incorporating modern science by building scientific, regulatory, and public confidence in these approaches through collaborative processes. The ability to rely on policy instruments and guidance documents does not preclude PMRA from making regulatory changes when necessary; however, the experience thus far with NAMs supports the current approach of relying on multi-stakeholder collaborations that result in guidance documents, science policy notes, and/or articles published in reputable scientific journals.

4.2.3 Implementation of NAMs

PMRA’s 2019-2020 annual report highlights the 25th anniversary of this branch of Health Canada while noting a major transformation initiative for the pesticide program ( HC, 2021e ). Building upon the strategic plan (see Section 2.2.2 ), the program renewal project considers the changing landscape and the need for PMRA to keep pace with this change. The 2019-2020 and 2020-2021 reports include a section on evaluating new technologies, which includes opportunities to reduce animal testing wherever possible. Specifically, the use of NAMs, including in vitro assays, predictive in silico models, mechanistic studies, and existing data, for the human health and environmental assessment of pesticides is noted ( HC, 2021e ; HC, 2022b ).

Bhuller et al. (2021) provide the first Canadian regulatory perspective on the approach and process for implementing NAMs in Canada for pesticides and industrial chemicals. The article acknowledges foundational elements, such as the 2012 Council of Canadian Academies expert panel report, “Integrating Emerging Technologies into Chemical Safety Assessment” ( CCA, 2012 ), used to establish the overall vision. The process for identifying, exploring, and implementing NAMs emphasizes the importance of mobilizing teams and fostering a mindset that enables a regulatory pivot towards NAMs. In addition, engagement and multi-stakeholder collaboration are identified as a pillar for building regulatory, scientific, and public confidence in NAMs, along with broader acceptance of the alternative approaches.

PMRA collaborates with stakeholders on the development of NAMs and their potential implementation for regulatory purposes. For example, PMRA participates in several ongoing multi-stakeholder initiatives designed to explore NAMs at the national and international levels ( Bhuller et al., 2021 ). Academic-led initiatives, along with research and consulting firms, are also developing models, including open-source models; these include the University of Windsor’s Canadian Centre for Alternatives to Animal Methods (CCAAM) and CaCVAM. Within Health Canada, voluntary efforts amongst regulatory and research scientists have resulted in the publication of NAM-relevant documents, such as the current Health Canada practices for using toxicogenomics data in risk assessment ( HC, 2019 ).

4.2.3.1 Examples of NAM application

Multiple NAMs and alternatives to animal testing have been co-developed, adapted, or adopted by the PMRA. Examples include the OECD defined approach for skin sensitization ( OECD, 2021a ), use of a WoE framework for chronic toxicity and cancer assessment ( Hilton et al., 2022 ), and PMRA’s “Guidance for Waiving or Bridging of Mammalian Acute Toxicity Tests for Pesticides” ( HC, 2013a ). In addition, PMRA no longer routinely requires the acute dermal toxicity assay ( HC, 2017 ), the one-year dog toxicity test ( Linke et al., 2017 ; HC, 2021d ), or the in vivo dermal absorption study ( Allen et al., 2021 ) in alignment with the US EPA. PMRA will consider these and other NAMs in lieu of animal testing for specific pesticides by applying a WoE approach to ensure that the available information is sufficient and appropriate for hazard characterization and the assessment of potential human health risks.
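
The waiver of the in vivo dermal absorption study ( Allen et al., 2021 ) rests on the "triple pack" concept, in which an in vivo rat study is corrected by the ratio of human to rat in vitro skin absorption. The sketch below shows one commonly described form of that calculation; the function name and input values are illustrative assumptions, not taken from any dossier or guidance document.

```python
def triple_pack_estimate(rat_in_vivo_pct: float,
                         rat_in_vitro_pct: float,
                         human_in_vitro_pct: float) -> float:
    """Estimate human dermal absorption (%) from 'triple pack' data:
    an in vivo rat study plus in vitro rat and human skin studies.
    Human estimate = rat in vivo x (human in vitro / rat in vitro)."""
    if rat_in_vitro_pct <= 0:
        raise ValueError("rat in vitro absorption must be positive")
    return rat_in_vivo_pct * (human_in_vitro_pct / rat_in_vitro_pct)

# Illustrative values only (not from any registration dossier):
# 30% absorbed in the rat in vivo, human skin absorbs half as much
# as rat skin in vitro -> estimated 15% human dermal absorption.
estimate = triple_pack_estimate(30.0, 20.0, 10.0)
```

Because the in vitro human/rat ratio typically falls below one, the corrected estimate is usually lower, and more human-relevant, than the rat in vivo value alone.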

Building upon the strategic plan and the importance of staying current with scientific advancements in an open and transparent manner, PMRA’s DACO guidance document for conventional pesticides includes a document history table that enables PMRA to demonstrate the “evergreen” nature of the DACOs while providing an overview of the changes and the corresponding rationales ( HC, 2021d ). For example, PMRA’s science-policy work, resulting in no longer routinely requiring the acute dermal toxicity study, is captured in this table with a reference to the science-policy document (SPN 2017-03) ( HC, 2017 ). The latter then provides details on public consultation processes and the robust retrospective analysis that was undertaken under the auspices of the RCC ( HC, 2017 ).

4.3 European Union

In the EU, the term “pesticides” includes (1) active ingredients and PPPs, which are intended for use on plants in agriculture and horticulture, and (2) biocides, which are used in non-botanical applications, such as rodenticides or termiticides. PPPs and their active ingredients are regulated under Regulation (EC) No 1107/2009 ( EC, 2009a ). Commission Regulation (EU) No 283/2013 lists the data requirements for active ingredients ( EU, 2013c ), and Commission Regulation (EU) No 284/2013 lists the data requirements for PPPs ( EU, 2013d ). Biocides, however, are regulated separately under Regulation (EU) No 528/2012 and are not discussed in this paper ( EU, 2012 ). In addition, the CLP regulation (see Section 3.3 ) applies to both PPPs and biocides.

The EU is a diverse group of countries with respect to food consumption, agricultural pests, climate, and level of development; the risk assessment and management procedures were therefore developed to account for the varied needs of different Member States. First, a Rapporteur Member State evaluates the active ingredient dossier. EFSA then peer reviews the dossier evaluation. The peer-reviewed risk assessment of the active ingredient is considered by the European Commission, which proposes whether to authorize the active ingredient, followed by the EU Member States, which vote on final risk management decisions. Once an active ingredient is authorized, individual Member States consider applications for approval of PPPs containing that active ingredient and propose maximum levels of pesticide residues permitted to remain in or on food commodities or animal feed. Finally, the European Commission (often with input from EFSA) decides whether to approve those maximum residue levels.

Regulation of biocidal active ingredients and products proceeds via a similar route; however, peer review of the Member State assessments of the active ingredients is conducted by ECHA rather than EFSA. In 2017, ECHA and EFSA signed a memorandum of understanding to enhance cooperation between the agencies, to facilitate coherence in scientific methods and opinions, and to share knowledge on matters of mutual interest. As a consequence, both agencies will evaluate the toxicological data package for a PPP.

4.3.1 General requirements

Similar to the US and Canada, there are many up-front data requirements for registering a plant protection active ingredient in the EU, including studies to assess potential hazards to humans and non-target organisms. The toxicology data requirements supporting an active ingredient or PPP are listed in Commission Regulation (EU) No 283/2013 and Commission Regulation (EU) No 284/2013, respectively, and can be fulfilled using OECD test guideline studies or other guidelines (such as US EPA guidelines) that address the toxicological endpoint of concern. Some data requirements, such as in vivo neurotoxicity studies or a two-year rodent cancer bioassay in a second species, apply only when triggered or with scientific justification.

4.3.2 Regulatory flexibility

Article 62(1) of Regulation (EC) No 1107/2009 requires that “testing on vertebrate animals for the purposes of this Regulation shall be undertaken only where no other methods are available.” Article 8(1)(d) and Article 33(3)(c) of the same Regulation require applicants to justify, for each study using vertebrate animals, the steps taken to avoid testing on animals or duplication of studies. Similarly, for biocides, Article 62(1) of Regulation (EC) No 528/2012 states that “[i]n order to avoid animal testing, testing on vertebrates for the purposes of this Regulation shall be undertaken only as a last resort.”

The Commission Regulations, which list the data requirements for plant protection active ingredients and products, and their respective Communications ( EU, 2013a ; EU, 2013b ) were published in 2013 and therefore only refer specifically to a limited number of NAMs (e.g., in vitro and ex vivo methods to assess skin irritation and eye irritation).

Although point 5.2 in the Annex of both Commission Regulation (EU) No 283/2013 and Commission Regulation (EU) No 284/2013 allows for the use of other NAMs, as they become available, to replace or reduce animal use, the outdated list of methods to fulfil data requirements in the Commission Communications may encourage animal use where NAMs should be used. For example, the methods listed to fulfil the requirements for skin sensitization do not include any of the available in chemico or in vitro methods and do not refer to the OECD Guideline on Defined Approaches to Skin Sensitization ( EU, 2013a ; EU, 2013b ; OECD, 2021a ). The Commission Communications therefore need to be updated urgently, and regularly thereafter, to avoid unnecessary animal use.

As outlined above, the regulatory landscape of the EU involves specific regional considerations and interpretation of legislation by individual Member States. For example, some Member State regulatory authorities responsible for PPPs, including those of the Czech Republic ( SZU, n.d. ), Sweden ( KEMI, 2021 ), and Slovenia ( Republika Slovenija, 2022 ), publicly align themselves with the legal requirement to justify the conduct of studies using vertebrate animals. Other Member State regulatory authorities, including those of the Netherlands ( Ctgb, n.d. ) and, pre-Brexit, the United Kingdom ( HSE, n.d. ), interpret the regulation more stringently and state that applications or dossiers will not be considered if they are found to have breached Article 62 (testing on vertebrate animals only as a last resort).

4.3.3 Implementation of NAMs

EFSA has been proactive in reducing animal testing and implementing reliable NAMs. For example, in 2009, EFSA published a scientific opinion covering the key data requirements for evaluating pesticide toxicity that were amenable to NAMs ( EFSA, 2009 ). In 2012, EFSA initiated a series of scientific conferences to create a regular opportunity to engage with partners and stakeholders. Following its latest conference in 2018 and the break-out session “Advancing risk assessment science—human health,” Lanzoni et al. emphasized that human health risk assessment based on animal testing is challenged scientifically and ethically ( Lanzoni et al., 2019 ). They further noted the need for a paradigm shift in hazard and risk assessment and for more flexible regulations.

EFSA has developed the chemical hazards database “OpenFoodTox 2.0” and funded collaborative research to develop generic toxicokinetic and toxicodynamic human and animal models to predict the toxicity of chemicals ( Dorne et al., 2018 ; Benfenati et al., 2020 ). Further, in 2019, EFSA published its opinion on the use of in vitro comparative metabolism (IVCM) studies in pesticide risk assessment ( EFSA, 2019 ). The IVCM study is currently a data requirement for new and renewal data packages submitted in the EU; it is intended to identify metabolites unique to humans by comparison with the toxicokinetic study currently performed in rats (OECD TG 417). Most recently, EFSA published its Strategy 2027, which states the goal of developing and integrating NAMs for regulatory risk assessment ( EFSA, 2021 ). To help achieve this, EFSA launched a contract to develop a roadmap for action on NAMs to reduce animal testing ( Escher et al., 2022 ). The roadmap aims to define EFSA’s priorities for the incorporation of NAMs and to inform a multi-annual strategy for increasing the use of NAMs in human health risk assessment with the goal of minimizing animal testing ( EFSA, 2021 ). In addition, EFSA is developing guidance on the use of read-across and has launched several projects to evaluate NAMs in the context of IATA frameworks.

4.3.3.1 Examples of NAM application

EFSA has funded the development of in vitro assays that, together with the assays from ORD, form the current DNT NAM testing battery (see Section 4.1.3.1 ). In partnership with the OECD, EFSA held a workshop in 2017 on integrated approaches for testing and assessment of DNT ( EFSA and OECD, 2017 ), commissioned an external scientific report on the data interpretation from in vitro DNT assays ( Crofton and Mundy, 2021 ), and recently held a European stakeholders’ workshop on NAMs for DNT ( EFSA, 2022 ). In 2021, the EFSA Panel on Plant Protection Products and their Residues (PPR) developed AOP-informed IATA case studies on DNT risk assessment ( EFSA PPR Panel et al., 2021 ). The development of a new OECD Guidance Document on DNT in vitro assays is being co-led by EFSA, the US, and Denmark ( OECD, 2021b ).

In 2017, EFSA updated its guidance on dermal absorption, initially published in 2012. The guidance presents elements for a tiered approach including “in vitro studies with human skin (regarded to provide the best estimate)” ( EFSA et al., 2017 ), thereby reducing the use of animals while also increasing the relevance of the data for human risk assessment.

Furthermore, in silico modeling software, data mining, and read-across can be used for a variety of applications in support of pesticide registrations within the EU. Specifically, the OECD QSAR Toolbox, Derek Nexus, and OASIS TIMES are often used to evaluate the toxicological significance of metabolites and impurities and to support active ingredient conclusions, especially related to genotoxicity ( Benigni et al., 2019 ).
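
Read-across and (Q)SAR grouping of the kind performed in these tools rests on quantifying structural similarity between a data-rich source chemical and the target. As a minimal sketch of that idea (the fingerprints, integer feature codes, and cutoff below are hypothetical, not taken from any cited tool or regulation), the Tanimoto coefficient compares two sets of structural features:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two fingerprint feature sets:
    shared features divided by the union of all features."""
    a, b = set(fp_a), set(fp_b)
    if not a and not b:
        return 1.0  # two empty fingerprints are trivially identical
    return len(a & b) / len(a | b)

# Hypothetical fingerprints: integers stand in for structural features
# (e.g., substructure keys) of a source analogue and a target chemical.
source_analogue = {1, 2, 3, 5, 8}
target_chemical = {1, 2, 3, 5, 9}

score = tanimoto(source_analogue, target_chemical)  # 4 shared / 6 total
SIMILARITY_CUTOFF = 0.6  # illustrative threshold, not a regulatory value
is_read_across_candidate = score >= SIMILARITY_CUTOFF
```

In practice, tools such as the OECD QSAR Toolbox combine this kind of structural similarity with mechanistic profilers and metabolism information before an analogue is accepted for read-across.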

5 Conclusion

Due to widespread interest in testing approaches that are reliable and relevant to human biology, NAMs for hazard and risk assessment are being rapidly developed. It is important to understand the existing regulatory frameworks and their flexibility or limitations for the implementation of fit-for-purpose NAMs. This article provides an overview of the regulatory frameworks for the use of NAMs in the assessment of industrial chemicals and pesticides in the US, Canada, and EU; similar collaborative efforts and opportunities to use NAMs in regulatory submissions exist in other sectors and countries. In general, replacing animal use is an important goal for regulatory agencies, and regulators continue to explore the potential of NAMs to efficiently provide more reliable and relevant information about whether and how a chemical may cause toxicity in humans. The regulations reviewed in this paper highlight the many existing opportunities for the use of NAMs, while also showing potential to introduce further flexibility in testing requirements to allow the maximum use of fit-for-purpose NAMs.

For example, it is important to provide continuing educational opportunities for regulators and stakeholders on the conditions under which application of a certain NAM is appropriate and on how data from that NAM are interpreted. Conferences and webinars, as mentioned in Section 2, are examples of such opportunities. There are also ongoing discussions on how to streamline and accelerate validation processes and build scientific confidence in the use of robust NAMs, including an ongoing effort within ICCVAM to publish guidance on this topic. Updating these processes is foundational to timely uptake of fit-for-purpose, reliable, and relevant NAMs ( van der Zalm et al., 2022 ). Also key to the advancement of NAMs is the opportunity to discuss proposed NAM testing strategies with the relevant agency. This allows for the wise use of resources and ensures that the data needs of the regulatory agencies are addressed by the proposed approach. Each regulatory agency has varying abilities and instructions for meeting with stakeholders to discuss proposed testing strategies, with some agencies (notably the EPA and HC’s NSP) strongly encouraging these meetings, resulting in examples of successful submissions. Additional measures to institute incentives, such as expedited review, would further facilitate innovation and the use of more modern, reliable NAMs.

In addition, national and international communication and collaboration within and across sectors and geographies is of the utmost importance to minimize duplicative efforts and efficiently advance the best science. Ultimately, regulatory frameworks that allow for the timely uptake of scientifically sound toxicology testing approaches will facilitate the global acceptance of NAMs and allow the best protection of human health.

Acknowledgments

The authors would like to thank Dr. John Gordon from CPSC for providing text for the section on consumer products and reviewing the manuscript, and Drs. Cecilia Tan and Anna Lowit from EPA, Mike Rasenberg from ECHA, Dr. George Kass from EFSA, Dr. Alexandra Long and Joelle Pinsonnault Cooper from HC, and Dr. Gilly Stoddart, Emily McIvor, and Anna van der Zalm from PSCI for reviewing parts of the manuscript.

1 The distinction between an order and a rule is that the former may be issued without following the procedural requirements of notice and comment rulemaking under the Administrative Procedure Act (5 U.S.C. §§500 et seq .), whereas the latter must comply with these requirements.

Author contributions

All authors contributed important intellectual content and helped in the conceptualization, writing and revisions of the article. All authors read and approved the final manuscript.

Conflict of interest

Author JH was employed by the company Corteva Agriscience. Authors CH and JM-H were employed by the company JT International SA. Authors EN and TS were employed by the law firm Bergeson & Campbell PC.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Author disclaimer

This report has been reviewed and cleared by the Office of Chemical Safety and Pollution Prevention of the US EPA.

The views expressed in this article are those of the authors and do not necessarily represent the views or policies of their respective employers. Mention of trade names or commercial products does not constitute endorsement or recommendation for use.

  • Allen D. G., Rooney J., Kleinstreuer N. C., Lowit A. B., Perron M. (2021). Retrospective analysis of dermal absorption triple pack data . ALTEX 38 , 463–476. 10.14573/altex.2101121 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bal-Price A., Hogberg H., Crofton K. M., Daneshian M., FitzGerald R. E., Fritsche E., et al. (2018a). Recommendation on test readiness criteria for new approach methods in toxicology: Exemplified for developmental neurotoxicity . ALTEX 35 , 306–352. 10.14573/altex.1712081 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bal-Price A., Pistollato F., Sachana M., Bopp S. K., Munn S., Worth A. (2018b). Strategies to improve the regulatory assessment of developmental neurotoxicity (DNT) using in vitro methods . Toxicol. Appl. Pharmacol. 354 , 7–18. 10.1016/j.taap.2018.02.008 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Benfenati E., Carnesecchi E., Roncaglioni A., Baldin R., Ceriani L., Ciacci A., et al. (2020). Maintenance,update and further development of EFSA’s chemical hazards: OpenFoodTox 2.0 . EFSA Support 17 , 1–36. 10.2903/sp.efsa.2020.EN-1822 [ CrossRef ] [ Google Scholar ]
  • Benigni R., Laura Battistelli C., Bossa C., Giuliani A., Fioravanzo E., Bassan A., et al. (2019). Evaluation of the applicability of existing (Q)SAR models for predicting the genotoxicity of pesticides and similarity analysis related with genotoxicity of pesticides for facilitating of grouping and read across . EFSA Support . 16 , 1–221. 10.2903/sp.efsa.2019.EN-1598 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bhuller Y., Ramsingh D., Beal M., Kulkarni S., Gagne M., Barton-Maclaren T. S. (2021). Canadian regulatory perspective on next generation risk assessments for pest control products and industrial chemicals . Front. Toxicol. 3 , 748406. 10.3389/FTOX.2021.748406 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • CCA (2012). Integrating emerging technologies into chemical safety assessment . Ottawa: The Council of Canadian Academies. Council of Canadian Academies. Available at: https://cca-reports.ca/reports/integrating-emerging-technologies-into-chemical-safety-assessment . [ Google Scholar ]
  • Clippinger A. J., Raabe H. A., Allen D. G., Choksi N. Y., van der Zalm A. J., Kleinstreuer N. C., et al. (2021). Human-relevant approaches to assess eye corrosion/irritation potential of agrochemical formulations . Cutan. Ocul. Toxicol. 40 , 145–167. 10.1080/15569527.2021.1910291 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Corley R. A., Kuprat A. P., Suffield S. R., Kabilan S., Hinderliter P. M., Yugulis K., et al. (2021). New approach methodology for assessing inhalation risks of a contact respiratory cytotoxicant: Computational fluid dynamics-based aerosol dosimetry modeling for cross-species and in vitro comparisons . Toxicol. Sci. 182 , 243–259. 10.1093/toxsci/kfab062 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • CPSC (2022). Guidance for Industry and Test Method Developers: CPSC Staff Evaluation of Alternative Test Methods and Integrated Testing Approaches and Data Generated from Such Methods to Support FHSA Labeling Requirements . Rockville, MD: U.S. Consumer Product Safety Commission. Available at: https://downloads.regulations.gov/CPSC-2021-0006-0010/content.pdf . [ Google Scholar ]
  • Craig E., Lowe K., Akerman G., Dawson J., May B., Reaves E., et al. (2019). Reducing the need for animal testing while increasing efficiency in a pesticide regulatory setting: Lessons from the EPA office of pesticide programs’ hazard and science policy Council . Regul. Toxicol. Pharmacol. 108 , 104481. 10.1016/j.yrtph.2019.104481 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Crofton K. M., Mundy W. R. (2021). External scientific report on the interpretation of data from the developmental neurotoxicity in vitro testing assays for use in integrated approaches for testing and assessment . EFSA Support . 18 , 1–42. 10.2903/sp.efsa.2021.en-6924 [ CrossRef ] [ Google Scholar ]
  • Ctgb (n.d). Request for information vertebrates testing . Available at: https://english.ctgb.nl/plant-protection/types-of-application/request-for-information-vertebrates-testing/characteristics .
  • Dobreniecki S., Mendez E., Lowit A., Freudenrich T. M., Wallace K., Carpenter A., et al. (2022). Integration of toxicodynamic and toxicokinetic new approach methods into a weight-of-evidence analysis for pesticide developmental neurotoxicity assessment: A case-study with dl- and L-glufosinate . Regul. Toxicol. Pharmacol. 131 , 105167. 10.1016/j.yrtph.2022.105167 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Dorne J.-L. C. M., Amzal B., Quignot N., Wiecek W., Grech A., Brochot C., et al. (2018). Reconnecting exposure, toxicokinetics and toxicity in food safety: OpenFoodTox and TKplate for human health, animal health and ecological risk assessment . Toxicol. Lett. 295 , S29. 10.1016/j.toxlet.2018.06.1128 [ CrossRef ] [ Google Scholar ]
  • EC (2006). Regulation (EC) No 1907/2006 of the European parliament and of the Council of 18 december 2006 concerning the registration, evaluation, authorisation and restriction of chemicals (REACH) . OJ L 396/1. [ Google Scholar ]
  • EC (2008a). Council regulation (EC) No 440/2008 of 30 may 2008 laying down test methods pursuant to regulation (EC) No 1907/2006 of the European parliament and of the Council on the registration, evaluation, authorisation and restriction of chemicals (REACH) . OJ L 142/1. [ Google Scholar ]
  • EC (2008b). Regulation (EC) No 1272/2008 of the European Parliament and of the Council of 16 December 2008 on classification, labelling and packaging of substances and mixtures, amending and repealing Directives 67/548/EEC and 1999/45/EC, and amending Regulation (EC) . OJ L 353/1. [ Google Scholar ]
  • EC (2009a). Regulation (EC) No 1107/2009 of the European Parliament and of the Council of 21 October 2009 concerning the placing of plant protection products on the market and repealing Council Directives 79/117/EEC and 91/414/EEC . OJ L 309/1. [ Google Scholar ]
  • EC (2009b). Regulation (EC) No 1223/2009 of the European parliament and of the Council of 30 november 2009 on cosmetic products . OJ L 342/59. [ Google Scholar ]
  • EC (2020). Chemicals strategy for sustainability - towards a toxic-free environment . Available at: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=COM:2020:667 .
  • ECHA (2014). Clarity on interface between REACH and the cosmetics regulation . Available at: https://echa.europa.eu/nl/-/clarity-on-interface-between-reach-and-the-cosmetics-regulation .
  • ECHA (2016a). How to use alternatives to animal testing to fulfil the information requirements for REACH registration Practical guide . 2nd ed. Helsinki: European Chemicals Agency. 10.2823/194297 [ CrossRef ] [ Google Scholar ]
  • ECHA (2016b). New approach methodologies in regulatory science . Proceedings of a scientific workshop. Helsinki: European Chemicals Agency. 10.2823/543644 [ CrossRef ] [ Google Scholar ]
  • ECHA (2017). Read-across assessment framework (RAAF) . [ Google Scholar ]
  • ECHA (2019). Registration dossier - N,N,4-trimethylpiperazine-1-ethylamine . Available at: https://echa.europa.eu/nl/registration-dossier/-/registered-dossier/27533/7/7/1 .
  • ECHA (2020). The use of alternatives to testing on animals for REACH . Fourth report under Article 117(3) of the REACH Regulation. Helsinki: European Chemicals Agency. 10.2823/092305 [ CrossRef ] [ Google Scholar ]
  • ECHA (2021). Skin sensitisation . European chemicals agency . Available at: https://echa.europa.eu/documents/10162/1128894/oecd_test_guidelines_skin_sensitisation_en.pdf/40baa98d-fc4b-4bae-a26a-49f2b0d0cf63?t=1633687729588 .
  • ECHA (n.d.a). Enforcement . Available at: https://echa.europa.eu/regulations/enforcement .
  • ECHA (n.d.b). Information requirements . Available at: https://echa.europa.eu/regulations/reach/registration/information-requirements .
  • ECHA (n.d.c). Registration . Available at: https://echa.europa.eu/regulations/reach/registration .
  • ECHA (n.d.d). Testing proposals . Available at: https://echa.europa.eu/information-on-chemicals/testing-proposals .
  • ECHA (n.d.e). The role of testing in CLP . Available at: https://echa.europa.eu/testing-clp .
  • ECHA (n.d.f). Understanding REACH . Available at: https://echa.europa.eu/regulations/reach/understanding-reach .
  • EFSA, and OECD (2017). Workshop Report on integrated approach for testing and assessment of developmental neurotoxicity . EFSA Support 1191 , 19. 10.2903/sp.efsa.2017.en-1191 [ CrossRef ] [ Google Scholar ]
  • EFSA, Buist H., Craig P., Dewhurst I., Hougaard Bennekou S., Kneuer C. (2017). Guidance on dermal absorption . EFSA J. 15 , e04873. 10.2903/j.efsa.2017.4873 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • EFSA PPR Panel, Hernández-Jerez A., Adriaanse P., Aldrich A., Berny P., Coja T. (2021). Development of Integrated Approaches to Testing and Assessment (IATA) case studies on developmental neurotoxicity (DNT) risk assessment . EFSA J. 19 . 10.2903/j.efsa.2021.6599 [ CrossRef ] [ Google Scholar ]
  • EFSA (2009). Existing approaches incorporating replacement, reduction and refinement of animal testing: Applicability in food and feed risk assessment . EFSA J. 7 , 1–63. 10.2903/j.efsa.2009.1052 [ CrossRef ] [ Google Scholar ]
  • EFSA (2019). EFSA Workshop on in vitro comparative metabolism studies in regulatory pesticide risk assessment . EFSA Support . 16 , 1–16. 10.2903/sp.efsa.2019.EN-1618 [ CrossRef ] [ Google Scholar ]
  • EFSA (2021). EFSA strategy 2027: Science, safe food, sustainability . Parma: Publications Office. 10.2805/886006 [ CrossRef ] [ Google Scholar ]
  • EFSA (2022). European stakeholders’ workshop on new approach methodologies (NAMs) for developmental neurotoxicity (DNT) and their use in the regulatory risk assessment of chemicals . Available at: https://www.efsa.europa.eu/en/events/european-stakeholders-workshop-new-approach-methodologies-nams-developmental-neurotoxicity .
  • EPA (1994). Estimating toxicity of industrial chemicals to aquatic organisms using structure activity relationships . 2nd ed. Washington, DC: U.S. Environmental Protection Agency. EPA-R93-001. [ Google Scholar ]
  • EPA (2009). The U.S. Environmental protection agency’s strategic plan for evaluating the toxicity of chemicals . Washington, DC: U.S. Environmental Protection Agency. [ Google Scholar ]
  • EPA (2011). Integrated Approaches to Testing and assessment strategy : Use of new Computational and molecular tools . FIFRA scientific advisory panel, Office of pesticide programs. Washington, DC: U.S. Environmental Protection Agency. EPA-HQ-OPP-2011-0284-0006. Available at: https://www.regulations.gov/document/EPA-HQ-OPP-2011-0284-0006 [ Google Scholar ]
  • EPA (2012). Guidance for waiving or bridging of mammalian acute toxicity tests for pesticides and pesticide products (acute oral, acute dermal, acute inhalation, primary eye, primary dermal, and dermal sensitization) . Office of Pesticide Programs. Washington, DC: U.S. Environmental Protection Agency. Available at: https://www.epa.gov/sites/default/files/documents/acute-data-waiver-guidance.pdf [ Google Scholar ]
  • EPA (2013a). Guiding principles for data requirements . Office of Pesticide Programs. Washington, DC: U.S. Environmental Protection Agency. Available at: https://www.epa.gov/sites/production/files/2016-01/documents/data-require-guide-principle.pdf [ Google Scholar ]
  • EPA (2013b). Part 158 toxicology data requirements : Guidance for neurotoxicity battery, subchronic inhalation, subchronic dermal and immunotoxicity studies . Office of Pesticide Programs, Washington, DC: U.S. Environmental Protection Agency. Available at: https://www.epa.gov/sites/default/files/2014-02/documents/part158-tox-data-requirement.pdf [ Google Scholar ]
  • EPA (2015). Use of an alternate testing framework for classification of eye irritation potential of EPA pesticide products . Office of Pesticide Programs. Washington, DC: U.S. Environmental Protection Agency. Available at: https://www.epa.gov/sites/production/files/2015-05/documents/eye_policy2015update.pdf [ Google Scholar ]
  • EPA (2016a). Guidance for waiving acute dermal toxicity tests for pesticide formulations & supporting retrospective analysis . Office of Pesticide Programs. Washington, DC: U.S. Environmental Protection Agency. Available at: https://www.epa.gov/sites/default/files/2016-11/documents/acute-dermal-toxicity-pesticide-formulations_0.pdf [ Google Scholar ]
  • EPA (2016b). Process for evaluating & implementing alternative approaches to traditional in vivo acute toxicity studies for FIFRA regulatory use . Available at: https://www.epa.gov/sites/default/files/2016-03/documents/final_alternative_test_method_guidance_2-4-16.pdf .
  • EPA (2018a). FIFRA scientific advisory panel; notice of public meeting: Evaluation of a proposed approach to refine inhalation risk assessment for point of contact toxicity . Available at: https://www.regulations.gov/docket/EPA-HQ-OPP-2018-0517 .
  • EPA (2018b). Interim science policy: Use of alternative approaches for skin sensitization as a replacement for laboratory animal testing . Office of Chemical Safety and Pollution Prevention. Washington, DC: U.S. Environmental Protection Agency. EPA-HQ-OPP-2016-0093-0090. Available at: https://www.epa.gov/pesticides/epa-releases-draft-policy-reduce-animal-testing-skin-sensitization (Accessed April 4, 2018). [ Google Scholar ]
  • EPA (2018c). List of alternative test methods and strategies (or new approach methodologies [NAMs]). Office of pollution prevention and toxics . Washington, DC: U.S. Environmental Protection Agency. Available at: https://www.epa.gov/sites/production/files/2018-06/documents/alternative_testing_nams_list_june22_2018.pdf .
  • EPA (2018d). Strategic plan to promote the development and implementation of alternative test methods within the TSCA program. U.S. Environmental protection agency. EPA-740-R1-8004 . Available at: https://www.epa.gov/sites/default/files/2018-06/documents/epa_alt_strat_plan_6-20-18_clean_final.pdf .
  • EPA (2019a). First annual conference on the state of the science on development and use of new approach methods (NAMs) for chemical safety testing . Available at: https://www.epa.gov/chemical-research/first-annual-conference-state-science-development-and-use-new-approach-methods-0 .
  • EPA (2019b). List of alternative test methods and strategies (or new approach methodologies [NAMs]) . First Update: December 5th, 2019. Office of Pollution Prevention and Toxics, U.S. Environmental Protection Agency. Available at: https://www.epa.gov/sites/production/files/2019-12/documents/alternative_testing_nams_list_first_update_final.pdf .
  • EPA (2019c). Significant new use rules on certain chemical substances. Final rule. Washington, DC: U.S. Environmental Protection Agency. 84 FR 13531 (April 5, 2019) (to be codified at 40 CFR 9 and 721).
  • EPA (2020a). Annual reports on PRIA implementation . Available at: https://www.epa.gov/pria-fees/annual-reports-pria-implementation .
  • EPA (2020b). EPA conference on the state of science on development and use of NAMs for chemical safety testing . Available at: https://www.epa.gov/chemical-research/epa-conference-state-science-development-and-use-nams-chemical-safety-testing#1 .
  • EPA (2020c). Guidance for waiving acute dermal toxicity tests for pesticide technical chemicals & supporting retrospective analysis. Office of Pesticide Programs. Washington, DC: U.S. Environmental Protection Agency. EPA 705-G-2020-3722. Available at: https://www.epa.gov/sites/default/files/2021-01/documents/guidance-for-waiving-acute-dermal-toxicity.pdf
  • EPA (2020d). Hazard characterization of isothiazolinones in support of FIFRA registration review . Available at: https://www.regulations.gov/document/EPA-HQ-OPP-2013-0605-0051 .
  • EPA (2020e). New approach methodologies (NAMs) factsheet . Available at: https://www.epa.gov/sites/default/files/2020-07/documents/css_nams_factsheet_2020.pdf .
  • EPA (2020f). New approach methods and reducing the use of laboratory animals for chronic and carcinogenicity testing . Available at: https://yosemite.epa.gov/sab/sabproduct.nsf/LookupWebProjectsCurrentBOARD/2D3E04BC5A34DCDE8525856D00772AC1?OpenDocument .
  • EPA (2020g). Pesticide registration review; draft human health and ecological risk assessments for several isothiazolinones. Notice. Washington, DC: U.S. Environmental Protection Agency. 85 FR 28944 (May 14, 2020).
  • EPA (2020h). Revocation of significant new use rule for a certain chemical substance (P-16-581). Proposed rule. Washington, DC: U.S. Environmental Protection Agency. 85 FR 52274 (Aug. 25, 2020) (to be codified at 40 CFR 721).
  • EPA (2020i). The use of new approach methodologies (NAMs) to derive extrapolation factors and evaluate developmental neurotoxicity for human health risk assessment . Available at: https://www.regulations.gov/document/EPA-HQ-OPP-2020-0263-0033 .
  • EPA (2021a). Accelerating the pace of chemical risk assessment (APCRA) . Available at: https://www.epa.gov/chemical-research/accelerating-pace-chemical-risk-assessment-apcra .
  • EPA (2021b). Adopting 21st-century science methodologies—metrics . Available at: https://www.epa.gov/pesticide-science-and-assessing-pesticide-risks/adopting-21st-century-science-methodologies-metrics .
  • EPA (2021c). Chlorothalonil: Revised human health draft risk assessment for registration review . Available at: https://www.regulations.gov/document/EPA-HQ-OPP-2011-0840-0080 .
  • EPA (2021d). List of alternative test methods and strategies (or new approach methodologies [NAMs]) . Second Update: February 4th, 2021. Office of Pollution Prevention and Toxics, U.S. Environmental Protection Agency . Available at: https://www.epa.gov/sites/default/files/2021-02/documents/nams_list_second_update_2-4-21_final.pdf .
  • EPA (2021e). New approach methods work plan (v2) . U.S. Environmental Protection Agency. EPA/600/X-21/209. Available at: https://www.epa.gov/system/files/documents/2021-11/nams-work-plan_11_15_21_508-tagged.pdf .
  • EPA (2021f). Order under section 4(a)(2) of the toxic substances control Act - 1,1,2-trichlorethane . Available at: https://www.epa.gov/sites/default/files/2021-01/documents/tsca_section_4a2_order_for_112-trichloroethane_on_ecotoxicity_and_occupational_exposure.pdf .
  • EPA (2022a). Strategic vision for adopting new approach methodologies . Available at: https://www.epa.gov/pesticide-science-and-assessing-pesticide-risks/strategic-vision-adopting-new-approach .
  • EPA (2022b). TSCA new chemicals collaborative research effort 3-9-22 clean docket version . Available at: https://www.regulations.gov/document/EPA-HQ-OPPT-2022-0218-0004 .
  • EPA (2022c). TSCA section 4 test orders . Available at: https://www.epa.gov/assessing-and-managing-chemicals-under-tsca/tsca-section-4-test-orders .
  • Escher S. E., Partosch F., Konzok S., Jennings P., Luijten M., Kienhuis A. (2022). Development of a roadmap for action on new approach methodologies in risk assessment. EFSA Support. Publ. 19. 10.2903/sp.efsa.2022.EN-7341
  • EU (2010). Directive 2010/63/EU of the European Parliament and of the Council of 22 September 2010 on the protection of animals used for scientific purposes . Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32010L0063 .
  • EU (2012). Regulation (EU) No 528/2012 of the European Parliament and of the Council of 22 May 2012 concerning the making available on the market and use of biocidal products. OJ L 167/1.
  • EU (2013a). Commission communication in the framework of the implementation of Commission Regulation (EU) No 283/2013 of 1 March 2013 setting out the data requirements for active substances, in accordance with Regulation (EC) No 1107/2009 of the European Parliament and of the Council. OJ C 95/1.
  • EU (2013b). Commission communication in the framework of the implementation of Commission Regulation (EU) No 284/2013 of 1 March 2013 setting out the data requirements for plant protection products, in accordance with Regulation (EC) No 1107/2009 of the European Parliament and of the Council. OJ C 95/21.
  • EU (2013c). Commission Regulation (EU) No 283/2013 of 1 March 2013 setting out the data requirements for active substances, in accordance with Regulation (EC) No 1107/2009 of the European Parliament and of the Council concerning the placing of plant protection products on the market. OJ L 93/1.
  • EU (2013d). Commission Regulation (EU) No 284/2013 of 1 March 2013 setting out the data requirements for plant protection products, in accordance with Regulation (EC) No 1107/2009 of the European Parliament and of the Council concerning the placing of plant protection products on the market. OJ L 93/85.
  • EURL ECVAM (n.d.). TSAR - tracking system for alternative methods towards regulatory acceptance. Available at: https://tsar.jrc.ec.europa.eu/ .
  • Fritsche E., Crofton K. M., Hernandez A. F., Hougaard Bennekou S., Leist M., Bal-Price A., et al. (2017). OECD/EFSA workshop on developmental neurotoxicity (DNT): The use of non-animal test methods for regulatory purposes. ALTEX 34, 311–315. 10.14573/altex.1701171
  • Hamm J., Allen D., Ceger P., Flint T., Lowit A., O’Dell L., et al. (2021). Performance of the GHS mixtures equation for predicting acute oral toxicity. Regul. Toxicol. Pharmacol. 125, 105007. 10.1016/j.yrtph.2021.105007
  • HC (2006). Use site category (DACO tables) . Available at: https://www.canada.ca/en/health-canada/services/consumer-product-safety/pesticides-pest-management/registrants-applicants/product-application/use-site-category-daco-tables.html .
  • HC (2013a). Guidance for waiving or bridging of mammalian acute toxicity tests for pesticides . The Health Canada Pest Management Regulatory Agency. Available at: https://www.canada.ca/content/dam/hc-sc/migration/hc-sc/cps-spc/alt_formats/pdf/pubs/pest/pol-guide/toxicity-guide-toxicite/toxicity-guide-toxicite.eng.pdf .
  • HC (2013b). Use-site category (USC) definitions for conventional chemical pesticides . Available at: https://www.canada.ca/en/health-canada/services/consumer-product-safety/pesticides-pest-management/registrants-applicants/product-application/use-site-category-daco-tables/definitions-conventional-chemical-pesticides.html .
  • HC (2016a). “Chemicals management plan risk assessment toolbox,” in Fact sheet series: Topics in risk assessment of substances under the Canadian Environmental Protection Act, 1999 (CEPA 1999). Available at: https://www.canada.ca/en/health-canada/services/chemical-substances/fact-sheets/chemicals-management-plan-risk-assessment-toolbox.html .
  • HC (2016b). Fact sheet series: Topics in risk assessment of substances under the Canadian Environmental Protection Act, 1999 (CEPA 1999) . Available at: https://www.canada.ca/en/health-canada/services/chemical-substances/canada-approach-chemicals/risk-assessment.html .
  • HC (2016c). Science approach document: Threshold of toxicological concern (TTC)-based approach for certain substances . Available at: https://www.ec.gc.ca/ese-ees/326E3E17-730A-4878-BC25-D07303A4DC13/HC TTC SciAD EN 2017-03-23.pdf .
  • HC (2016d). Strategic plan 2016-2021. The health Canada pest management regulatory agency . Available at: https://www.canada.ca/content/dam/hc-sc/migration/hc-sc/cps-spc/alt_formats/pdf/pubs/pest/corp-plan/strat-plan/strat-plan-eng.pdf .
  • HC (2017). Acute dermal toxicity study waiver. Science policy note SPN2017-03 . The Health Canada Pest Management Regulatory Agency. Available at: https://www.canada.ca/content/dam/hc-sc/documents/services/consumer-product-safety/reports-publications/pesticides-pest-management/policies-guidelines/science-policy-notes/2017/acute-dermal-toxicity-waiver-spn2017-03-eng.pdf .
  • HC (2018). The rapid screening approach . Available at: https://www.canada.ca/en/health-canada/services/chemical-substances/chemicals-management-plan/initiatives/rapid-screening-approach-chemicals-management-plan.html .
  • HC (2019). Evaluation of the use of toxicogenomics in risk assessment at Health Canada: An exploratory document on current Health Canada practices for the use of toxicogenomics in risk assessment. Health Canada: Toxicogenomics Working Group. Available at: https://www.canada.ca/en/health-canada/services/publications/science-research-data/evaluation-use-toxicogenomics-risk-assessment.html
  • HC (2020). 2019-2020 RCC work plan: Pesticides . Available at: https://www.canada.ca/en/health-canada/corporate/about-health-canada/legislation-guidelines/acts-regulations/canada-united-states-regulatory-cooperation-council/work-plan-crop-protection-2019-2020.html .
  • HC (2021a). A framework for risk assessment and risk management of pest control products . The Health Canada Pest Management Regulatory Agency. Available at: https://www.canada.ca/en/health-canada/services/consumer-product-safety/reports-publications/pesticides-pest-management/policies-guidelines/risk-management-pest-control-products.html .
  • HC (2021b). Background paper: Evolution of the existing substances risk assessment program under the Canadian environmental protection Act, 1999. CMP science committee . Available at: https://www.canada.ca/en/health-canada/services/chemical-substances/chemicals-management-plan/science-committee/meeting-records-reports/background-paper-evolution-existing-substances-risk-assessment-program-canadian-environmental-protection-act-1999.html .
  • HC (2021c). Guidance document for the notification and testing of new chemicals and polymers . Available at: https://www.canada.ca/en/environment-climate-change/services/managing-pollution/evaluating-new-substances/chemicals-polymers/guidance-documents/guidelines-notification-testing.html .
  • HC (2021d). Guidance for developing datasets for conventional pest control product applications: Data codes for parts 1, 2, 3, 4, 5, 6, 7 and 10. Updated 2021. The Health Canada Pest Management Regulatory Agency. Available at: https://www.canada.ca/en/health-canada/services/consumer-product-safety/reports-publications/pesticides-pest-management/policies-guidelines/guidance-developing-applications-data-codes-parts-1-2-3-4-5-6-7-10.html .
  • HC (2021e). Pest Management Regulatory Agency (PMRA) 2019-2020 annual report . Available at: https://www.canada.ca/en/health-canada/services/consumer-product-safety/reports-publications/pesticides-pest-management/corporate-plans-reports/annual-report-2019-2020.html .
  • HC (2021f). Science approach document: Bioactivity exposure ratio: Application in priority setting and risk assessment. 1–58. Available at: https://www.canada.ca/en/environment-climate-change/services/evaluating-existing-substances/science-approach-document-bioactivity-exposure-ratio-application-priority-setting-risk-assessment.html .
  • HC (2022a). “Approaches for addressing data needs in risk assessment,” in Fact sheet series: Topics in risk assessment of substances under the Canadian Environmental Protection Act, 1999 (CEPA 1999). Available at: https://www.canada.ca/en/health-canada/services/chemical-substances/fact-sheets/approaches-data-needs-risk-assessment.html .
  • HC (2022b). Pest management regulatory agency (PMRA) 2020-2021 annual report. Available at: https://www.canada.ca/en/health-canada/services/consumer-product-safety/reports-publications/pesticides-pest-management/corporate-plans-reports/annual-report-2020-2021.html
  • HC (2022c). Science approach documents . Available at: https://www.canada.ca/en/health-canada/services/chemical-substances/science-approach-documents.html .
  • HC (2022d). The risk assessment process for existing substances . Available at: https://www.canada.ca/en/health-canada/services/chemical-substances/canada-approach-chemicals/risk-assessment.html#s3 .
  • HC (2022e). Chemicals management plan . Available at: https://www.canada.ca/en/health-canada/services/chemical-substances/chemicals-management-plan.html .
  • HC (2022f). New substances program . Available at: https://www.canada.ca/en/environment-climate-change/services/managing-pollution/evaluating-new-substances.html .
  • Hilton G. M., Adcock C., Akerman G., Baldassari J., Battalora M., Casey W., et al. (2022). Rethinking chronic toxicity and carcinogenicity assessment for agrochemicals project (ReCAAP): A reporting framework to support a weight of evidence safety assessment without long-term rodent bioassays. Regul. Toxicol. Pharmacol. 131, 105160. 10.1016/j.yrtph.2022.105160
  • HSE (n.d). Vertebrate testing (toxicology). UK health and safety executive . Available at: https://www.hse.gov.uk/pesticides/pesticides-registration/applicant-guide/vertebrate-testing.htm .
  • ICCVAM (2018). A strategic roadmap for establishing new approaches to evaluate the safety of chemicals and medical products in the United States. 10.22427/NTP-ICCVAM-ROADMAP2018
  • Ingenbleek L., Lautz L. S., Dervilly G., Darney K., Astuto M. C., Tarazona J., et al. (2020). “Risk assessment of chemicals in food and feed: Principles, applications and future perspectives,” in Environmental pollutant exposures and public health. Editor Harrison R. M., 1–38. 10.1039/9781839160431-00001
  • ITA (n.d). U.S.-Canada regulatory cooperation Council. International trade administration . Available at: https://www.trade.gov/rcc .
  • KEMI (2021). Data protection for test and study reports. Stockholm: Swedish Chemicals Agency. Available at: https://www.kemi.se/en/pesticides-and-biocides/plant-protection-products/apply-for-authorisation-for-plant-protection-products/data-protection .
  • Knight J., Rovida C., Kreiling R., Zhu C., Knudsen M., Hartung T. (2021). Continuing animal tests on cosmetic ingredients for REACH in the EU. ALTEX 38, 653–668. 10.14573/ALTEX.2104221
  • Krewski D., Andersen M. E., Tyshenko M. G., Krishnan K., Hartung T., Boekelheide K., et al. (2020). Toxicity testing in the 21st century: Progress in the past decade and future perspectives. Arch. Toxicol. 94, 1–58. 10.1007/s00204-019-02613-4
  • Ladics G. S., Price O., Kelkar S., Herkimer S., Anderson S. (2021). A weight-of-the-evidence approach for evaluating, in lieu of animal studies, the potential of a novel polysaccharide polymer to produce lung overload. Chem. Res. Toxicol. 34, 1430–1444. 10.1021/acs.chemrestox.0c00301
  • Lanzoni A., Castoldi A. F., Kass G. E., Terron A., De Seze G., Bal‐Price A., et al. (2019). Advancing human health risk assessment. EFSA J. 17, e170712. 10.2903/j.efsa.2019.e170712
  • Linke B., Mohr S., Ramsingh D., Bhuller Y. (2017). A retrospective analysis of the added value of 1-year dog studies in pesticide human health risk assessments. Crit. Rev. Toxicol. 47, 581–591. 10.1080/10408444.2017.1290044
  • Luechtefeld T., Maertens A., Russo D. P., Rovida C., Zhu H., Hartung T. (2016a). Analysis of Draize eye irritation testing and its prediction by mining publicly available 2008-2014 REACH data. ALTEX 33, 123–134. 10.14573/ALTEX.1510053
  • Luechtefeld T., Maertens A., Russo D. P., Rovida C., Zhu H., Hartung T. (2016b). Analysis of public oral toxicity data from REACH registrations 2008-2014. ALTEX 33, 111–122. 10.14573/ALTEX.1510054
  • Luechtefeld T., Maertens A., Russo D. P., Rovida C., Zhu H., Hartung T. (2016c). Analysis of publically available skin sensitization data from REACH registrations 2008-2014. ALTEX 33, 135–148. 10.14573/altex.1510055
  • Luechtefeld T., Marsh D., Rowlands C., Hartung T. (2018). Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility. Toxicol. Sci. 165, 198–212. 10.1093/TOXSCI/KFY152
  • McGee Hargrove M., Parr-Dobrzanski B., Li L., Constant S., Wallace J., Hinderliter P., et al. (2021). Use of the MucilAir airway assay, a new approach methodology, for evaluating the safety and inhalation risk of agrochemicals. Appl. Vitro Toxicol. 7, 50–60. 10.1089/aivt.2021.0005
  • NAFTA TWG (2016). NAFTA TWG five-year strategy 2016 – 2021. The North American free trade agreement’s (NAFTA) technical working group (TWG) on pesticides . Available at: https://www.canada.ca/content/dam/hc-sc/migration/hc-sc/cps-spc/alt_formats/pdf/pubs/pest/corp-plan/nafta-alena-2016-2021/nafta-strategy-2016-2021-eng.pdf .
  • NICEATM (2021). Alternative methods accepted by US agencies . Available at: https://ntp.niehs.nih.gov/whatwestudy/niceatm/accept-methods/index.html?utm_source=direct&utm_medium=prod&utm_campaign=ntpgolinks&utm_term=regaccept .
  • NRC (2007). Toxicity testing in the 21st century: A vision and a strategy. Washington, DC: National Research Council, The National Academies Press. 10.17226/11970
  • OECD (2004). Test No. 428: Skin absorption: In vitro method. OECD Guidelines for the Testing of Chemicals, Section 4. Paris: OECD Publishing. 10.1787/9789264071087-en
  • OECD (2017). Guidance document on considerations for waiving or bridging of mammalian acute toxicity tests. OECD Series on Testing and Assessment, No. 237. Paris: OECD Publishing. 10.1787/9789264274754-en
  • OECD (2019). Decision of the Council concerning the mutual acceptance of data in the assessment of chemicals . OECD/LEGAL/0194 . Available at: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0194 .
  • OECD (2021a). Guideline No. 497: Defined approaches on skin sensitisation. OECD Guidelines for the Testing of Chemicals, Section 4. Paris: OECD Publishing. 10.1787/b92879a4-en
  • OECD (2021b). Work plan for the test guidelines programme (TGP) - as of July 2021 . Available at: https://www.oecd.org/env/ehs/testing/work-plan-test-guidelines-programme-july-2021.pdf .
  • Paul Friedman K., Gagne M., Loo L.-H., Karamertzanis P., Netzeva T., Sobanski T., et al. (2020). Utility of in vitro bioactivity as a lower bound estimate of in vivo adverse effect levels and in risk-based prioritization. Toxicol. Sci. 173, 202–225. 10.1093/toxsci/kfz201
  • PSCI (n.d.). Webinar series on the use of new approach methodologies (NAMs) in risk assessment. PETA Science Consortium international . Available at: https://www.thepsci.eu/nam-webinars/ .
  • Republika Slovenija (2022). Vloga za consko registracijo fitofarmacevtskega sredstva (FFS) [Application for zonal registration of a plant protection product]. Available at: https://www.gov.si/zbirke/storitve/vloga-za-notifikacijo-vloge-za-consko-registracijo-fitofarmacevtskih-sredstev/ .
  • Sachana M., Bal-Price A., Crofton K. M., Bennekou S. H., Shafer T. J., Behl M., et al. (2019). International regulatory and scientific effort for improved developmental neurotoxicity testing. Toxicol. Sci. 167, 45–57. 10.1093/toxsci/kfy211
  • Simmons S. O., Scarano L. (2020). Identification of new approach methodologies (NAMs) for placement on the TSCA 4(h)(2)(C) list: A proposed NAM nomination form . Presentation at PSCI Webinar Series on the Use of New Approach Methodologies (NAMs) in Risk Assessment. Available at: https://www.thepsci.eu/wp-content/uploads/2020/09/Simmons_Identification-of-New-Approach-Methodologies-NAMs.pdf .
  • SZU (n.d.). Information for applicant - vertebrate studies. Vinohrady: The Czech National Institute of Public Health. Available at: http://www.szu.cz/topics/information-for-applicant-vertebrate-studies?lang=2
  • US EPA, OCSPP, and OPP (2016). US EPA - guidance for waiving acute dermal toxicity tests for pesticide formulations & supporting retrospective analysis. Available at: https://www.epa.gov/pesticide-registration/bridging-or-waiving-data-requirements and https://www.epa.gov/sites/production/files/2016-11/documents/acute-dermal-toxicity-pesticide-formulations_0.pdf .
  • van der Zalm A. J., Barroso J., Browne P., Casey W. M., Gordon J., Henry T. R., et al. (2022). A framework for establishing scientific confidence in new approach methodologies. Arch. Toxicol. 10.1007/s00204-022-03365-4

Mote partners with combat wounded veterans on novel approach to coral reef restoration

Focus in 'common garden' test shifts to massive, slow-growing corals to build back reefs the same way they evolved naturally; scientists also prep for another underwater heatwave.


Scientists with Mote Marine Laboratory & Aquarium partnered with 13 veterans affiliated with Combat Wounded Veterans Challenge this month to plant slower-growing boulder corals that provide the framework for a healthy coral reef ecosystem in the Florida Keys.

“Most people, when they think of coral, they think of those branching corals… the elkhorn, the staghorn, but in point of fact, from an ecological and evolutionary ecological perspective, those branching coral are the species that would come in after the basics of a coral reef has been established through an evolutionary process,” said Mote president & CEO Dr. Michael Crosby.

“With these combat wounded vets, we did focus on outplanting these massive slow growers,” he later added. “We have a high degree of confidence that this could be a huge, major advancement in the whole strategy of coral restoration, where everyone has been focusing – including us – on these fast growing, very beautiful branching corals.”

Mote's 13-year partnership with veterans group

This marked the 13th year that Mote has partnered with the 14-year-old St. Petersburg-based national nonprofit, which provides wounded veterans with challenging and inspirational ways to reintegrate into civilian life. In turn, data collected while the veterans are on their missions helps medical professionals advance research in everything from traumatic brain injuries and post-traumatic stress to the development of orthotics and prosthetics.

Some of the waterproof prosthetics used by the veterans had been developed from research conducted in previous joint missions with Mote and CWVC.

Crosby said this partnership is important to him because his father, uncle and younger brother were career military.

“Now, together, we’re on a mission to help restore and bring back our coral reefs,” Crosby said. “It’s just an amazing partnership when you see their dedication and their perseverance and real positive can-do mission driven attitude.

“It’s so rewarding to be part of that mission with them.”

Mote partnership with CWVC sets a record

The veterans, along with Mote scientists, created 4,538 coral microfragments – including a record 2,003 microfragments in one day – and set 30 new anchors for coral trees in Mote’s coral nursery at Looe Key.

Dr. Jason Spadaro, Mote’s Coral Reef Restoration Research Program Manager, Crosby, and other Mote scientists joined the veterans on the reef to outplant 1,234 corals in one day.

“They’re amazing, incredible guys,” Spadaro said. “Most of them were single or double amputees and they were running circles around us.

“That was probably one of the most productive, impactful and positive partnerships that we’ve had – at least since I’ve been at Mote.”

A larger group from Combat Wounded Veterans Challenge will return to the Elizabeth Moore International Center for Coral Reef Research & Restoration on Summerland Key in June, along with a group of teens enrolled in the Palm Harbor-based SCUBAnauts International marine education program, for the ongoing coral restoration.

Underwater heatwave stalls Mission: Iconic Reefs

Mote is one of seven partners working with the National Oceanic and Atmospheric Administration on Mission: Iconic Reefs, a $100 million plan to restore 3 million square feet of coral on seven iconic reefs in the Florida Reef Tract.

The National Oceanic and Atmospheric Administration projects the value of the reef along southeast Florida at $8.5 billion, with more than 70,000 full and part-time jobs generated.

The mission, started in December 2019, suffered a major setback in 2023, when an unprecedented summer heatwave prompted massive coral bleaching.

Staghorn and elkhorn coral were particularly impacted by warm water and subsequent bleaching in a historic climate event that drew international attention after an underwater sensor in Florida Bay recorded a temperature of 101.5 degrees on July 24.

The mission goal is still to increase coral cover at seven sites – the Carysfort Reef Complex and Horseshoe Reef in the Upper Keys; Cheeca Rocks and Sombrero Key in the Middle Keys; and Looe Key, Newfound Harbor Patches and Eastern Dry Rocks in the Lower Keys.

A goal of 25% coral cover by the year 2035

Scientists are working to develop corals that will be resilient to warming oceans, ocean acidification – caused by increased levels of carbon dioxide – disease and other factors.

Healthy corals that retain their zooxanthellae algae create calcium carbonate, or limestone, which makes up a coral reef. Coral reefs provide habitat for almost 25% of life in the ocean and protection from storms for homes along the coast.

Once resilient corals are identified, micro-fragmentation – a process that capitalizes on corals’ natural healing response and allows them to grow more than 25 times faster than normal, with some larger boulder corals growing up to 50 times faster – can be used to accelerate their growth.

That would increase cover on those sites from an estimated 2% prior to the 2023 bleaching event to 25% by 2035.

Mote is working to restore three additional reefs: American Shoal offshore between Sugarloaf Key and the Saddlebunch Keys; Coffins Patch, a shallow reef southeast of Bamboo Key near Marathon, and Ham Reef off of Islamorada.

Testing a method to build an underwater community 

The restoration effort the veterans and SCUBAnauts are assisting on is not technically part of Mission: Iconic Reefs, Spadaro said, but a parallel effort.

“We’re taking kind of an experimental approach where we’re looking at a common garden design, where you have multiple species and multiple genets within those species, all in the same area,” Spadaro said.

Over time, skeletons of dead coral erode. That’s part of the natural life cycle on a living coral reef, too, but without new corals coming in, the reef flattens.

In those “common gardens,” Mote scientists would outplant a variety of corals – branching coral species that provide habitat for fish as well – and monitor the results in comparison to a control plot.

The goal is to determine whether more intense and diverse restoration efforts are slowing the erosion rate, stabilizing the remaining reef structure or increasing the net accretion of calcium carbonate.

The hope is that the build-up of coral will allow other species to grow, aiming to restore “whole ecosystems rather than just coral populations,” Spadaro said. “And by doing that, are we jump-starting fish communities being more dense and rich in those areas?

“Very soon we’ll be adding herbivores to that equation,” he added, referring to Caribbean king crabs meant to control nuisance algae. “Does that facilitate cascading effects … so we push the reef over that tipping point and it takes over recovery on its own?”

Preparing for another underwater heatwave

In late April, NOAA confirmed that the world is currently experiencing its fourth global coral bleaching event and its second in the last 10 years.

"From February 2023 to April 2024, significant coral bleaching has been documented in both the Northern and Southern Hemispheres of each major ocean basin," Dr. Derek Manzello, coordinator of NOAA’s Coral Reef Watch, said in a news release.

Coral scientists learned several lessons during the summer 2023 bleaching event and could conduct an evacuation similar to the one last year.

Crosby noted that Mote has expanded capacity at land-based nurseries to accommodate another 20,000 corals, should another heat-induced bleaching event occur.

“We learned a lot last year – I consider it a successful mission last year – the evacuation and return, putting the corals back in the water, but we learned a heck of a lot as well.”

As of late this month, Crosby said water temperatures at some of the reef sites in the Keys are already approaching some of the highest temperatures for this time of year.

Coral bleaching can occur if ocean temperature is higher than the maximum monthly average by as little as 2 to 3 degrees Fahrenheit, or 1 to 2 degrees Celsius.

If ocean temperatures are higher than the maximum monthly average for a month or more, especially during the warmest part of the year, corals will experience bleaching.

“We’re within a half of a degree of the previous high, so the question is, are we going to continue to accelerate?” Crosby said. “Very, very, very importantly, is Florida Bay water going to heat up?”

Florida Bay is where that 101.5-degree temperature reading occurred. Crosby described it as a huge shallow bathtub, where water can evaporate as it gets hot – increasing the salinity and lowering oxygen levels.

That can prompt organisms to die, decompose and reduce the oxygen level even more.

“And when that water goes through those passes out of Florida Bay, that’s an enormous one, two, three punch to the corals,” Crosby said. “We’re hoping we don’t see that but I think we're better prepared to throw all of our resources at an evacuation if it's needed.”

A plan to weather the next underwater heatwave

Should another heat wave occur, vulnerable Acroporid corals – like staghorn and elkhorn corals – would be relocated to Mote's Upper Keys underwater coral nurseries and monitored.

The massive-form coral species – boulder, brain and star – will remain in underwater nurseries. If signs of significant thermal stress occur, they could be moved among Mote's four underwater coral nurseries throughout the Florida Keys or brought onto land to ride out the heatwave in one of Mote's three state-of-the-art land-based coral nursery facilities in the Keys.

Should corals need to be evacuated farther north, Mote expanded the infrastructure at the Mote Aquaculture Research Park in Sarasota and can mobilize more than 50 additional staff and several research vessels – though evacuation of corals outside of the Keys is still considered an extreme measure, since that too can be stressful.

“Pulling them out is very stressful not just on the corals but on the people,” Spadaro said. “So if we can move them up into the nurseries that did very well last year – Key Largo in particular – that moves them out of that thermal stress. But with the massive-form corals we’re very confident we can be a little more reactive with those.

“We’re leaving them where they are and prepped to pull them out if we need to.”

