Sequential, Multiple Assignment, Randomized Trial Designs in Immuno-oncology Research

  • Version of Record February 14 2018
  • Proof January 25 2018
  • Accepted Manuscript August 23 2017

Kelley M. Kidwell, Michael A. Postow, Katherine S. Panageas; Sequential, Multiple Assignment, Randomized Trial Designs in Immuno-oncology Research. Clin Cancer Res 15 February 2018; 24 (4): 730–736. https://doi.org/10.1158/1078-0432.CCR-17-1355


Clinical trials investigating immune checkpoint inhibitors have led to the approval of anti–CTLA-4 (cytotoxic T-lymphocyte antigen-4), anti–PD-1 (programmed death-1), and anti–PD-L1 (PD-ligand 1) drugs by the FDA for numerous tumor types. In the treatment of metastatic melanoma, combinations of checkpoint inhibitors are more effective than single-agent inhibitors, but combination immunotherapy is associated with increased frequency and severity of toxicity. There are questions about the use of combination immunotherapy or single-agent anti–PD-1 as initial therapy and the number of doses of either approach required to sustain a response. In this article, we describe a novel use of sequential, multiple assignment, randomized trial (SMART) design to evaluate immune checkpoint inhibitors to find treatment regimens that adapt within an individual based on intermediate response and lead to the longest overall survival. We provide a hypothetical example SMART design for BRAF wild-type metastatic melanoma as a framework for investigating immunotherapy treatment regimens. We compare implementing a SMART design to implementing multiple traditional randomized clinical trials. We illustrate the benefits of a SMART over traditional trial designs and acknowledge the complexity of a SMART. SMART designs may be an optimal way to find treatment strategies that yield durable response, longer survival, and lower toxicity. Clin Cancer Res; 24(4); 730–6. ©2017 AACR .

Clinical trials investigating immune checkpoint inhibitors have led to the approval of anti–CTLA-4 (cytotoxic T-lymphocyte antigen-4), anti–PD-1 (programmed death-1), and anti–PD-L1 (PD-ligand 1) drugs by the FDA for numerous tumor types. Immune checkpoint inhibitors are a novel class of immunotherapy agents that block normally negative regulatory proteins on T cells and enable immune system activation. By activating the immune system rather than directly attacking the cancer, immunotherapy drugs differ from cytotoxic chemotherapy and oncogene-directed molecularly targeted agents. Cytotoxic chemotherapy or molecularly targeted agents generally provide clinical benefit during treatment and usually not after treatment discontinuation, whereas immunotherapy benefit may persist after treatment discontinuation.

The anti–CTLA-4 drug ipilimumab was approved for the treatment of metastatic melanoma in 2011 and as adjuvant therapy for resected stage III melanoma in 2015. Inhibition of CTLA-4 is also being tested in other malignancies. In melanoma, ipilimumab improves overall survival but is associated with a 20% rate of grade 3/4 immune-related adverse events ( 1–6 ). Agents that inhibit PD-1 and PD-L1 cause fewer immune-related adverse events than CTLA-4–blocking agents ( 7 ). PD-1 and PD-L1 agents have been approved by the FDA for use in multiple malignancies including, but not limited to, melanoma (nivolumab and pembrolizumab), non–small cell lung cancer (NSCLC; nivolumab, pembrolizumab, and atezolizumab), renal cell carcinoma (nivolumab), and urothelial carcinoma (atezolizumab; refs. 8–10 ). Combinations of checkpoint inhibitors that block both CTLA-4 and PD-1 are more effective than CTLA-4 blockade alone (ipilimumab) in patients with melanoma, but combination immunotherapy is associated with increased frequency and severity of toxicity. Although we build our framework on the FDA-approved combination of anti–PD-1 therapy and ipilimumab, as this reflects the current landscape, one could replace the anti–PD-1 and ipilimumab combination with anti–PD-1 and any other drug to reflect novel combination agents that may become available in the pipeline, such as inhibitors of indoleamine-2,3-dioxygenase (IDO).

Some individuals may not need combination therapy because they may respond to a single agent, and these individuals should not be subjected to the increased toxicities associated with combination therapy. Defining this group of individuals, however, is difficult. Many trials are being proposed to evaluate combinations or sequences of immunotherapy drugs alone or in combination with other treatments such as chemotherapy, radiation, and targeted therapies, or with varied doses and schedules (sequential versus concurrent). The goal of these trials is to increase efficacy and decrease toxicity ( 11 ).

The long-term effect of immune activation by these drugs is unknown. It is also unknown whether individuals need continued treatment. Oncologists must optimize a balance in the clinic, incorporating observed efficacy and toxicity, and informally implement treatment pathways so that treatment may change for an individual depending on the individual's status. Many of these treatment pathways are ad hoc, based on the physician's experience and judgment or on information pieced together from several randomized clinical trials. Formalized, evidence-based treatment pathways to inform decision-making over the course of care are needed. Formal, evidence-based treatment guidelines that adapt treatment based on a patient's outcomes, including efficacy and toxicity, are known as treatment pathways, dynamic treatment regimens ( 12 ), or adaptive interventions ( 13 ). Specifically, a treatment pathway is a sequence of treatment guidelines or decisions that indicate if, when, and how to modify the dosage or duration of interventions at decision stages throughout clinical care ( 14 ). For example, in treating individuals with stage III or stage IV Hodgkin lymphoma, one treatment pathway is as follows: “Treat with two cycles of doxorubicin, bleomycin, vinblastine, and dacarbazine (ABVD). At the end of therapy (6 to 8 weeks), perform positron emission tomography/computed tomography (PET/CT) imaging. Treat with an additional 4 cycles of ABVD if the scan scores 1–3 on the Deauville scale (considered a negative scan). Otherwise, if the scan scores 4–5 on the Deauville scale (considered a positive scan), switch treatment to escalated bleomycin, etoposide, doxorubicin, cyclophosphamide, vincristine, procarbazine, and prednisone (eBEACOPP) for 6 cycles ( 15 ).” Note that one treatment pathway includes an initial treatment followed by subsequent treatment that depends on an intermediate outcome, for all possibilities of that intermediate outcome.
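To make the notion of a treatment pathway concrete, the minimal sketch below (Python; the function name and its inputs are our own illustrative conveniences, not part of ref. 15) encodes the Hodgkin lymphoma example as an explicit decision rule: a fixed initial treatment, an interim assessment, and a second-stage treatment keyed to the interim result.

```python
def hodgkin_pathway(deauville_score: int) -> list[str]:
    """Return the full treatment sequence implied by the interim Deauville score."""
    stage_1 = "ABVD x 2 cycles"
    # Deauville 1-3 is treated as a negative interim scan, 4-5 as positive.
    if deauville_score <= 3:
        stage_2 = "ABVD x 4 additional cycles"
    else:
        stage_2 = "eBEACOPP x 6 cycles"
    return [stage_1, "interim PET/CT", stage_2]

print(hodgkin_pathway(2))  # ['ABVD x 2 cycles', 'interim PET/CT', 'ABVD x 4 additional cycles']
print(hodgkin_pathway(5))  # ['ABVD x 2 cycles', 'interim PET/CT', 'eBEACOPP x 6 cycles']
```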

Treatment pathways are difficult to develop in traditional randomized clinical trial settings because they specify adapting treatments over time for an individual based on response and/or toxicity. Treatments may have delayed effects such that the best initial treatment is not part of the best overall treatment regimen. For example, one treatment may initially produce the best response rate, but that treatment may also be so aggressive that those who do not respond cannot tolerate additional treatment; another treatment may produce a lower proportion of responders initially but can be followed by an additional treatment that rescues more nonresponders and leads to a better overall response rate and longer survival. Thus, the treatments that are best individually do not necessarily produce the best overall outcomes when given in combination or sequence. The sequential, multiple assignment, randomized trial (SMART; refs. 16, 17 ) is a multistage trial that is designed to develop and investigate treatment pathways. SMART designs can investigate delayed effects as well as treatment synergies and antagonisms, and provide robust evidence about the timing, sequences, and combinations of immunotherapies. Furthermore, treatment pathways may be individualized to find baseline and time-varying clinical and pathologic characteristics associated with optimal response.

In this article, we describe a novel use of SMART design to evaluate immuno-oncologic agents. We provide a hypothetical example SMART design for metastatic melanoma as a framework for investigating immunotherapy treatment. We compare implementation of a SMART design with implementation of multiple traditional randomized clinical trials. We illustrate the benefits of a SMART over traditional trial designs and acknowledge the complexity of a SMART. SMART designs may be an optimal way to find treatment strategies that yield durable response, longer survival, and lower toxicity.

A SMART is a multistage, randomized trial in which each stage corresponds to an important treatment decision point. Participants are enrolled in a SMART and followed throughout the trial, but each participant may be randomized more than once. Subsequent randomizations allow for unbiased comparisons of post-initial randomization treatments and comparisons of treatment pathways. The goal of a SMART is to develop and find evidence of effective treatment pathways that mimic clinical practice.

In a generic two-stage SMART, participants are randomized between several treatments (usually 2–3; Fig. 1 ). Participants are followed, and an intermediate outcome is assessed over time or at a specific time. On the basis of the intermediate outcome, participants may be classified into groups and re-randomized to subsequent treatment. The intermediate outcome is a measure of early success or failure that allows identification of those who may benefit from a treatment change. This intermediate outcome, also known as a tailoring variable, should have only a few categories so that it is a low-dimensional summary that is well defined, agreed upon, implementable in practice, and informative about the overall endpoint. The intermediate outcome need not be defined as response/nonresponse, or more specifically as tumor response; it may instead be defined differently, such as adherence to treatment, a composite of efficacy measures, or efficacy and toxicity measures. It is imperative that the intermediate outcome be validated and replicable. Although the two-stage design is most commonly used, SMARTs are not limited to two stages, as illustrated by a SMART that investigated treatment strategies in prostate cancer ( 18 ).

Figure 1. A generic two-stage SMART design where participants are randomized between any number of treatments A1 to AJ. Response is measured at some intermediate time point or over time such that responders are re-randomized in the second stage between any number of treatments B1 to BK and nonresponders are re-randomized between any number of treatments C1 to CL. The same participants are followed throughout the trial. R denotes randomization.


A SMART is similar to other commonly used trial designs but has unique features that enable the development of robust evidence of effective treatment strategies. The SMART design is a type of sequential factorial trial design in which the second-stage treatment is restricted based on the previous response. A SMART design is similar to a crossover trial in that the same participants are followed throughout the trial and participants may receive multiple treatments. However, in a SMART, subsequent treatment is based on the response to the previous treatment, and a SMART design takes advantage of treatment interactions as opposed to washing out treatment effects (i.e., a SMART does not require time in between treatments to eliminate carryover effects from the initial treatment on the assessment of the second-stage treatment).

We focus this overview on SMART designs that are nonadaptive. In a nonadaptive SMART, the operating characteristics of the trial, including randomization probabilities and eligibility criteria, are predetermined and fixed throughout the trial. Treatment may adapt within a participant based on intermediate response, but randomization probabilities or other trial-operating characteristics do not change for future participants based on previous participants' results.

By following the same participants over the trial, a SMART enables the development of evidence for treatment pathways that specify an initial treatment, followed by a maintenance treatment for responders and rescue treatment for nonresponders. These treatment pathways are embedded within a SMART design, but within the trial, participants are randomized to treatments based on the intermediate outcome to enable unbiased comparisons and valid causal inference. The end goal of the trial is to provide definitive evidence for treatment pathways to be used in practice. The SMART design has been used in oncology ( 19, 20 ), mental health ( 21 ), and other areas ( 22 ), but to our knowledge, this is the first description of using a SMART in immuno-oncology.

Ipilimumab and anti–PD-1 therapy are currently approved to treat metastatic melanoma. However, combinations of these and other immunotherapy drugs may cause toxic events, and it remains unclear whether patients should start with these combinations or start with single-agent anti–PD-1 therapy and receive the additional treatments upon disease progression. There are also questions about the number of doses required to sustain a response for single-agent or combination therapy. The best treatment strategy that provides enough therapy for a sustained response while limiting toxicities is unknown. A SMART design may address these questions to provide rigorous evidence for the best immunotherapy treatment pathway for individuals. Our proposed example focuses on patients with BRAF wild-type metastatic melanoma to avoid the additional complexity of incorporating BRAF and MEK inhibitors into the treatment regimen of patients with BRAF-mutant melanoma.

In a hypothetical SMART design to investigate treatment strategies including anti–PD-1 therapy and ipilimumab, participants may be randomized in the first stage to receive four doses of single-agent anti–PD-1 therapy (pembrolizumab 2 mg/kg or nivolumab 240 mg) or combination nivolumab (1 mg/kg) and ipilimumab (3 mg/kg; Fig. 2 ; note that these drugs could be replaced with any novel immunotherapy or approved drug). During follow-up, participants would be evaluated for their tumor response; the intermediate outcome in this SMART would be defined by disease response after four doses of immunotherapy (week 12). Although Response Evaluation Criteria in Solid Tumors (RECIST) could be used to define disease response, favorable response could also be defined as any decline in total tumor burden, even in the presence of new lesions, as specified by principles related to immune-related response criteria ( 23 ).
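As one illustration of how such a tailoring variable might be operationalized, the sketch below (Python; the function, its inputs, and the burden measure are hypothetical and not part of the proposed protocol) classifies the week-12 intermediate outcome under the immune-related principle that any decline in total tumor burden is favorable, even if new lesions have appeared.

```python
def intermediate_response(baseline_burden_mm: float, week12_burden_mm: float,
                          new_lesions: bool = False) -> bool:
    """Classify the intermediate outcome after four doses (approximately week 12)."""
    # new_lesions is accepted but deliberately ignored: under immune-related
    # response principles, a decline in total burden is favorable regardless.
    return week12_burden_mm < baseline_burden_mm

print(intermediate_response(85.0, 60.0, new_lesions=True))   # True  -> responder arm
print(intermediate_response(85.0, 95.0, new_lesions=False))  # False -> nonresponder arm
```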

Figure 2. A hypothetical two-stage SMART design in the setting of BRAF wild-type metastatic melanoma. Participants are initially randomized to either single-agent anti–PD-1 therapy or to a combination of anti–PD-1 therapy + ipilimumab (Ipi). Note that Ipi may be replaced by any novel combination agent. After four doses or approximately 12 weeks, response is measured. Those who did not respond to the single agent are re-randomized to receive Ipi or the combination. Those who did respond to single-agent anti–PD-1 are re-randomized to continue the single agent or discontinue therapy. Those who did not respond initially to the combination receive standard of care and those who did respond are re-randomized to continue the combination or discontinue therapy. Subgroups 1 to 7 denote the subgroups that any one participant may fall into. There are six embedded treatment pathways in this SMART, and each one is made up of 2 subgroups: {1,3}, {1,4}, {2,3}, {2,4}, {5,6}, and {5,7}. R denotes randomization.


In the second stage of the trial, responders to either initial treatment would be re-randomized to continue versus discontinue their initial treatment. Specifically, participants who responded to single-agent anti–PD-1 would be re-randomized to continue current treatment for additional doses up to 2 years or to discontinue treatment, and participants who responded to the combination of anti–PD-1 + ipilimumab would be re-randomized to continue anti–PD-1 maintenance or discontinue treatment. Participants who did not respond to single-agent anti–PD-1 by 12 weeks would be re-randomized to receive ipilimumab or the combination of anti–PD-1 and ipilimumab. Participants who did not respond to the combination therapy would receive the standard of care (e.g., oncogene-directed targeted therapy if appropriate, chemotherapy, or consideration for clinical trials; Fig. 2 ). As newer drugs become available and show promise for nonresponders to combination therapy, we anticipate that there could be an additional randomization for these nonresponders to explore additional treatment pathways. All participants would be followed for at least 28 months. The overall outcome of the trial would be overall survival. Any participant who experienced major toxicity at any time or progressive disease in the second stage would be removed from the study and treated as directed by the treating physician.

Participants belong to exactly one subgroup ( Fig. 2 ) in a SMART. Two subgroups make up one treatment pathway, because a treatment pathway describes the clinical guidelines for initial treatment and for subsequent treatment of both responders and nonresponders ( Fig. 2 ). Although there are seven subgroups that a participant may belong to, there are six embedded treatment pathways in this SMART design; a short code sketch after the list below makes the subgroup-to-pathway mapping concrete. The six treatment pathways include the following:

(1) First begin with single-agent anti–PD-1 therapy. If no response to single-agent anti–PD-1 therapy, then switch to single-agent ipilimumab. If response to single-agent anti–PD-1, then continue treatment (subgroups 1 and 3);

(2) First begin with single-agent anti–PD-1 therapy. If no response to single-agent anti–PD-1 therapy, then switch to single-agent ipilimumab. If response to single-agent anti–PD-1, then discontinue treatment (subgroups 1 and 4);

(3) First begin with single-agent anti–PD-1 therapy. If no response to single-agent anti–PD-1 therapy, then add ipilimumab to anti–PD-1 therapy. If response to single-agent anti–PD-1 therapy, then continue treatment (subgroups 2 and 3);

(4) First begin with single-agent anti–PD-1 therapy. If no response to single-agent anti–PD-1 therapy, then add ipilimumab to anti–PD-1 therapy. If response to single-agent anti–PD-1 therapy, then discontinue treatment (subgroups 2 and 4);

(5) First begin with combination anti–PD-1 therapy + ipilimumab. If no response to combination anti–PD-1 therapy + ipilimumab, then receive standard of care. If response to combination anti–PD-1 therapy + ipilimumab, then continue treatment (subgroups 5 and 6); and

(6) First begin with combination anti–PD-1 therapy + ipilimumab. If no response to combination anti–PD-1 therapy + ipilimumab then receive standard of care. If response to combination anti–PD-1 therapy + ipilimumab, then discontinue treatment (subgroups 5 and 7).
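The following minimal sketch (Python; the data structure and helper function are illustrative conveniences, not trial software) records the mapping between the seven subgroups in Fig. 2 and the six embedded treatment pathways listed above.

```python
# Each embedded pathway pairs a nonresponder subgroup with a responder subgroup
# under the same initial treatment; the labels follow the text and Fig. 2.
EMBEDDED_PATHWAYS = {
    1: {"initial": "anti-PD-1", "if_no_response": "switch to ipilimumab",
        "if_response": "continue treatment", "subgroups": (1, 3)},
    2: {"initial": "anti-PD-1", "if_no_response": "switch to ipilimumab",
        "if_response": "discontinue treatment", "subgroups": (1, 4)},
    3: {"initial": "anti-PD-1", "if_no_response": "add ipilimumab",
        "if_response": "continue treatment", "subgroups": (2, 3)},
    4: {"initial": "anti-PD-1", "if_no_response": "add ipilimumab",
        "if_response": "discontinue treatment", "subgroups": (2, 4)},
    5: {"initial": "anti-PD-1 + ipilimumab", "if_no_response": "standard of care",
        "if_response": "continue treatment", "subgroups": (5, 6)},
    6: {"initial": "anti-PD-1 + ipilimumab", "if_no_response": "standard of care",
        "if_response": "discontinue treatment", "subgroups": (5, 7)},
}

def second_stage(pathway_id: int, responded: bool) -> str:
    """Look up the second-stage rule a participant on a given pathway would follow."""
    pathway = EMBEDDED_PATHWAYS[pathway_id]
    return pathway["if_response"] if responded else pathway["if_no_response"]

print(second_stage(3, responded=False))  # 'add ipilimumab'
print(second_stage(6, responded=True))   # 'discontinue treatment'
```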

A SMART may have several scientific aims; some resemble those of traditional trials, and others, pertaining to the treatment pathways, differ. As in standard trials, it is important to identify and power the trial for a primary aim. Secondary aims and multiple comparisons may additionally be powered for, using any type I error-control method ( 24 ). In metastatic melanoma, the SMART may be designed to answer one of the following four questions:

(1) Does a treatment strategy that begins with single-agent anti–PD-1 or combination anti–PD-1 and ipilimumab therapy lead to the longest overall survival?

(2) For responders to initial therapy, does continuing or discontinuing treatment provide the longest overall survival?

(3) For nonresponders to single-agent anti–PD-1 therapy, does ipilimumab or the combination of ipilimumab and anti–PD-1 therapy provide the longest overall survival?

(4) Is there a difference in the overall survival between the six embedded treatment pathways?

Questions similar to numbers 1, 2, and 3 could be answered in three separate, traditional, parallel-arm clinical trials. The traditional paradigm would run a single-stage trial (e.g., single-agent vs. combination therapy) to determine the most effective therapy. A first trial may investigate single-agent anti–PD-1 versus the combination of anti–PD-1 and ipilimumab. Another trial, with a randomized discontinuation design, could identify whether continuing or discontinuing treatment leads to longer overall survival for individuals who received the most effective therapy (e.g., anti–PD-1 alone or in combination with ipilimumab). A third trial could determine, for those refractory to anti–PD-1 therapy, whether ipilimumab or the combination of ipilimumab and anti–PD-1 therapy results in longer survival. For each of these three traditional trials, the sample size calculation and analysis are standard: each trial is powered for and analyzed as a two-group comparison with a survival outcome.

If question 1, 2, or 3 is the primary aim of a SMART, the sample size and analysis plan are also standard; however, for questions 2 and 3, the calculated sample size must be inflated. For question 2, the sample size must be inflated on the basis of the assumed response rates to first-stage therapies. Specifically, if 40% respond to single-agent therapy and 55% to combination therapy, the calculated two-group comparison sample size must be increased accordingly to ensure that the SMART will have sufficient responders in the second stage. For question 3, the sample size must instead be inflated for the expected percentage of nonresponders to anti–PD-1 therapy. Similarly, in a standard one-stage trial to address question 2 (or 3), more patients would need to be screened to account for response status, but unlike in a SMART, the nonresponders (responders) would not be followed. Furthermore, implementing three separate trials may not provide robust evidence for entire treatment pathways and instead provides evidence only for the best treatments at specific time points.
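One simple way to carry out this inflation, sketched below in Python, divides the standard two-group sample size by the expected proportion of responders; pooling across the two first-stage arms assumes equal (1:1) first-stage randomization, and the value of n_comparison is purely illustrative, not a published figure.

```python
import math

def inflate_for_responders(n_comparison: int, response_rates: list[float]) -> int:
    """Participants to enroll in stage 1 so that, in expectation, enough responders
    are available for the second-stage (question 2) comparison."""
    # Expected response rate pooled over the first-stage arms (1:1 randomization assumed).
    expected_response_rate = sum(response_rates) / len(response_rates)
    return math.ceil(n_comparison / expected_response_rate)

# Illustrative: if a standard two-group comparison among responders needed 300
# participants, and 40% / 55% respond to single-agent / combination therapy:
print(inflate_for_responders(300, [0.40, 0.55]))  # 632 participants enrolled in stage 1
```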

For a SMART powered on question number 1, 2, or 3, the analysis of treatment pathways would be exploratory and hypothesis generating, to be confirmed in a follow-up trial. Alternatively, the SMART may be powered to compare the embedded treatment pathways (question 4) rather than the stage-specific differences. Comparisons of pathways require power calculations and analytic methods specific to SMART designs. Currently, the only sample-size calculator available for a SMART design with a survival outcome compares two specific treatment pathways using a weighted log-rank test. This calculator would apply to designs similar to the hypothetical melanoma SMART only if the nonresponders to anti–PD-1 therapy were not re-randomized (i.e., if there were only 4 embedded treatment pathways instead of 6; ref. 25 ). Any other SMART design (e.g., our hypothetical design in Fig. 2 ) or any other test (e.g., a global test of equality across all treatment pathways, or finding the best set of treatment pathways using multiple comparisons with the best) requires statistical simulation. Other sample size calculations exist for survival outcomes but do not have an easy-to-implement calculator ( 26, 27 ). Methods are available to estimate survival ( 28, 29 ) and compare ( 25, 26, 30–32 ) treatment pathways with survival outcomes, and R packages ( 33 ) can aid in the analysis.

In this example, we calculate the sample sizes required to implement three single-stage trials versus one trial using a SMART design. For the first single-stage trial, we assume a log-rank test; 1-year survival rates of 80% and 68% for combination and single-agent anti–PD-1, respectively; exponential survival distributions; 1 year of accrual; and an additional 2.5 years of follow-up. The same assumptions were applied to continuing versus discontinuing the initial treatment (this is a conservative sample size for this trial, because the survival rates at 1 year would likely be closer together and require more patients). To keep the assumptions consistent across the single-stage trials and the SMART design, the 1-year survival rate for those who did not respond to single-agent anti–PD-1 therapy and received ipilimumab was set to 68%, and for those who received the combination of anti–PD-1 and ipilimumab it was set to 74%. Parameters for the SMART were specified to mimic the single-stage settings, with the additional assumptions of a 40% response rate to initial therapy and 1-year survival rates of 69%, 68%, 75%, 74%, 80%, and 74% for treatment pathways 1 through 6, respectively. For the SMART, power for a weighted log-rank test of any difference among the six treatment pathways was computed via simulation ( 30, 33 ). With these assumptions, 570 participants are required to detect any difference among the six embedded treatment pathways within one SMART ( Table 1 ). This sample size is less than the 1,142 participants required by summing the sample sizes, under the same assumptions, across three traditional single-stage trials. We note that using a global test in the SMART requires fewer participants and that, potentially, one of the trials in the single-stage setting could be dropped on the basis of previous trial results. However, a SMART allows us to answer many questions simultaneously and to find optimal treatment pathways potentially ignored in the single-stage setting.
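Power for the single-stage comparisons can be checked with a short simulation. The sketch below (Python) simulates only the first single-stage trial, under the stated assumptions of exponential survival with 80% versus 68% survival at 1 year, uniform accrual over 1 year, and 2.5 additional years of follow-up, and applies an ordinary two-group log-rank test. It is not the weighted log-rank, six-pathway calculation used for the SMART, and the per-arm sample size and number of replications are illustrative choices, not the published figures.

```python
import numpy as np

def logrank_chi2(time, event, group):
    """Ordinary (unweighted) two-group log-rank chi-square statistic."""
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        deaths = (time == t) & (event == 1)
        d = deaths.sum()
        d1 = (deaths & (group == 1)).sum()
        if n > 1:
            obs_minus_exp += d1 - d * n1 / n
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return obs_minus_exp ** 2 / var if var > 0 else 0.0

def simulated_power(n_per_arm, surv_1yr=(0.68, 0.80), accrual=1.0,
                    extra_followup=2.5, reps=1000, crit=3.841, seed=0):
    """Estimate power of a two-sided 5% log-rank test under exponential survival."""
    rng = np.random.default_rng(seed)
    hazards = [-np.log(s) for s in surv_1yr]            # hazard from 1-year survival
    study_end = accrual + extra_followup
    rejections = 0
    for _ in range(reps):
        group = np.repeat([0, 1], n_per_arm)
        t_event = np.concatenate(
            [rng.exponential(1 / hazards[g], n_per_arm) for g in (0, 1)])
        entry = rng.uniform(0, accrual, 2 * n_per_arm)   # uniform accrual
        censor = study_end - entry                       # administrative censoring
        time = np.minimum(t_event, censor)
        event = (t_event <= censor).astype(int)
        if logrank_chi2(time, event, group) > crit:      # chi-square(1), alpha = 0.05
            rejections += 1
    return rejections / reps

print(simulated_power(n_per_arm=200))  # estimated power at an illustrative 200 per arm
```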

Table 1. Comparison of the sample sizes needed for three single trials versus one SMART design

NOTE: The trials in approach 1 would require a total of 1,142 participants versus 570 total participants from one SMART.

A SMART would most likely require less time from start to finish than the single-stage trials because it is unlikely that the single-stage trials would run simultaneously (because the trials based on response to initial treatment would require an actionable result from the first trial; ref. 34 ). Furthermore, because participants are followed throughout the trial and offered follow-up treatment, individuals may be more likely to enroll in the SMART (i.e., the sample of participants in a SMART may be more generalizable) and adhere to treatment ( 34 ).

Beyond the sequences of treatments in a SMART design that are tailored to an individual based on the intermediate outcome, additional analyses (like subgroup analyses in traditional trials) may evaluate more individualized treatment pathways. Information collected at baseline and between baseline and the measurement of the intermediate outcome, including demographic, clinical, and pathologic data, may be used to further individualize treatment sequences for better overall survival. To further personalize treatment pathways, the analysis requires methods specific to SMART data, such as Q-learning or similar methods ( 35, 36 ). Briefly, Q-learning, borrowed from computer science, is an extension of regression to sequential treatments ( 37 ). Q-learning is a series of regressions used to construct a sequence of treatment guidelines that maximize the outcome (e.g., to find more detailed treatment pathways that include baseline and time-varying variables associated with the longest survival). For some individuals, single-agent therapy may be as beneficial as combination therapy, even when combination therapy is better on average across all individuals. In addition, a subgroup of individuals may benefit more from single-agent therapy because of savings in cost and toxicity compared with combination therapy. These questions are unlikely to be powered for in the SMART, but a priori hypotheses can direct the analysis and lead to the identification of more personalized treatment pathways that can be validated in subsequent trials.
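To make the idea of Q-learning as a series of regressions concrete, the sketch below (Python, on simulated data) fits a stage-2 regression of the outcome on a stage-2 covariate and treatment, forms a pseudo-outcome by maximizing that fit over the stage-2 treatment, and then fits a stage-1 regression of the pseudo-outcome on a baseline covariate and the initial treatment. The variable names and the data-generating model are assumptions for illustration, not the trial's variables or the methods of refs. 35 to 37.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)                       # baseline covariate
a1 = rng.choice([-1, 1], size=n)              # stage-1 treatment (randomized)
x2 = 0.5 * x1 + rng.normal(size=n)            # covariate observed before stage 2
a2 = rng.choice([-1, 1], size=n)              # stage-2 treatment (re-randomized)
# Simulated outcome (larger is better); treatment effects depend on the covariates.
y = x1 + x2 + a2 * (0.8 * x2 - 0.2) + a1 * (0.5 * x1 + 0.1) + rng.normal(size=n)

def ols(design, target):
    """Ordinary least squares via numpy."""
    coef, *_ = np.linalg.lstsq(design, target, rcond=None)
    return coef

# Stage 2: Q2(x2, a2) = b0 + b1*x2 + b2*a2 + b3*x2*a2, fit to the observed outcome.
b2 = ols(np.column_stack([np.ones(n), x2, a2, x2 * a2]), y)
def q2(x2v, a2v): return b2[0] + b2[1] * x2v + b2[2] * a2v + b2[3] * x2v * a2v

# Pseudo-outcome: the best outcome achievable from stage 2 onward for each person.
pseudo_y = np.maximum(q2(x2, -1), q2(x2, 1))

# Stage 1: Q1(x1, a1) fit to the pseudo-outcome.
b1 = ols(np.column_stack([np.ones(n), x1, a1, x1 * a1]), pseudo_y)
def q1(x1v, a1v): return b1[0] + b1[1] * x1v + b1[2] * a1v + b1[3] * x1v * a1v

def recommend(x1v, x2v=None):
    """Stage-1 rule from baseline data; stage-2 rule once the interim covariate is seen."""
    best_a1 = 1 if q1(x1v, 1) >= q1(x1v, -1) else -1
    if x2v is None:
        return best_a1
    return best_a1, (1 if q2(x2v, 1) >= q2(x2v, -1) else -1)

print(recommend(0.5))         # recommended initial treatment for x1 = 0.5
print(recommend(0.5, -1.0))   # recommended (stage-1, stage-2) pair
```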

This article has focused on an example SMART in BRAF wild-type metastatic melanoma to answer questions about the best treatment pathways, including ipilimumab and anti–PD-1 therapy. As new immunotherapies are available for trials, ipilimumab may ultimately be replaced in this type of design by one of the more novel drugs (e.g., inhibitors of the immunosuppressive enzyme IDO or other checkpoint inhibitors such as drugs targeting lymphocyte-activation gene 3, “LAG-3”). Our proposed SMART design could be considered as a template for testing any number of these potential future possible combinations.

A SMART design may be a more efficient trial design to understand which immunotherapy treatment pathways in BRAF wild-type metastatic melanoma lead to the longest overall survival. SMARTs can definitively evaluate the treatment pathways that many physicians use in practice, leading to the recommendation of treatments over time based on individual response. A single SMART can enroll and continue to follow participants throughout the course of care to provide evidence for beginning treatment with single-agent anti–PD-1 or combination therapy and the optimal number of doses needed to sustain a response while limiting toxicity.

Of course, a SMART design is not limited to providing robust evidence for treatment pathways in BRAF wild-type metastatic melanoma but can help develop and test treatment pathways that lead to optimal outcomes in other melanomas, cancers, and diseases. We acknowledge our SMART proposal is inherently limited by heterogeneity in some of the treatment pathways, such as in the “Standard-of-care” box in subgroup 5. In our melanoma example, this box could include diverse treatments such as chemotherapy, inhibitors of other molecular drivers such as imatinib for patients with KIT mutations, and other potentially effective immunotherapy agents. How the various treatments within this pathway affect overall outcomes remains unknown in our proposed design.

A SMART requires fewer total participants and can be implemented and analyzed in a shorter period of time than several single-stage, standard two-arm trial designs executed separately ( 34 ). However, a SMART requires a commitment to more participants at trial initiation than any individual standard trial, and logistics may be more complex in a SMART because participants are re-randomized at an intermediate time point ( 34 ). With current technology that can handle multisite interim randomizations, or the ability to randomize participants upfront to follow particular treatment pathways, the increased logistical complexity should not outweigh the benefits of finding optimal immunotherapy treatment pathways from SMART designs.

The SMART design, even when powered on questions regarding the best initial treatment in a pathway or the best strategy for responders or nonresponders (i.e., question 1, 2, or 3 from the previous section), may be more beneficial than multiple traditional single-stage designs. A SMART can conclusively answer one such question while additional analyses address questions concerning treatment pathways that may be relevant to clinical practice, such as how long to remain on immunotherapy. Furthermore, SMART designs can identify treatment interactions when treatments differ in the first and second stages (i.e., a SMART design that differs from that in Fig. 2 by re-randomizing to different treatments in the second stage as opposed to continuing or discontinuing initial treatment), and there may be delayed effects of initial treatments that modify the effects of follow-up treatments. Single-stage trials cannot evaluate these interactions between first- and second-stage treatments that depend on intermediate outcomes.

More novel trial designs, including the SMART, may be needed to answer pertinent treatment questions and provide robust evidence for effective treatment regimens, especially in immuno-oncology research, where novel combinations are frequently being proposed. A SMART can examine the treatment sequences and combinations of immunotherapies and other drugs that lead to the longest overall survival with decreased toxicities. SMART designs may also be able to verify potential optimal treatment pathways identified from dynamic mathematical modeling ( 38 ). SMARTs may require a paradigm shift for practicing physicians, pharmaceutical companies, and guidance agencies: testing and approving treatment regimens that may adapt within an individual along the course of care, as opposed to testing and approving agents at particular snapshots in time and piecing these snapshots together, trusting that the pieces tell the full story.

Disclosure of Potential Conflicts of Interest

M.A. Postow reports receiving commercial research grants from Bristol-Myers Squibb, speakers bureau honoraria from Bristol-Myers Squibb and Merck, and is a consultant/advisory board member for Array BioPharma, Bristol-Myers Squibb, Merck, and Novartis. No potential conflicts of interest were disclosed by the other authors.

Authors' Contributions

Conception and design: K.M. Kidwell, M.A. Postow, K.S. Panageas

Development of methodology: K.M. Kidwell, K.S. Panageas

Analysis and interpretation of data (e.g., statistical analysis, biostatistics, computational analysis): K.M. Kidwell, K.S. Panageas

Writing, review, and/or revision of the manuscript: K.M. Kidwell, M.A. Postow, K.S. Panageas

Study supervision: K.S. Panageas

Acknowledgments

This study was supported by the Memorial Sloan Kettering Cancer Center Core grant (P30 CA008748; to K.S. Panageas and M.A. Postow; principal investigator: C. Thompson) and a PCORI Award (ME-1507-31108; to K.M. Kidwell).


Sequential, Multiple Assignment, Randomized Trials (SMART)

  • Living reference work entry
  • First Online: 12 August 2021

Nicholas J. Seewald, Olivia Hackworth & Daniel Almirall

A dynamic treatment regimen (DTR) is a prespecified set of decision rules that can be used to guide important clinical decisions about treatment planning. This includes decisions concerning how to begin treatment based on a patient’s characteristics at entry, as well as how to tailor treatment over time based on the patient’s changing needs. Sequential, multiple assignment, randomized trials (SMARTs) are a type of experimental design that can be used to build effective dynamic treatment regimens (DTRs). This chapter provides an introduction to DTRs, common types of scientific questions researchers may have concerning the development of a highly effective DTR, and how SMARTs can be used to address such questions. To illustrate ideas, we discuss the design of a SMART used to answer critical questions in the development of a DTR for individuals diagnosed with alcohol use disorder.

  • Dynamic treatment regimen
  • Adaptive intervention
  • Tailoring variable
  • Sequential randomization
  • Multistage randomized trial



Acknowledgments

Funding was provided by the National Institutes of Health (P50DA039838, R01DA039901) and the Institute of Education Sciences (R324B180003). Funding for the ExTENd study, which was used to illustrate ideas, was provided by the National Institutes of Health (R01AA014851; PI: David Oslin).

Author information

Authors and Affiliations

University of Michigan, Ann Arbor, MI, USA

Nicholas J. Seewald, Olivia Hackworth & Daniel Almirall

Corresponding author

Correspondence to Nicholas J. Seewald.

Editor information

Editors and Affiliations

Samuel Oschin Comprehensive Cancer Institute, West Hollywood, CA, USA

Steven Piantadosi

Johns Hopkins Center for Clinical Trials, Bloomberg School of Public Health, Baltimore, MD, USA

Curtis L. Meinert

Section Editor information

Statistician, MRC Clinical Trials Unit at UCL, Institute of Clinical Trials and Methodology, London, UK

Babak Choodari-Oskooei

MRC Clinical Trials Unit and Institute of Clinical Trials and Methodology, University College London, London, England

Mahesh Parmar

The Johns Hopkins Center for Clinical Trials and Evidence Synthesis, Johns Hopkins School of Public Health, Baltimore, MD, USA

Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA


Copyright information

© 2021 Springer Nature Switzerland AG

About this entry

Cite this entry.

Seewald, N.J., Hackworth, O., Almirall, D. (2021). Sequential, Multiple Assignment, Randomized Trials (SMART). In: Piantadosi, S., Meinert, C.L. (eds) Principles and Practice of Clinical Trials. Springer, Cham. https://doi.org/10.1007/978-3-319-52677-5_280-1


DOI: https://doi.org/10.1007/978-3-319-52677-5_280-1

Received: 09 April 2021

Accepted: 28 April 2021

Published: 12 August 2021

Publisher Name: Springer, Cham

Print ISBN: 978-3-319-52677-5

Online ISBN: 978-3-319-52677-5


A Sequential Adaptive Intervention Strategy Targeting Remission and Functional Recovery in Young People at Ultrahigh Risk of Psychosis: The Staged Treatment in Early Psychosis (STEP) Sequential Multiple Assignment Randomized Trial

Affiliations.

  • 1 Orygen, Melbourne, Victoria, Australia.
  • 2 Centre for Youth Mental Health, The University of Melbourne, Melbourne, Victoria, Australia.
  • 3 Orygen Specialist Program, Melbourne, Victoria, Australia.
  • 4 Department of Psychiatry, Columbia University, New York, New York.
  • 5 Department of Psychiatry and Behavioral Sciences, University of California, Davis, Sacramento.
  • 6 Department of Psychiatry and Behavioral Sciences, University of California, San Francisco.
  • PMID: 37378974
  • PMCID: PMC10308298
  • DOI: 10.1001/jamapsychiatry.2023.1947

Importance: Clinical trials have not established the optimal type, sequence, and duration of interventions for people at ultrahigh risk of psychosis.

Objective: To determine the effectiveness of a sequential and adaptive intervention strategy for individuals at ultrahigh risk of psychosis.

Design, setting, and participants: The Staged Treatment in Early Psychosis (STEP) sequential multiple assignment randomized trial took place within the clinical program at Orygen, Melbourne, Australia. Individuals aged 12 to 25 years who were seeking treatment and met criteria for ultrahigh risk of psychosis according to the Comprehensive Assessment of At-Risk Mental States were recruited between April 2016 and January 2019. Of 1343 individuals considered, 342 were recruited.

Interventions: Step 1: 6 weeks of support and problem solving (SPS); step 2: 20 weeks of cognitive-behavioral case management (CBCM) vs SPS; and step 3: 26 weeks of CBCM with fluoxetine vs CBCM with placebo with an embedded fast-fail option of ω-3 fatty acids or low-dose antipsychotic medication. Individuals who did not remit progressed through these steps; those who remitted received SPS or monitoring for up to 12 months.

Main outcomes and measures: Global Functioning: Social and Role scales (primary outcome), Brief Psychiatric Rating Scale, Scale for the Assessment of Negative Symptoms, Montgomery-Åsberg Depression Rating Scale, quality of life, transition to psychosis, and remission and relapse rates.

Results: The sample comprised 342 participants (198 female; mean [SD] age, 17.7 [3.1] years). Remission rates, reflecting sustained symptomatic and functional improvement, were 8.5%, 10.3%, and 11.4% at steps 1, 2, and 3, respectively. A total of 27.2% met remission criteria at any step. Relapse rates among those who remitted did not significantly differ between SPS and monitoring (step 1: 65.1% vs 58.3%; step 2: 37.7% vs 47.5%). There was no significant difference in functioning, symptoms, and transition rates between SPS and CBCM and between CBCM with fluoxetine and CBCM with placebo. Twelve-month transition rates to psychosis were 13.5% (entire sample), 3.3% (those who ever remitted), and 17.4% (those with no remission).

Conclusions and relevance: In this sequential multiple assignment randomized trial, transition rates to psychosis were moderate, and remission rates were lower than expected, partly reflecting the ambitious criteria set and challenges with real-world treatment fidelity and adherence. While all groups showed mild to moderate functional and symptomatic improvement, this was typically short of remission. While further adaptive trials that address these challenges are needed, findings confirm substantial and sustained morbidity and reveal relatively poor responsiveness to existing treatments.

Trial registration: ClinicalTrials.gov Identifier: NCT02751632 .


Grants and funding

  • U01 MH105258/MH/NIMH NIH HHS/United States

Footnotes to the CAP trial participant flow diagram and outcome tables: follow-up was through routine National Health Service (NHS) electronic vital status and cancer registry databases for diagnoses and deaths notified by November 17, 2021, but that occurred up to March 31, 2021. Practices were randomized prior to the invitation to take part in the trial (see the Randomization subsection of the Methods section). Numbers of men are as of November 17, 2021, and are subject to small changes over time because of continued updates from NHS Digital (eg, changes to the trace status of the men, men newly successfully traced); not all men traced at 15 years were traced at 10 years. Follow-up was pseudo-anonymized, and NHS Digital national data opt-outs (previously type 2 opt-outs) prevented some NHS data from being used for research. 14 P values are from a random-effects Poisson model (see the Statistical Analysis subsection of the Methods section).

The CAP trial supplementary material comprises the following items:

Trial Protocol and Statistical Analysis Plan

eTable 1. Prostate cancer-specific diagnoses and mortality and all-cause mortality at 10-years, 15-years and 18-years post-randomization (and at 18 months for prostate cancer diagnoses) by random allocation and an as randomized estimate of the difference between groups

eTable 2. Underlying causes of death in intervention vs control groups at 15-year median follow-up (not including prostate cancer)

eTable 3. Effect of the CAP trial intervention on characteristics of prostate cancer cases at diagnosis

eTable 4. Sensitivity analyses employing alternative definitions of prostate cancer deaths

eTable 5. Estimated mean and median sojourn time and probability of overdiagnosis

eFigure 1. CAP trial design

eFigure 2. Cumulative incidence of prostate cancer by TNM stage at diagnosis

eFigure 3. Cumulative incidence of prostate cancer by Gleason score at diagnosis

eFigure 4. Comparing simulated data to empirical data for the cumulative prostate cancer incidence and cancer-specific and all-other cause mortality risk among the screened men and the unscreened group

eFigure 5. Comparison of the number of subjects per 100 000-person cohort at death from all causes, by age, between simulated data and CAP data

eFigure 6. Transition diagram for multi-state survival models

CAP Trial Group

Data Sharing Statement


Martin RM , Turner EL , Young GJ, et al. Prostate-Specific Antigen Screening and 15-Year Prostate Cancer Mortality : A Secondary Analysis of the CAP Randomized Clinical Trial . JAMA. Published online April 06, 2024. doi:10.1001/jama.2024.4011


Prostate-Specific Antigen Screening and 15-Year Prostate Cancer Mortality : A Secondary Analysis of the CAP Randomized Clinical Trial

  • 1 Department of Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom
  • 2 National Institute for Health Research Bristol Biomedical Research Centre, University Hospitals Bristol and Weston NHS Foundation Trust and University of Bristol, Bristol, United Kingdom
  • 3 MRC Integrative Epidemiology Unit, University of Bristol, Bristol, United Kingdom
  • 4 Health Data Research UK South-West, University of Bristol, Bristol, United Kingdom
  • 5 Nuffield Department of Surgical Sciences, University of Oxford, Oxford, United Kingdom
  • 6 Department of Applied Health Research, University College London, London, United Kingdom
  • 7 Division of Urology, University of Connecticut Health Center, Farmington
  • 8 Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla
  • 9 Department of Radiology, University of California San Diego, La Jolla
  • 10 Department of Bioengineering, University of California San Diego, La Jolla
  • 11 Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston
  • 12 Department of Cellular Pathology, North Bristol NHS Trust, Bristol, United Kingdom
  • 13 Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden
  • 14 School of Medicine, Cardiff University, Cardiff, Wales, United Kingdom

Question   In men aged 50 to 69 years, does a single invitation for a prostate-specific antigen (PSA) screening test reduce prostate cancer mortality at 15-year follow-up compared with no invitation for testing?

Findings   In this secondary analysis of a randomized clinical trial of 415 357 men aged 50 to 69 years randomized to a single invitation for PSA screening (n = 195 912) or a control group without PSA screening (n = 219 445) and followed up for a median of 15 years, risk of death from prostate cancer was lower in the group invited to screening (0.69% vs 0.78%; mean difference, 0.09%) compared with the control group.

Meaning   Compared with no invitation for routine PSA testing, a single invitation for a PSA screening test reduced prostate cancer mortality at a median follow-up of 15 years, but the absolute mortality benefit was small.

Importance   The Cluster Randomized Trial of PSA Testing for Prostate Cancer (CAP) reported no effect of prostate-specific antigen (PSA) screening on prostate cancer mortality at a median 10-year follow-up (primary outcome), but the long-term effects of PSA screening on prostate cancer mortality remain unclear.

Objective   To evaluate the effect of a single invitation for PSA screening on prostate cancer–specific mortality at a median 15-year follow-up compared with no invitation for screening.

Design, Setting, and Participants   This secondary analysis of the CAP randomized clinical trial included men aged 50 to 69 years identified at 573 primary care practices in England and Wales. Primary care practices were randomized between September 25, 2001, and August 24, 2007, and men were enrolled between January 8, 2002, and January 20, 2009. Follow-up was completed on March 31, 2021.

Intervention   Men received a single invitation for a PSA screening test with subsequent diagnostic tests if the PSA level was 3.0 ng/mL or higher. The control group received standard practice (no invitation).

Main Outcomes and Measures   The primary outcome was reported previously. Of 8 prespecified secondary outcomes, results of 4 were reported previously. The 4 remaining prespecified secondary outcomes at 15-year follow-up were prostate cancer–specific mortality, all-cause mortality, and prostate cancer stage and Gleason grade at diagnosis.

Results   Of 415 357 eligible men (mean [SD] age, 59.0 [5.6] years), 98% were included in these analyses. Overall, 12 013 and 12 958 men with a prostate cancer diagnosis were in the intervention and control groups, respectively (15-year cumulative risk, 7.08% [95% CI, 6.95%-7.21%] and 6.94% [95% CI, 6.82%-7.06%], respectively). At a median 15-year follow-up, 1199 men in the intervention group (0.69% [95% CI, 0.65%-0.73%]) and 1451 men in the control group (0.78% [95% CI, 0.73%-0.82%]) died of prostate cancer (rate ratio [RR], 0.92 [95% CI, 0.85-0.99]; P  = .03). Compared with the control, the PSA screening intervention increased detection of low-grade (Gleason score [GS] ≤6: 2.2% vs 1.6%; P  < .001) and localized (T1/T2: 3.6% vs 3.1%; P  < .001) disease but not intermediate (GS of 7), high-grade (GS ≥8), locally advanced (T3), or distally advanced (T4/N1/M1) tumors. There were 45 084 all-cause deaths in the intervention group (23.2% [95% CI, 23.0%-23.4%]) and 50 336 deaths in the control group (23.3% [95% CI, 23.1%-23.5%]) (RR, 0.97 [95% CI, 0.94-1.01]; P  = .11). Eight of the prostate cancer deaths in the intervention group (0.7%) and 7 deaths in the control group (0.5%) were related to a diagnostic biopsy or prostate cancer treatment.

Conclusions and Relevance   In this secondary analysis of a randomized clinical trial, a single invitation for PSA screening compared with standard practice without routine screening reduced prostate cancer deaths at a median follow-up of 15 years. However, the absolute reduction in deaths was small.

Trial Registration   isrctn.org Identifier: ISRCTN92187251

In England, the number of men diagnosed with prostate cancer increased by 68% from 28 216 in 2001 to 47 479 in 2019, 1 reflecting population aging and increased prostate-specific antigen (PSA) testing. 2 In the US, approximately 3.3 million men currently live with a diagnosis of prostate cancer. 3 While low-risk prostate cancer progresses slowly and is associated with a low risk of mortality, 4 - 7 aggressive prostate cancer currently causes approximately 12 000 deaths in the UK and 34 700 deaths in the US annually. 3 , 8 The goal of PSA screening is to reduce prostate cancer mortality by early detection of curable disease. However, uncertainty remains regarding the long-term effect of PSA-based screening on mortality. 9 - 11

The Cluster Randomized Trial of PSA Testing for Prostate Cancer (CAP) (N = 415 357) showed that compared with usual care (no screening), an invitation to a single PSA screening test increased the number of prostate cancers diagnosed during the first 18 months of follow-up (the period when PSA testing and subsequent biopsies for men with an elevated level of PSA took place). In this trial, rates of diagnosed prostate cancer in the first 18 months were 2.2 per 1000 person-years in the control group and 10.4 per 1000 person-years in the intervention group ( P  < .001). 10 However, at a median 10-year follow-up, the invitation for a single PSA screening did not reduce prostate cancer mortality compared with the control (0.29% vs 0.30%; rate ratio, 0.96 [95% CI, 0.85-1.08]; P  = .50). 10 This secondary analysis of the CAP trial describes the effects of this single invitation to a PSA screening test, with subsequent diagnostic tests if the PSA level was 3.0 ng/mL or higher (to convert to micrograms per liter, multiply by 1.0), on the prespecified secondary outcome of prostate-cancer mortality at 15-year follow-up compared with standard practice (no screening). 12

The Derby National Research Ethics Service Committee East Midlands approved this study ( ISRCTN92187251 ). The trial protocol and statistical analysis plan are available in Supplement 1 . Participants were enrolled between January 8, 2002, and January 20, 2009. Final follow-up occurred on March 31, 2021. Men who attended PSA testing in the intervention group gave individual written informed consent via the ProtecT study. 13 Individual consent was not sought from men in the control group or from nonresponders in the intervention group. Instead, approval for their identification and linkage to routine electronic records was obtained under Section 251 of the National Health Services Act 2006 from the UK Patient Information Advisory Group (now Confidentiality Advisory Group). 10 All clinical centers had local research governance approval. This study followed the Consolidated Standards of Reporting Trials ( CONSORT ) reporting guideline. Study data were collected using REDCap electronic data capture tools hosted at the University of Bristol.

The CAP trial was a primary care–based cluster randomized clinical trial that tested the effects of a single invitation for a PSA screening test (eFigure 1 in Supplement 2 ) compared with usual care (no screening) on the primary outcome of prostate cancer mortality at a median follow-up of 10 years. The primary outcome has been reported previously. 10 Between September 25, 2001, and August 24, 2007, 785 eligible general practices in the catchment area of 8 hospitals across England and Wales (located in Birmingham, Bristol, Cambridge, Cardiff, Leeds, Leicester, Newcastle, and Sheffield) were randomized before recruitment (Zelen design) to intervention or control groups, and practices were invited to consent to participate. Randomization was blocked and stratified within groups of 10 to 12 neighboring practices using a computerized random number generator. Because allocation preceded the invitation for practices to participate, it was not possible to conceal allocation. A total of 573 practices (73%), including 271 (68%) randomized to the intervention group and 302 (78%) randomized to the control group, agreed to participate ( Figure 1 ).

Men aged 50 to 69 years in each participating randomized general practice were included. Men with prostate cancer on or before the randomization date and those registered as a patient with participating practices on a temporary or emergency basis were excluded. Race and ethnicity for men attending the intervention group PSA test clinic were ascertained by a nurse using standardized definitions as one of a range of baseline characteristics to assess generalizability. 13 Race and ethnicity were defined using UK Office for National Statistics Census categories and recoded as White and other (all other categories collapsed due to low numbers of participants who were not White). Race and ethnicity data were not available from National Health Services (NHS) routine data that we had access to at the time, so we could not compute these data for the control group.

Men in practices randomized to the intervention received a single invitation for a PSA test after counseling. If the resulting PSA level was 3.0 to 19.9 ng/mL, they were offered 10-core transrectal ultrasonography-guided biopsies. All laboratories participated in the UK National External Quality Assessment Service for PSA testing. Test results that did not meet laboratory quality assurance requirements or were lost were considered nonvalid, as were tests for which consent was ambiguous or insufficient blood was obtained. Men in the intervention group diagnosed with localized prostate cancer were invited to participate in a second randomized clinical trial, the ProtecT treatment trial, which randomized participants to active monitoring (consisting of regular PSA testing and clinical review), radical prostatectomy, or radical conformal radiotherapy with neoadjuvant androgen deprivation (eFigure 1 in Supplement 2 ). 15 Men with a PSA level of 20 ng/mL or higher were referred to a urologist and received standard care.
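
As an editorial illustration of the triage rule just described, the minimal sketch below maps a valid PSA result to the next step in the intervention arm. The function name and return labels are invented; only the thresholds come from the text above.

```python
# Hypothetical restatement of the CAP intervention-arm triage rule described above.

def psa_triage(psa_ng_ml: float) -> str:
    """Map a valid PSA result (ng/mL) to the next step in the intervention arm."""
    if psa_ng_ml < 3.0:
        # Below the trial threshold: no further investigation within the trial.
        return "no further trial investigation"
    if psa_ng_ml < 20.0:
        # 3.0-19.9 ng/mL: offered a 10-core transrectal ultrasonography-guided biopsy
        # (and, if localized cancer was found, invitation to the ProtecT trial).
        return "offer TRUS-guided biopsy"
    # 20 ng/mL or higher: referred to a urologist for standard care.
    return "refer to urologist (standard care)"

print(psa_triage(4.2))   # offer TRUS-guided biopsy
print(psa_triage(25.0))  # refer to urologist (standard care)
```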

Men in practices randomized to the control group received standard NHS management but did not receive a formal invitation for PSA testing as part of this study. 16 We assessed cumulative PSA testing for prostate cancer detection in the control group of the CAP trial by longitudinal analysis of a national primary care database (434 236 men from 558 UK primary care practices). 2

The primary outcome of the CAP trial, 10-year prostate cancer mortality, was reported previously. 10 Prespecified secondary outcomes were definite or probable prostate cancer mortality at 15-year follow-up, all-cause mortality at 10-year follow-up, all-cause mortality at 15-year follow-up, all-cause mortality at 5-year follow-up, prostate cancer mortality at 5-year follow-up, disease grade and staging, cost-effectiveness, and health-related quality of life. The protocol did not indicate the time point for assessing prostate cancer grade and staging; these were measured at median follow-up time points of 10 years and 15 years. Previously reported outcomes were all-cause mortality at 10-year follow-up, 10 disease grade and stage at 10-year follow-up, 10 cost-effectiveness, 17 and health-related quality of life. 18 The current report provides results for the remaining secondary outcomes of definite or probable prostate cancer mortality at 15-year follow-up, all-cause mortality at 15-year follow-up, and disease grade and stage at 15-year follow-up. All-cause and prostate cancer mortality at 5-year follow-up were not published separately, but 5-year follow-up data are shown in Kaplan-Meier curves both in the current report and in the publication of the 10-year primary outcome. 10

Prostate cancer mortality at 15-year follow-up was ascertained with death certificates from the Office for National Statistics at NHS England and adjudicated by an independent cause of death evaluation committee using clinical information from hospital medical records and following a standardized protocol. 19 , 20 Prostate cancer stage and Gleason grade were obtained from the National Disease Registration Service 21 (formerly Public Health England) at NHS England and Public Health Wales 22 up to December 31, 2020.

Additional outcomes reported here that were described in the published original statistical analysis plan 10 were (1) mean age at diagnosis in the allocated groups and (2) a sensitivity analysis redefining the primary outcome. This analysis included (1) definite, probable, possible, and treatment-related prostate cancer mortality and (2) definite and treatment-related prostate cancer mortality.

We estimated differences in the risk of prostate cancer diagnosis between the intervention and control groups at 18 months, 10 years, and 15 years to quantify changes in diagnosis rates over long-term follow-up. We calculated mean sojourn time (the period in which a tumor is asymptomatic but detectable by screening) from microsimulation using estimated transition parameters for single episodes of screening between ages 50 to 69 years and overdiagnosis rates as the difference in the cumulative prostate cancer incidence between screened and unscreened groups over a lifetime (further methodologic details are given in the eMethods in Supplement 2 ). 23 , 24
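
The sojourn-time and overdiagnosis estimates come from a fitted microsimulation model; the toy sketch below only illustrates the general mechanics of such a simulation. Every parameter value (onset ages, mean sojourn time, other-cause mortality, screening age) is invented for illustration and is unrelated to the transition parameters actually estimated for the trial.

```python
# Toy microsimulation illustrating mean sojourn time and overdiagnosis.
# All parameter values are invented; they are not the CAP model's fitted parameters.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
screen_age = 60.0                                          # single screening episode

onset_age = rng.uniform(40.0, 90.0, n)                     # age preclinical disease begins
sojourn = rng.exponential(scale=12.0, size=n)              # preclinical, screen-detectable period
clinical_age = onset_age + sojourn                         # age of symptomatic presentation
death_other_age = rng.normal(loc=80.0, scale=9.0, size=n)  # death from other causes

# Screen-detectable at the screening age: disease has begun, has not yet presented,
# and the man is still alive.
detectable = (onset_age <= screen_age) & (clinical_age > screen_age) & (death_other_age > screen_age)

# Overdiagnosis: a screen-detectable cancer that would never have presented
# clinically within the man's lifetime.
overdiagnosed = detectable & (death_other_age < clinical_age)

print(f"mean sojourn time: {sojourn.mean():.1f} years")
print(f"screen-detectable at age {screen_age:.0f}: {detectable.mean():.2%} of the cohort")
print(f"overdiagnosed among the screen-detectable: {overdiagnosed.sum() / detectable.sum():.1%}")
```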

The intervention effect at a median 15-year follow-up (March 31, 2021) was analyzed comparing groups as randomized using random-effects Poisson regression to estimate prostate cancer–specific and all-cause mortality rate ratios (RRs) in intervention vs control practices, allowing for clustering within primary care practices and randomization strata. To allow for variation in the incidence of prostate cancer with age, follow-up for each participant was divided into periods within 5-year age groups. We present rates (per 1000 person-years) and Kaplan-Meier estimates of the cumulative risk (per 100 men) of prostate cancer diagnosis and prostate cancer and all-cause mortality.
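
As a hedged illustration of this kind of analysis, the sketch below fits a Poisson rate-ratio model with a person-years offset and age-band adjustment to simulated practice-level counts, using generalized estimating equations (GEE) with an exchangeable working correlation to account for clustering by practice. This is a stand-in for, not a reproduction of, the random-effects Poisson model the authors describe, and all data in it are simulated.

```python
# Cluster-aware Poisson rate-ratio analysis on simulated practice-level data.
# GEE with an exchangeable working correlation stands in for the random-effects
# Poisson model described above; this is not the trial's analysis code.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.genmod.cov_struct import Exchangeable

rng = np.random.default_rng(2)
rows = []
for practice in range(200):
    arm = int(rng.integers(0, 2))                    # 1 = invited to PSA screening
    for age_band in ["50-54", "55-59", "60-64", "65-69"]:
        person_years = rng.uniform(2_000, 4_000)
        rate = 0.0005 * (0.92 if arm == 1 else 1.0)  # hypothetical death rates per person-year
        rows.append({"practice": practice, "arm": arm, "age_band": age_band,
                     "person_years": person_years,
                     "deaths": rng.poisson(rate * person_years)})
df = pd.DataFrame(rows)

model = smf.gee(
    "deaths ~ arm + C(age_band)",
    groups="practice",
    data=df,
    family=sm.families.Poisson(),
    cov_struct=Exchangeable(),
    offset=np.log(df["person_years"]),
)
result = model.fit()
rr = np.exp(result.params["arm"])
ci_low, ci_high = np.exp(result.conf_int().loc["arm"])
print(f"estimated rate ratio: {rr:.2f} (95% CI, {ci_low:.2f}-{ci_high:.2f})")
```

The exponentiated coefficient on `arm` plays the role of the mortality rate ratio reported in the Results.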

In prespecified analyses described in the original statistical analysis plan ( Supplement 1 ), we used instrumental methods (generalized method of moments estimator) to estimate the effect of attending the PSA screening clinic at a median 15-year follow-up compared with men in the control group who would have attended the clinic if invited, adjusting for age group and using robust SEs to allow for variation between practices. We also compared mean age, prostate cancer clinical stage (T1/T2, T3, and T4/N1/M1 disease), and Gleason score (6 [low-grade], 7 [intermediate grade], and ≥8 [high grade]) at diagnosis between the intervention and control groups using ordered logistic regression.
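
To illustrate the logic of the instrumental-variable (compliance-adjusted) analysis, the sketch below applies a simple Wald-type ratio estimator to the summary figures quoted in this article: the intention-to-treat risk difference is scaled by the difference in attendance between the arms. This is only a back-of-the-envelope illustration; it is not the generalized-method-of-moments estimator the authors used, it ignores clustering and age adjustment, and it will not reproduce the published compliance-adjusted rate ratio of 0.83.

```python
# Back-of-the-envelope Wald-type IV calculation using figures quoted in this article.
# Not the trial's GMM analysis; for illustration of the scaling logic only.

risk_invited = 0.0069      # 15-year cumulative prostate cancer mortality, invited arm
risk_control = 0.0078      # 15-year cumulative prostate cancer mortality, control arm
attended_invited = 0.40    # proportion of the invited arm who underwent PSA testing
attended_control = 0.0     # control arm received no invitation

itt_risk_difference = risk_invited - risk_control
wald_estimate = itt_risk_difference / (attended_invited - attended_control)

print(f"intention-to-treat risk difference: {itt_risk_difference:+.4%}")
print(f"Wald estimate of the risk difference among attenders: {wald_estimate:+.4%}")
```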

Prespecified subgroup analyses investigated variation in the effect of screening on prostate cancer mortality by baseline age group and quintiles of the geographic area–based Index of Multiple Deprivation, a measure of socioeconomic status. An interaction test P value was used to evaluate the evidence against the null hypothesis of equal intervention effect across subgroups.

In accordance with our original analysis plan, 10 we did not conduct multiple imputation analyses. The statistical analysis plan did not specify an intention to adjust P values for multiple comparisons; conventional adjustments assume statistical independence between estimates, which was not the case for analyses of the same outcome at 10 and 15 years. All statistical testing was for superiority, and P values were 2-sided. In interpreting the results, we focused on estimated effects and associated 95% CIs. Results were considered statistically significant if the P value was <.05. All trial analyses were conducted using Stata, version 16.1 (StataCorp LLC).

A total of 911 primary care practices were randomized in 99 geographic areas. Of these, 126 were subsequently excluded as ineligible ( Figure 1 ). 12 Consent rates were 68% (271 of 398) among eligible practices in the intervention group and 78% (302 of 387) among eligible practices in the control group. Overall, 415 357 men (mean [SD] age, 59.0 [5.6] years) registered with these practices were eligible for the intervention (n = 195 912) and control (n = 219 445) groups. Follow-up data for cancer diagnosis and mortality at a median of 15 years after randomization were available for 408 721 of the eligible men (98%), including 189 326 (97%) randomized to the intervention and 219 395 (>99%) randomized to control ( Figure 1 ). In the intervention group, 98% of patients were White and 2% were other race and ethnicity; data were not available for the control group.

Baseline characteristics were similar between intervention and control groups at the practice and individual levels ( Table 1 ). Among people randomized to the intervention who developed prostate cancer (n = 12 013), 9% were missing data for cancer stage and 10% were missing data for Gleason grade. Among people randomized to the control group who developed prostate cancer (n = 12 958), 8% were missing data for cancer stage and 11% were missing data for Gleason grade.

Overall, 75 694 (40%) of the men randomized to the intervention group underwent PSA testing, and 64 425 (34%) had a valid (as defined in the Methods) test result. Of these, 6855 (11%) had a PSA value between 3 and 19.9 ng/mL and were eligible for the ProtecT trial, of whom 5848 (85%) had a prostate biopsy. Cumulative PSA testing for prostate cancer detection in the control group was indirectly estimated at 10% to 15% over a median 10-year follow-up. 2 , 10

After a median follow-up of 15.4 years (IQR, 14.2-16.4; range, 12.2-19.2), there were 1199 deaths due to prostate cancer (rate, 0.47 [95% CI, 0.45-0.50] per 1000 person-years) in the intervention group and 1451 deaths (rate, 0.50 [95% CI, 0.48-0.53] per 1000 person-years) in the control group (RR, 0.92 [95% CI, 0.85-0.99]; P  = .03) ( Table 2 and Figure 2 A). At a median 15-year follow-up, the cumulative risks of prostate cancer mortality were 0.69% (95% CI, 0.65%-0.73%) in the intervention group and 0.78% (95% CI, 0.73%-0.82%) in the control group (risk difference, −0.09% [95% CI, −0.15% to −0.03%]; P  = .02) ( Table 2 and eTable 1 in Supplement 2 ). Using instrumental variable analysis, the prostate cancer mortality RR for the effect of screening among men attending PSA testing clinics was 0.83 (95% CI, 0.68-1.00; P  = .053) ( Table 2 ).

There were 45 084 total deaths (23.2% [95% CI, 23.0%-23.4%]) in the intervention group and 50 336 total deaths (23.3% [95% CI, 23.1%-23.5%]) in the control group (RR, 0.97 [95% CI, 0.94-1.01]; P  = .11) ( Table 2 and Figure 2 B). Other causes of death were similar between the 2 groups (eTable 2 in Supplement 2 ).

Compared with the control group, men in the intervention group were at higher risk of diagnosis of low-grade prostate cancer (2.2% vs 1.6%; risk difference, 0.58% [95% CI, 0.50%-0.67%]) and at lower risk of high-grade prostate cancers (1.2% vs 1.3%; risk difference, −0.15% [95% CI, −0.22% to −0.08%]) over the 15-year follow-up ( P  < .001 for trend). There was a higher risk of localized prostate cancers (3.6% vs 3.1%; risk difference, 0.56% [95% CI, 0.44%-0.67%]) and a lower risk of advanced-stage tumors (0.9% vs 1.1%; risk difference, −0.16% [95% CI, −0.22% to −0.10%]) over the 15-year follow-up in the intervention group vs control group ( P  < .001 for trend) (eTable 3 and eFigures 2 and 3 in Supplement 2 ).

The mortality results were similar when including in the outcome definition prostate cancer–specific deaths judged as possible by the cause of death evaluation committee and when restricting to those judged as definite prostate cancer–specific deaths (eTable 4 in Supplement 2 ). There was little evidence that the intervention effect differed by age group or socioeconomic status ( P  ≥ .46 for interaction) ( Table 3 ). Compared with the control group, intervention group men were a mean 1.22 years (95% CI, 1.02-1.42 years; P  < .001) younger at prostate cancer diagnosis (eTable 3 in Supplement 2 ).

After a median 15-year follow-up, there were 12 013 prostate cancer diagnoses in the intervention group (4.88 [95% CI, 4.80-4.97] per 1000 person-years; cumulative risk, 7.08% [95% CI, 6.95%-7.21%]) and 12 958 in the control group (4.60 [95% CI, 4.52-4.68] per 1000 person-years; cumulative risk, 6.94% [95% CI, 6.82%-7.06%]) ( Table 2 and Figure 2 C). Differences in the risks of prostate cancer diagnosis between the intervention and control groups varied markedly during follow-up. Cumulative risk differences per 1000 men for the intervention vs control groups were 12.23 (95% CI, 11.63-12.84) at 18 months, 4.80 (95% CI, 3.53-6.07) at 10 years, 1.38 (95% CI, −0.38 to 3.14) at 15 years, and 0.86 (95% CI, −1.80 to 3.53) at 18 years (eTable 1 in Supplement 2 ).

For the group aged 50 to 54 years compared with the group aged 65 to 69 years, the mean sojourn time was 12.1 years (95% CI, 12.1-12.2 years) vs 15.3 years (95% CI, 15.2-15.3 years). The mean probability of overdiagnosis in these groups was 9.2% (95% CI, 8.9%-9.4%) vs 20.8% (95% CI, 20.6%-21.0%) (eTable 5 and eFigures 4-6 in Supplement 2 ).

Among the deaths due to prostate cancer, 8 in the intervention group (0.7%) and 7 in the control group (0.5%) were related to a diagnostic biopsy or prostate cancer treatment. 10 Other adverse events were reported previously. 9 , 11

In this secondary analysis of a cluster randomized clinical trial of 415 357 men aged 50 to 69 years, compared with usual care, a single invitation to undergo a PSA test led to an absolute reduction in prostate cancer mortality of 0.09% after a median follow-up of 15 years. However, the magnitude of the effect was small. There was no effect on overall survival. Policymakers considering screening for prostate cancer should consider this small reduction in deaths against the potential adverse effects associated with overdiagnosis and overtreatment of prostate cancer. 6 , 25

The CAP trial previously reported no benefit of a single invitation to PSA screening on the primary outcome of prostate cancer mortality at a median follow-up of 10 years. 10 Prostate-specific antigen testing is increasingly common, 2 particularly among men older than 60 years, 2 , 26 yet definitive evidence on the benefits and harms of PSA screening remains lacking. 25 Analyses reported here are important because of the need for a longer follow-up period to evaluate the effect of PSA detection of prostate cancers, 5 particularly as findings from the ProtecT trial showed no difference in mortality irrespective of treatment over 15 years. 6

The magnitude of reduction in prostate cancer mortality was smaller than the a priori defined effect size considered important for clinical and public health benefit. 12 The harms of PSA testing include overdiagnosis; biopsy complications 9 ; adverse treatment effects on urinary, sexual, and bowel function 11 ; and the potential to miss an aggressive prostate cancer. 10 The clinical trial’s single invitation to a PSA screening test aimed to minimize overdiagnosis and overtreatment compared with other screening trials, but overdiagnosis was still observed after a median 15-year follow-up. The European Randomized Study of Prostate Cancer Screening (ERSPC) randomized clinical trial (N = 162 243), which combined data from 7 centers with different protocols and screening strategies, reported that PSA screening conducted every 2 to 4 years (mean of 1.4 tests per participant) reduced prostate cancer mortality after 16 years (RR, 0.80 [95% CI, 0.72-0.89]). 27 The Prostate, Lung, Colorectal and Ovarian (PLCO) randomized clinical trial (N = 76 683) reported little evidence of prostate cancer mortality benefit after 17 years with annual PSA testing compared with usual care (RR, 0.93 [95% CI, 0.81-1.08]) 28 but was limited by high rates of PSA testing in the control group (mean of 2.7 routine PSA tests over the trial’s 6-year intervention period 29 ) and only 35% adherence to recommendations for diagnostic biopsy. 30 The Stockholm clinical trial compared 1-time PSA screening and diagnostic investigations if the PSA level was higher than 10 ng/mL with an unscreened control group. 31 It demonstrated overdiagnosis of prostate cancer (persistent excess in cumulative prostate cancer incidence in the screening intervention group throughout follow-up), without reduced prostate cancer mortality after 20-year follow-up. Multiple screening tests implemented in the ERSPC and PLCO trials increased overdiagnosis, 32 with evidence of a strong positive correlation between the extent of the absolute prostate cancer mortality reduction achieved by the screening intervention and the extent of overdiagnosis (quantified as the risk difference in cumulative incidence of prostate cancer between the trial arms). 33

This study has several strengths. First, compared with randomizing individual patients, recruitment in general practice clusters is expected to minimize volunteer bias and to reduce contamination of the control group, that is, uptake of PSA testing among control-group men prompted by awareness of the intervention. Cumulative PSA testing in the control arm of this clinical trial was indirectly estimated at 10% to 15% over the median 10-year follow-up, consistent with current UK policy not to recommend screening. A priori estimates suggested that the effect on statistical power of ever undergoing PSA testing during follow-up in the control group (contamination) would be minimal unless the PSA testing rate reached 20%. 12 Second, all practices followed the same screening and diagnosis protocol, providing consistent results. Third, among those with an elevated PSA level, adherence to recommendations for biopsy was high at 85%, similar to the rate in the ERSPC trial (81%) and higher than the rate in the PLCO trial (35%). This feature of the clinical trial would likely improve the potential effectiveness of screening, which depends on patients' willingness to undergo subsequent diagnostic tests. Fourth, the large sample size of this trial contributed to excellent statistical power to detect a clinically meaningful effect size (a prostate cancer mortality RR of 0.87), assuming that PSA testing in the intervention arm was between 35% and 50% and that less than 20% of the control group had PSA testing. 12 Fifth, the comprehensive national electronic health record linkage of all the men in this clinical trial helped attain a follow-up rate of 98% over the median 15-year follow-up period.

This study has several limitations. First, the screening intervention involved a single invitation for a PSA screening test, which is not typical of organized screening programs. Some advanced prostate cancers that might have been identified in subsequent screening rounds were likely missed. Second, NHS electronic records were used to identify prostate cancer, resulting in missing data for clinical characteristics and possible delays in recording diagnoses. Third, prostate cancer mortality at 15 years was a secondary outcome. Fourth, since this clinical trial began, newer diagnostic methods 34 and more effective treatments for advanced and metastatic prostate cancer 35 have been identified. Fifth, few Black men, who are at higher risk of prostate cancer, were included. 36

In this secondary analysis of a randomized clinical trial, a single invitation for PSA screening compared with standard practice without routine screening reduced prostate cancer deaths at a median follow-up of 15 years. However, the absolute reduction in deaths was small.

Accepted for Publication: February 29, 2024.

Published Online: April 6, 2024. doi:10.1001/jama.2024.4011

Corresponding Author: Richard M. Martin, BM, BS, PhD, Department of Population Health Sciences, Bristol Medical School, University of Bristol, Bristol BS8 1UD, United Kingdom ( [email protected] ).

Author Contributions: Dr Turner and Ms Young had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Dr Turner, Ms Young, and Drs Neal, Hamdy, and Donovan contributed equally.

Concept and design: Martin, Turner, Sterne, Noble, Ben-Shlomo, Adolfsson, Davey Smith, Neal, Hamdy, Donovan.

Acquisition, analysis, or interpretation of data: Martin, Turner, Young, Metcalfe, Walsh, Lane, Sterne, Holding, Ben-Shlomo, Williams, Pashayan, Bui, Albertsen, Seibert, Zietman, Oxley, Adolfsson, Mason, Neal, Hamdy, Donovan.

Drafting of the manuscript: Martin, Turner, Young, Sterne, Pashayan, Bui, Davey Smith, Donovan.

Critical review of the manuscript for important intellectual content: Martin, Turner, Metcalfe, Walsh, Lane, Sterne, Noble, Holding, Ben-Shlomo, Williams, Pashayan, Albertsen, Seibert, Zietman, Oxley, Adolfsson, Mason, Davey Smith, Neal, Hamdy, Donovan.

Statistical analysis: Turner, Young, Metcalfe, Sterne, Pashayan, Bui.

Obtained funding: Martin, Turner, Metcalfe, Noble, Pashayan, Neal, Hamdy, Donovan.

Administrative, technical, or material support: Turner, Walsh, Lane, Holding, Williams, Albertsen, Oxley.

Supervision: Martin, Turner, Metcalfe, Sterne, Ben-Shlomo, Pashayan, Zietman, Adolfsson, Mason, Davey Smith, Donovan.

Conflict of Interest Disclosures: Dr Lane reported receiving grants from the UK National Institute for Health and Care Research (NIHR) during the conduct of the study. Dr Sterne reported receiving grants from the NIHR and Health Data Research UK during the conduct of the study. Dr Seibert reported receiving personal fees from Varian Medical Systems; receiving personal fees from, having stock options in, and serving on the scientific advisory board for Cortechs; and receiving grants to the institution from GE Healthcare outside the submitted work. Dr Adolfsson reported being a member of the National Screening Council giving advice on screening to the Swedish National Board of Health and Welfare. Dr Davey Smith reported receiving grants from the Medical Research Council during the conduct of the study and serving on the scientific advisory board for Relation Therapeutics and Insitro outside the submitted work. Dr Hamdy reported receiving grants from the NIHR during the conduct of the study and receiving personal fees from British Journal of Urology International and the UK Policy Advisory Board, Intuitive Surgical and grants from Prostate Cancer UK outside the submitted work. Dr Donovan reported receiving grants from the NIHR for the linked ProtecT study. No other disclosures were reported.

Funding/Support: The Cluster Randomized Trial of PSA Testing for Prostate Cancer (CAP) was funded by grants C11043/A4286, C18281/A8145, C18281/A11326, C18281/A15064, and C18281/A24432 from Cancer Research UK. The UK Department of Health, NIHR provided partial funding. The ProtecT trial was funded by project grants 96/20/06 and 96/20/99 from the NIHR Health Technology Assessment Programme. The NIHR Oxford Biomedical Research Centre provided support through the Surgical Innovation and Evaluation Theme and the Surgical Interventional Trials Unit and Cancer Research UK through the Oxford Cancer Research Centre. Drs Martin and Sterne are supported in part by the NIHR Bristol Biomedical Research Centre, which is funded by the NIHR and is a partnership between University Hospitals Bristol NHS Trust, Weston NHS Foundation Trust, and the University of Bristol. Ms Young is supported in part by the Bristol Trials Centre. Dr Martin is an NIHR Senior Investigator (NIHR202411) and is supported by a Cancer Research UK Programme Grant, the Integrative Cancer Epidemiology Programme (C18281/A29019).

Role of the Funder/Sponsor: The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Group Information: The members of the CAP Trial Group are listed in Supplement 3 .

Disclaimer: The views expressed in this publication are those of the authors and not necessarily those of the NIHR or the UK Department of Health and Social Care. The Office for National Statistics bears no responsibility for the analysis and interpretation of data provided.

Meeting Presentation: This work was presented at the 39th annual congress of the European Association of Urology; April 6, 2024; Paris, France.

Data Sharing Statement: See Supplement 4 .

Additional Contributions: We thank NHS England (formerly NHS Digital), the Office for National Statistics, Public Health Wales, and Public Health England National Cancer Registration and Analysis Service (South-West) for their assistance with this study; the CAP and ProtecT study participants; the CAP trial general practitioners and primary care practice staff; and the ProtecT study research group. Chris Pawsey, BA (Hons); Genevieve Hatton-Brown, BA (Hons); and Tom Steuart-Feilding, BA (Hons), provided administrative support during the trial as part of their employment at the University of Bristol.

Additional Information: This work uses data that have been provided by patients and collected by the NHS as part of their care and support. The data are collated, maintained, and quality assured by the National Disease Registration Service, which is part of NHS England.

BMC Psychiatry

A Sequential Multiple Assignment Randomized Trial (SMART) study of medication and CBT sequencing in the treatment of pediatric anxiety disorders

Bradley S. Peterson

1 Children’s Hospital Los Angeles, Los Angeles, CA USA

2 Department of Psychiatry, Keck School of Medicine at The University of Southern California, Los Angeles, USA

Amy E. West

3 Department of Pediatrics, Keck School of Medicine at the University of Southern California, Los Angeles, USA

John R. Weisz

4 Department of Psychology, Harvard University, Cambridge, USA

Wendy J. Mack

5 Department of Preventive Medicine, Keck School of Medicine at The University of Southern California, Los Angeles, USA

Michele D. Kipke

Robert L. Findling

6 Virginia Commonwealth University, Richmond, USA

Brian S. Mittman

7 Department of Research & Evaluation, Kaiser Permanente, Los Angeles, USA

Ravi Bansal

Steven Piantadosi

8 Brigham And Women’s Hospital, Harvard Medical School, Boston, USA

Glenn Takata

Corinna Koebnick

9 Children’s Bureau of Southern California, Los Angeles, USA

Christopher Snowdy

Marie Poulsen, Bhavana Kumar Arora, Courtney M. Allem, Marisa Perez

10 Hathaway-Sycamores Child and Family Services, Altadena, USA

Stephanie N. Marcy

Bradley O. Hudson, Stephanie H. Chan

11 LifeStance Health California, Encinitas, USA

Robin Weersing

12 SDSU-UC San Diego Joint Doctoral Program in Clinical Psychology, San Diego State University, San Diego, USA

Associated Data

Full public access to the entire de-identified, participant-level dataset and statistical code will be made available 1 year after the report of primary outcomes is published.

Treatment of a child who has an anxiety disorder usually begins with the question of which treatment to start first, medication or psychotherapy. Both have strong empirical support, but few studies have compared their effectiveness head-to-head, and none has investigated what to do if the treatment tried first isn’t working well—whether to optimize the treatment already begun or to add the other treatment.

This is a single-blind Sequential Multiple Assignment Randomized Trial (SMART) of 24 weeks' duration with two levels of randomization, one in each of two 12-week stages. In Stage 1, children will be randomized to fluoxetine or Coping Cat Cognitive Behavioral Therapy (CBT). In Stage 2, remitters will continue maintenance-level therapy with the single-modality treatment received in Stage 1. Non-remitters during the first 12 weeks of treatment will be randomized to either [1] optimization of their Stage 1 treatment, or [2] optimization of Stage 1 treatment and addition of the other intervention. After the 24-week trial, we will follow participants during open, naturalistic treatment to assess the durability of study treatment effects. Patients 8–17 years of age who are diagnosed with an anxiety disorder will be recruited and treated within 9 large clinical sites throughout greater Los Angeles. They will be predominantly underserved ethnic minorities. The primary outcome measure will be the self-report score on the 41-item youth SCARED (Screen for Child Anxiety Related Disorders). An intent-to-treat analysis will compare youth randomized to fluoxetine first versus those randomized to CBT first (“Main Effect 1”). Then, among Stage 1 non-remitters, we will compare non-remitters randomized to optimization of their Stage 1 monotherapy versus non-remitters randomized to combination treatment (“Main Effect 2”). The interaction of these main effects will assess whether one of the 4 treatment sequences (CBT➔CBT; CBT➔med; med➔med; med➔CBT) in non-remitters is significantly better or worse than predicted from main effects alone.
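
To make the two-stage structure of this SMART concrete, the minimal simulation below reproduces only the allocation logic just described (Stage 1 randomization, maintenance for remitters, re-randomization of non-remitters) and tabulates the embedded treatment sequences among non-remitters. The remission probability, sample size, and sequence labels are invented for illustration and are not design assumptions of the trial.

```python
# Minimal simulation of the two-stage SMART allocation described above.
# The remission probability and sample size are invented; only the allocation
# logic mirrors the protocol text.
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
n = 300

stage1 = rng.choice(["med", "CBT"], size=n)   # Stage 1: fluoxetine (med) vs CBT
remitted = rng.random(n) < 0.45               # hypothetical 12-week remission rate

sequences = []
for s1, rem in zip(stage1, remitted):
    if rem:
        # Remitters continue maintenance-level therapy with their Stage 1 treatment.
        sequences.append(f"{s1} -> maintenance {s1}")
        continue
    # Non-remitters are re-randomized: optimize Stage 1 vs add the other modality.
    if rng.random() < 0.5:
        sequences.append(f"{s1} -> optimized {s1}")     # e.g. med -> med
    else:
        other = "CBT" if s1 == "med" else "med"
        sequences.append(f"{s1} -> {s1} + {other}")     # e.g. med -> med + CBT

for sequence, count in sorted(Counter(sequences).items()):
    print(f"{sequence:25s} {count}")
```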

Findings from this SMART study will identify treatment sequences that optimize outcomes in ethnically diverse pediatric patients from underserved low- and middle-income households who have anxiety disorders.

Trial registration

This protocol, version 1.0, was registered in ClinicalTrials.gov on February 17, 2021, with Identifier NCT04760275.

Anxiety disorders are highly prevalent in children, adolescents, and young adults. Approximately 12.3% of children meet formal diagnostic criteria for an anxiety disorder by age 12, and an additional 11% meet criteria by age 18, most commonly social anxiety, separation anxiety, and generalized anxiety disorders [ 1 ]. Most adult anxiety disorders begin in childhood or adolescence [ 2 – 4 ], suggesting that early intervention may reduce anxiety prevalence in adults, attenuate the frequent worsening of symptoms and lessen the associated impairment over time [ 1 , 5 , 6 ]. Early treatment of pediatric anxiety disorders may also mitigate the development or functional impact of common comorbid disorders [ 7 , 8 ], including depression [ 1 ], Attention-Deficit/Hyperactivity Disorder (ADHD) [ 9 ], oppositional defiant or conduct disorder [ 10 ], substance abuse [ 5 , 11 , 12 ], and suicide attempts [ 13 , 14 ]. More broadly, anxiety disorders confer considerable risk for lifetime impairments in overall quality of life, interpersonal relationships, physical health, finances, and academic and occupational functioning [ 1 , 6 ]. They are strongly associated with years of life lost and lived in disability, especially when beginning in childhood [ 15 ].

Systematic reviews and meta-analyses have consistently shown that pediatric anxiety disorders, like their adult counterparts, are chronic, recurrent, and unstable in their diagnostic classification over time, with new anxiety disorders appearing together with or replacing the initial, primary diagnosis [ 16 , 17 ], suggesting strongly that clinical trials should consider multiple anxiety disorders and aggregate symptom outcome measures. Systematic reviews and meta-analyses have also documented substantial therapeutic effects for both psychotherapy, particularly CBT (Cognitive Behavioral Therapy) [ 18 , 19 ], and medication, especially SSRIs (selective serotonin reuptake inhibitors) [ 20 – 22 ]. While response rates to acute treatment near 60% for both CBT [ 23 ] and SSRIs [ 21 , 22 ] may seem encouraging, these rates also mean that approximately 40% fail to respond. Remission rates are lower, 40–50% for both CBT [ 24 ] and SSRIs [ 21 , 22 ], and even poorer in real-world, community settings (20–40%) [ 25 – 27 ]. Long-term follow-up in pediatric studies is rare, but data suggest that outcomes are poor and relapses common, similar to what is observed in adult anxiety disorders, where relapse approaches 60% [ 16 , 28 ]. Even after gold-standard treatments in the CAMS study (Child/Adolescent Anxiety Multimodal Study) – the largest and most rigorous combined medication and CBT trial thus far – 30% were chronic non-responders and an additional 50% relapsed at least once in 4 years, despite receiving post-trial treatment [ 29 ]. Racial/ethnic minorities in the CAMS study had significantly lower remission rates in all treatment arms [ 30 ]. Long-term outcomes did not vary according to initial treatment with CBT, medication, or their combination. Predictors of poor acute treatment outcomes include more severe symptoms, more functional impairment [ 29 , 31 ], low socioeconomic status (SES) [ 32 ], and a primary diagnosis of social phobia [ 31 , 32 ]. More residual symptoms and functional impairment following acute treatment predict relapse, suggesting that treatment should aim to achieve remission from all anxiety disorders, and with as few residual symptoms of any kind of anxiety as possible [ 29 ]. These predictors of relapse have informed our requirements for, and definition of, clinical remission during this trial.

Gaps in evidence this study will fill

Although many studies have separately evaluated the efficacy and effectiveness of medication and CBT in the treatment of pediatric anxiety disorders, very few have compared the effectiveness of CBT and medication head-to-head [ 33 ] (the CAMS study did, but it was an efficacy study conducted in academic centers rather than in “real world” community clinics [ 10 ]). No studies have assessed whether it is preferable to begin with CBT and then add medication if needed, or begin with medication and then add CBT if needed. Our preliminary survey of clinicians, patients, and parents indicates that treatment of essentially every child with anxiety disorder begins with the vexing question of which modality to begin first. Considerations include: challenges in finding skilled CBT therapists; inconvenience of getting the child to weekly therapy; often-greater expense of CBT from more frequent co-pays; and resistance of the child (and sometimes a parent) to the structure, effort, and discomfort that CBT requires (e.g., in response to planned anxiety exposures). Modality decisions are also influenced by the wish to avoid medication in a child [ 34 ], with its under-studied, long-term side effects [ 35 ], and concern about relapse after discontinuation [ 20 , 36 , 37 ]; the perception that medication may be less emotionally supportive than psychotherapy; the stigma often associated with medication [ 38 ]; or the perception that medication is a “crutch” that instills physical and psychological dependency [ 39 ], unlike the skills that CBT aims to impart. Ethnic minorities often have a cultural aversion to medication, particularly for mental health problems, and especially in children [ 39 , 40 ], but they also tend to view CBT as less feasible for their lifestyle [ 40 ], thereby limiting their treatment adherence [ 41 ]. Complicating the selection of initial treatment is the absence of data indicating whether sequencing of treatment impacts treatment satisfaction, well-being, peer relationships, parent and family functioning, school functioning, or comorbid psychiatric symptoms [ 5 , 42 – 44 ]. Finally, few studies have adequately tested the effectiveness of either treatment modality in the complex and underserved families who are likely disproportionately affected by anxiety disorders. Empirical data for the effectiveness of anxiety treatments in underserved families are particularly sparse for patient-centered outcomes and for the head-to-head comparison of CBT to medication. In the CAMS trial comparing CBT to medication, for example, the patient sample was 78% Caucasian and 75% middle or upper class [ 10 ]. A recent AHRQ (Agency for Healthcare Research and Quality) review identified inadequate ethnic and racial diversity as a prominent limitation of prior treatment studies of pediatric anxiety disorders and stressed that characterizing treatment response in underserved ethnic minorities, and identifying modifiers of treatment response, constitute the most pressing needs for future research [ 20 ].

Consensus panels formulating treatment guidelines for pediatric anxiety disorders have relied on clinical experience, theory, and findings from observational and controlled trials of CBT or medication conducted in isolation for other purposes. They generally agree that CBT and either SSRI or SNRI pharmacotherapies have the strongest evidence for efficacy [ 20 ]. They often recommend CBT as the first-line treatment, but without evidence that it yields better patient outcomes than beginning with medication [ 44 – 46 ]. Many guidelines recommend combining psychological and medication therapy after an unsatisfactory initial treatment response, but without evidence that combined therapy is better than continuing or intensifying the initial treatment. Very few studies have compared combined therapy to either treatment alone [ 47 – 49 ]. In the CAMS study, combined CBT + sertraline improved clinical response more than either modality alone [ 49 ], but primarily in those with severe anxiety [ 50 ], and not in long-term follow-up [ 29 ].

To address these gaps in evidence for development of treatment guidelines for pediatric anxiety disorders, we will conduct a Sequential Multiple Assignment Randomized Trial (SMART) [ 51 , 52 ] that will develop and test an Adaptive Intervention – a set of decision rules for adapting treatment according to a patient’s individual clinical response – for the initial selection of treatment modalities (CBT or SSRI) and their subsequent sequencing, combination, and maintenance in treating pediatric anxiety. These rules will be based on individual patient characteristics – particularly the patient’s response to initial and subsequent treatment, but also demographic and clinical characteristics – that optimize treatment response. The SMART design involves randomization of participants at least once, and often more than once, sequentially over the course of the trial. Randomization occurs at critical decision points depending on the patient’s clinical response and is used to provide valid causal inference about the effects of differing interventions that will inform treatment decisions [ 53 , 54 ]. This design provides a rigorous framework in which to develop evidence-based treatment algorithms. SMART designs make no a priori assumptions about the existence or form of delayed treatment effects [ 51 , 52 ], making them ideal for developing an adaptive intervention that optimizes outcomes based on patient treatment history [ 51 – 54 ]. The SMART design described here – the first of its kind of which we are aware – will focus on identifying treatment sequences that optimize patient-centered outcomes for pediatric anxiety disorders in ethnically diverse patients from underserved low- and middle-income households.

Conceptual framework of the SMART design

The conceptual framework for this study is founded on the observation that patients differ in their responses to treatment, presumably due to individual variability in the psychological, biological, cultural, and psycho-physiological factors that shape adaptive and maladaptive anxiety responses to life experiences [ 55 ]. Ample evidence documents dysfunction across the neural, cognitive, affective, and behavioral components of anxiety disorders. Less is known about the most effective ways to modify that dysfunction through treatment. Evidence suggests that CBT or medication can modify the cascade of responses to real or perceived threat – CBT by altering behavioral avoidance and information processing related to threat detection and coping [ 56 , 57 ], and medication by altering neurotransmitter levels that affect the function and structure of brain circuits [ 58 – 60 ]. By experimentally controlling the sequencing of treatment modalities, we aim to advance knowledge about the relative merits of targeting the cognitive and behavioral components of this cascade with CBT, or its biological components more directly with medication.

Individual differences in treatment response likely derive from patient-specific biological, sociological, psychological, historical, and contextual determinants of responses to threat [ 61 ]. For a treatment to be most effective, it should be tailored to patient characteristics that influence those determinants. Tailoring should be adapted dynamically, repeatedly over time according to that patient’s individual response to treatment, to produce an adaptive intervention that informs how and when to intervene, and how and when to modify the intervention to optimize long-term outcomes. An adaptive intervention has 4 components: (1) decision stages, each beginning with a decision concerning treatment; each decision stage incorporates (2) treatment options, (3) tailoring variables, and (4) an if/then decision rule. The decision stages, treatment options, tailoring variables, and implementation of decision rules are all part of the intervention itself [ 62 ]. The overarching aim and function of a SMART design is to construct a high-quality adaptive intervention based on empirical data. A SMART design builds an adaptive intervention that dynamically tailors treatment to individual patient characteristics and evolving therapeutic response to optimize patient outcomes, a process that has been likened conceptually and operationally to the development of a feedback control system for dynamic systems that regulates and optimizes a time-varying study outcome variable [ 63 ]. A SMART design offers a considerable practical advantage over more traditional trial designs, in that it addresses simultaneously several research questions and study hypotheses relevant to each decision stage, as well as their interaction representing the sequencing of interventions [ 64 ].
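
To make the four components of an adaptive intervention concrete, the following is a minimal, purely illustrative Python sketch of a two-stage set of if/then decision rules. The function names, tailoring variables, and thresholds are hypothetical placeholders for illustration only; this study's actual treatment options and decision rules are specified later in the protocol.

```python
# Illustrative sketch of an adaptive intervention's four components:
# decision stages, treatment options, tailoring variables, and if/then decision rules.
# All names and thresholds are hypothetical, not this study's rules.

def decision_rule(stage: int, tailoring: dict) -> str:
    """Return the treatment option selected at a given decision stage."""
    if stage == 1:
        # Hypothetical stage-1 rule keyed to a baseline tailoring variable
        return "SSRI" if tailoring.get("baseline_severity", 0) >= 40 else "CBT"
    # Stage-2 rule keyed to intermediate response, the core SMART tailoring variable
    if tailoring.get("remitted_midway", False):
        return "maintain current treatment"
    return "optimize current treatment + add other modality"

print(decision_rule(1, {"baseline_severity": 45}))   # stage-1 option chosen from baseline data
print(decision_rule(2, {"remitted_midway": False}))  # stage-2 option chosen from response status
```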

Patient-centeredness within the SMART design

Adaptive interventions resemble clinical practice, in that different interventions are assigned to different individuals and within individuals over time, with the intervention(s) ideally varying in response to patient needs and patient response [ 62 ]. Clinicians, however, often lack the empirical evidence needed to guide such interventions. In pediatric anxiety treatment, limited evidence exists to inform the decision about which treatment to begin with based on a particular patient’s characteristics and context, and what the clinician should do if the chosen treatment is not working. This absence of empirical data is a challenge that the multistage randomization of a SMART design may address in part, informing the development of optimal adaptive interventions and providing an empirical basis for clinical decision-making. The relevance to pediatric anxiety is clear: though substantial evidence supports the effectiveness of both SSRIs and CBT in treating pediatric anxiety, neither treatment works well for all youth, and much more evidence is needed to guide clinician judgments regarding which treatment to use, with which patients, and at which points in treatment. The present study will address this gap, providing evidence to inform treatment-personalizing decisions that are required of virtually every clinician treating pediatric anxiety. Findings on the association of baseline individual tailoring and environmental variables with later clinical outcomes can inform the use of individual patient information for initial treatment assignment. Findings of the sequential randomization aspect of the SMART design during treatment can inform clinician decisions about treatment sequencing based on the individual patient’s response to the intervention currently in use, further supporting personalized treatment [ 53 , 54 ]. Empirical evidence contributing to individualized treatment regimens based on tailoring variables, treatment history, and current response status will have the added advantage of supporting shared decision-making by the child and parents with an empirically informed clinician [ 65 , 66 ], ideally improving patient satisfaction, treatment motivation [ 67 ], and clinical outcomes [ 68 , 69 ]. This study will also explore the effects of treatment on long-term patient and family-centered outcomes, such as symptom recurrence, subjective distress and well-being, social relationships, family and school functioning, and potential adverse treatment effects [ 20 , 70 ]. The findings of this study will therefore inform decisions among patients, families, clinicians, and healthcare leaders about improvements that can be expected in the short and long term when treating pediatric anxiety disorders with CBT, medication, or their combination in real-world settings, providing information needed when making critically important treatment choices for individual patients, with a much-needed emphasis on children from underserved and minority populations.

Potential for study findings to be adopted into clinical practice and to improve delivery of care

Because adaptive approaches approximate intervention sequences used in clinical practice, they can be used to develop, test, and refine algorithms for clinical decision-making and inform the development and validation of practice guidelines [ 64 ]. The adaptive intervention that a SMART design produces can be incorporated into clinical practice more naturally and seamlessly than the findings of a fixed-intervention study [ 53 , 54 , 62 ].

Hypothesized causal pathways and their theoretical basis in the treatment of pediatric anxiety disorders

Our study population will be predominantly economically disadvantaged, racial and ethnic minority youth, 8–17 years of age. The causal pathways from treatment assignment to clinical outcomes involve factors that are both common and specific to our two interventions ( Coping Cat CBT and SSRI therapy) ( Fig.  1 ) . Coping Cat CBT, like most psychological or behavioral therapies (henceforth “psychotherapies”), is a complex intervention comprising many elements. Some of those elements are present, to varying degrees, in most or all psychotherapies or even any clinical encounter, including an encounter when prescribing medicine [ 71 , 72 ]. Some elements are considered unique and specific to CBT [ 73 – 75 ]. We propose to study both the common elements of the clinical encounter as well as elements specific to CBT in this SMART study. Of the many potential factors common to all clinical encounters, two are explicitly components of Coping Cat CBT, “alliance-building” and “reward”, and they are at least implicitly components of the psychopharmacological treatments as well ( Table  1 ) . Two factors are generally considered unique to CBT, “exposure” and “coping efficacy”, or skill building [ 77 , 78 ]. All four are thought to be important in improving patient outcomes: they are considered to be the “active ingredients” of the complex intervention, and to evolve over the course of the therapy. The extent to which these elements or functions are present in the therapy and increase over time is the extent to which patients are thought to improve. Therefore, we will measure these elements over time in this SMART study and assess their influence on patient outcomes.

Fig. 1 Postulated Causal Pathways. CBT-Specific Factors include exposure and development of coping skills. Common factors include the strength of the therapeutic alliance, clinician relatedness, and rewards associated with treatment. Medication-Specific Factors include direct effects of fluoxetine on brain circuits. Functional Outcomes include improved social relationships, better family and academic functioning, and improved self-esteem, which also function as natural reinforcers, or rewards. Vertical arrows from the Contextualizing Factors represent moderation of the association (shown as an oblique arrow) of a treatment modality with the indicated set of mediators. These contextual factors include Illness Factors such as comorbid illnesses, baseline anxiety severity, and family history of anxiety, as well as Cultural Factors, which include ethnicity and SES.

Table 1 Core Functions and Forms [ 76 ] for Coping Cat

Common factors across both study interventions [ 71 , 72 ]

These include the quality of the therapeutic alliance, clinician relatedness (especially the clinician’s degree of empathy while working with patients in a collaborative and developmentally appropriate way [ 79 ]), and evaluation of response and delivery of rewards associated with improved functioning. Therapeutic alliance provides a coherent narrative and conceptual framework through which to understand and address the patient’s suffering within a mutually shared cultural context, thereby providing an explanation for how the patient developed impairing anxiety and what can be done about it. It provides expectancy and hope for change and fosters the fortitude to confront the problems through treatment [ 80 – 83 ]. Prior studies have shown that a better therapeutic alliance precedes and predicts better outcomes, and better outcomes precede and predict an improved therapeutic alliance [ 71 , 73 , 84 ]. Therapeutic alliance will be measured using the Outcome Rating Scale [ 85 ] (Table 2). Clinician relatedness includes the personal relationship between therapist and patient, and the extent to which each is genuine with the other. It provides the patient with a connection to a caring and empathic person, which is assumed to be therapeutic in itself, especially for patients who have impoverished social relationships [ 113 – 115 ]. It will be measured using the Session Rating Scale [ 85 , 109 ] (Table 2). Rewards for improved anxiety outcomes include natural reinforcers, such as improved social relationships, better family and academic functioning, and the improved self-esteem they bring. These rewards in turn strengthen the therapeutic alliance and patient engagement in practices that presumably produced the therapeutic change. They will be measured using patient ratings of life satisfaction [ 92 ] and well-being [ 91 ], social functioning [ 92 ], and school functioning (Table 2).

Table 2 Patient-Centered Outcomes

C = Child-report; P = Parent-report; Cl = Clinician-report

B = Baseline; 6 = Week 6; 12 = Week 12; 18 = Week 18; 24 = Week 24; Q = Quarterly Follow-up; M = Monthly, every 4 weeks during the 24-week trial

Child Total Time: 93 min

Parent Total Time: 78 min

Clinician Total Time: 10 min

Factors specific to Coping Cat CBT

These include tolerance of exposure to anxiety-provoking stimuli and development of coping efficacy [ 116 ] (Table 1). Tolerance of exposure, with inhibition of an associated avoidance response, is considered the key element in the cognitive behavioral therapy of anxiety disorders [ 77 , 78 ]. Exposure is thought to combat fear and avoidance via one or both of two learning processes, habituation and extinction learning (also termed inhibitory learning). Habituation is a form of non-associative learning in which repeated exposure to the feared stimulus or situation produces a transitory weakening of fear responses, with a higher frequency of exposure producing greater habituation, and a greater duration between exposures producing a greater return to pre-exposure levels of fear response [ 117 ]. Habituation also produces a steady decline in neural response to the feared stimulus. Extinction learning refers to the reduction or extinction of the fear response following repeated exposure to specific fear-eliciting situations in the absence of the aversive consequences with which it was previously paired. This exposure generates new learning of safety-based associations that inhibit former fear response associations to the feared stimulus [ 77 , 78 , 118 ], thereby either extinguishing the fear response or enabling a level of fear tolerance that reduces anxious distress. Extinction activates neural pathways that inhibit or modulate emotional responses and avoidance of the feared stimulus [ 57 ].

Exposure also promotes the development of cognitive, emotion regulation, and behavioral skills for coping more effectively with fear, termed coping efficacy. Coping efficacy diminishes the conditioned fear response and provides the opportunity for new extinction learning, with an attendant inhibition of maladaptive avoidance. Practicing these skills in different fear-inducing contexts supports generalization of inhibitory learning and stronger extinction of fear responses [ 78 ]. Of note, both time spent in exposure tasks and the level of difficulty of the exposures tolerated predicted better outcomes with Coping Cat within the CAMS sample [ 119 ]. These findings directly informed our decision to have additional exposure practice and mastery be the focus of our Phase 2 CBT optimization protocol. In addition, efficacy measures in the CAMS trial outperformed specific measures of cognitive change in mediational models of Coping Cat effects, suggesting that development of coping efficacy may mediate CBT outcomes [ 120 ].

After each CBT session, we will obtain a therapist-rated measure of 3 key facets of exposure practice (quantity, difficulty level, and mastery of exposure tasks) [ 119 ], which we will use to craft an index of tolerance to exposure – a CBT-specific factor affecting patient outcomes. As a complement to this in-session measure of tolerance of exposure, we will also obtain at baseline and then every fourth session the parent version of the 8-item Child Anxiety Avoidance Scales [ 110 ], a measure of the evolving real-world CBT skill set outside of session that assesses the reduction of anxious avoidance when presented with threatening stimuli. Finally, we will use our measure of Self-efficacy from the NIH toolbox measure of Life-Satisfaction as a proxy for coping efficacy to test this CBT-specific pathway. We expect that measures of each of the common factors will independently and positively associate with outcomes in our study for both Coping Cat and medication therapies [ 72 , 82 , 121 ], and we expect that the greater therapeutic alliance anticipated with CBT compared with medication therapy will partially mediate differences in clinical outcomes across the two treatment modalities. We also expect that measures of the CBT-specific factors – exposure tolerance and coping efficacy – will positively associate with clinical outcomes in response to Coping Cat CBT.

Factors specific to fluoxetine administration

We do not yet know precisely how medications, including fluoxetine and other SSRIs, improve symptoms in pediatric anxiety disorders. It is highly probable, however, that those mechanisms are initiated by the effects that medication has on altering neurotransmitter levels, which in turn affect the function and structure of brain circuits [ 58 – 60 ]. SSRIs inhibit the presynaptic reuptake of serotonin after its release into the synaptic cleft, which in turn, over time, desensitizes 5-HT1A serotonin receptors on or near the cell body of the presynaptic neuron [ 122 ]. Desensitization of these presynaptic serotonin receptors then increases impulse flow in the presynaptic neuron and ultimately increases serotonin concentrations in the synaptic cleft. Serotonin 5-HT1B receptors on the presynaptic terminal then also desensitize, further increasing presynaptic transmission and increasing the serotonin available to stimulate postsynaptic neurons. Precisely how an increase in postsynaptic serotonergic signaling improves anxiety symptoms is unknown, but presumably therapeutic effects pertain to increased serotonergic tone in the multiple and widely distributed neural systems that serotonin influences, including arousal pathways from the midbrain raphe to prefrontal cortex, or from the midbrain to basal ganglia, mesolimbic cortex and hippocampus, or hypothalamus [ 122 ]. Most brain imaging studies of the effects of SSRIs or SNRIs on brain structure and function have been poorly controlled and naturalistic. The few imaging studies that have been combined with an RCT design to provide stronger causal inference of the effects of these medications have shown that they normalize pre-existing abnormalities in brain structure [ 59 ], function [ 60 ], and metabolism [ 123 ]. We will not be able to measure these features in all study participants, though we plan to offer MRI scanning immediately before starting treatment, as well as at weeks 12 and 24, to any willing participants within a PCORI- and IRB-approved add-on study to our SMART design. We anticipate that fluoxetine will normalize pre-existing differences from healthy controls in measures of brain structure, function, and metabolism in youth with anxiety disorders, as found in the prior RCT studies for medication effects [ 59 , 60 , 123 ], and that CBT-induced changes in brain structure, function, and metabolism in brain regions that subserve extinction and inhibitory learning (dorsal frontal, basal ganglia, and ventromedial prefrontal cortex) will uniquely associate positively with improvements in patient outcomes [ 57 ].

Contextualizing factors

Also termed patient “tailoring variables”, these factors modify treatment response. Candidates for tailoring variables include baseline individual, family, and context characteristics, some of which relate directly to child anxiety and its clinical portrait: ethnicity [ 30 ], SES [ 32 ], past treatment response [ 29 , 31 ], and family history of anxiety; and variables that are potentially modifiable, such as overall symptom severity [ 29 , 31 ], functional impairment [ 29 , 31 ], the nature and severity of comorbid illnesses, treatment fidelity and adherence, and treatment setting (community or university; primary pediatric or specialty mental health clinic).

We theorize that demographic characteristics such as ethnicity [ 30 ] and SES [ 32 ] will function as cultural factors that will influence the nonspecific factors of therapeutic alliance, clinician relatedness, and treatment adherence. Data from the CAMS trial support these hypotheses. In CAMS, African-American youths attended fewer CBT and medication management sessions, and they were rated by therapists as less involved and compliant with treatment and, possibly as a consequence, as showing a lower level of mastery of CBT concepts [ 41 ]. Controlling for these process factors and SES eliminated racial differences in outcome. Similarly, patient nonadherence (poor attendance, low homework completion, poor compliance in session) was associated with the number of parents present in the home (with the best outcomes for two-parent families), although indices of nonadherence varied in their power to predict clinical outcomes [ 124 ]. In contrast, we expect that the modifying effects of illness-related factors, such as depression and other comorbid illnesses, baseline anxiety severity, and family history of anxiety, will operate through their impact on the Coping Cat-specific factors, exposure and development of coping skills [ 29 , 31 , 119 ]. Table 2 specifies how we will measure each of these contextualizing constructs.
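
As one concrete, purely hypothetical illustration of how a candidate tailoring variable could be examined as a treatment moderator, the sketch below simulates data and fits an ordinary regression with a treatment-by-SES interaction term. The variable names, simulated effect sizes, and model are assumptions made for illustration; they do not represent this study's prespecified analysis plan.

```python
# Hypothetical moderation (tailoring-variable) sketch: does SES modify the effect of
# initial treatment on week-24 anxiety? All data below are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),   # 0 = CBT first, 1 = medication first (illustrative coding)
    "ses": rng.normal(0.0, 1.0, n),       # standardized socioeconomic status (illustrative)
})
# Simulated outcome with a built-in treatment-by-SES interaction
df["anxiety_wk24"] = (20 - 2 * df["treatment"] + 1.5 * df["ses"]
                      - 1.0 * df["treatment"] * df["ses"] + rng.normal(0, 3, n))

model = smf.ols("anxiety_wk24 ~ treatment * ses", data=df).fit()
print(model.summary().tables[1])  # the treatment:ses coefficient estimates the moderation effect
```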

Specific aims

Fig. 2 Schematic of Study Design

(a) We will assess whether beginning with Cognitive Behavioral Therapy (CBT) or fluoxetine medication is more effective in improving youth-rated anxiety symptoms over the 24-week intervention (“Main Effect 1”)

(b) If the initial intervention fails to induce clinical remission by week 12, we will assess whether optimizing the initial treatment modality alone, or adding the other modality while optimizing the first, yields better symptom improvement by week 24 (“Main Effect 2”)

(c) We will assess whether one sequence of treatment modalities – i.e., CBT➔CBT; CBT➔CBT + med; med➔med; med➔med + CBT – is significantly better or worse than predicted from the two main effects.

(d) We will assess the stability of treatment response for ≥12 months following completion of the 24-week trial.

  • We will explore the moderating effects of patient characteristics on Main Effects 1 and 2, and on their interaction, to identify tailoring variables that will support personalized interventions for selection of initial and subsequent sequencing of treatment modalities for pediatric anxiety disorders.
  • We will explore the differential treatment effects of our interventions on other patient-centered outcomes, including: well-being; life-satisfaction; self-efficacy; family, social, and school functioning; sleep; emotion-regulation; coping with change; comorbid psychiatric symptoms; and adverse treatment effects.

Hypotheses for Aim 1

(1a) Anxiety symptoms will improve more in children who begin treatment with fluoxetine than in those who begin with CBT. We base this hypothesis on several considerations. First, medication in the CAMS study exhibited a small, non-significant advantage over CBT on ratings from the Pediatric Anxiety Rating Scale [ 49 ]. Ethnic minorities also demonstrated poorer adherence to CBT in the CAMS trial [ 41 , 124 ], and these youth will be strongly represented in our study population. In addition, unlike CAMS and most prior treatment studies for pediatric anxiety, we are not excluding comorbid depression, which can exacerbate anxiety [ 125 ]. Because fluoxetine is helpful in treating both pediatric anxiety and depression, we anticipate added benefits to fluoxetine in treating anxiety symptoms via its effects on comorbid depression; (1b) Based on the greater response to combination therapy than to monotherapies in the CAMS study [ 49 ], we hypothesize that in children for whom initial treatment fails to produce remission by week 12, anxiety symptoms will improve more when the alternative treatment intervention is added to optimization of the initial intervention, compared to optimizing the initial intervention alone.

Study design

This will be a single-blind [ 126 , 127 ] SMART design [ 51 , 52 ] of 24-week treatment duration that will employ two sequential levels of randomization, one in each of two 12-week stages of the study (Fig. 2).

404 children ages 8–17 who have an anxiety disorder will be randomized 1:1 to receive 12 weeks of either the medication fluoxetine in upward-titrated dosages or weekly CBT implemented using the Coping Cat CBT model [ 128 ].

All participants who do not remit in Stage 1 will be randomized 1:1 to either (1) optimization of their Stage 1 treatment, or (2) optimization of their Stage 1 treatment and addition of the other treatment modality (Fig. 2). Study assessments will be obtained at baseline, week 12, and week 24. A small subset of measures (SCARED-41, and the Pediatric Side Effect Questionnaire) will also be obtained at weeks 6 and 18. Participants who remit during the first 12 weeks of treatment will continue maintenance-level therapy with the single-modality treatment received in Stage 1. If a participant who achieved clinical remission during the first 12 weeks of treatment relapses at the week 18 study assessment (defined as having a SCARED total score of ≥25), the participant will be referred to the treating clinician to restart or slightly intensify the previous treatment assigned in Stage 1 of the study. Following conclusion of the 24-week trial, we will follow and obtain all the same study assessments in all participants quarterly for at least 12 months [ 44 , 126 , 127 ].

Our criteria for remission during the trial will include (a) a youth SCARED-41 score less than diagnostic threshold for any single anxiety disorder, together with (b) a total youth SCARED-41 score < 10, and (c) a score of ≤8 on the CAIS (Child Anxiety Impact Scale). The CAIS score predicted remission in the CAMS trial [ 87 , 88 ], and the CAIS was sensitive to change over an 8-week RCT [ 89 ]. These remission criteria are intended to represent few residual anxiety symptoms together with minimal functional impairment. The stringency of these remission criteria is based on the reviewed prior studies indicating the importance of achieving diagnostic remission, with few residual symptoms and minimal functional impairment, in optimizing long-term clinical outcomes. All participants who do not meet these remission criteria will be randomized (1:1) for Stage 2 treatment, either to (1) optimization of their Stage 1 treatment, or (2) optimization of their Stage 1 treatment and addition of the other treatment modality. Our use of patient-report measures to define remission at Week 12 will allow us to quickly evaluate clinical status and move to either maintenance treatment or second-stage randomization with minimum disruption in care.
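
The week-12 decision rule described above can be summarized, in simplified form, by the following Python sketch. The per-disorder SCARED-41 subscale cutoffs are passed in as inputs because they are not reproduced here, and all function and variable names are illustrative rather than part of the protocol.

```python
# Simplified sketch of the Week-12 remission rule and second-stage assignment described above.
import random

def remitted(scared_total: float, scared_subscales: dict, subscale_cutoffs: dict, cais: float) -> bool:
    """Remission: all SCARED-41 subscales below diagnostic threshold, total < 10, and CAIS <= 8."""
    below_all_thresholds = all(scared_subscales[k] < subscale_cutoffs[k] for k in subscale_cutoffs)
    return below_all_thresholds and scared_total < 10 and cais <= 8

def stage2_assignment(stage1_treatment: str, is_remitter: bool, rng: random.Random) -> str:
    """Remitters continue maintenance; non-remitters are re-randomized 1:1 to the two Stage 2 options."""
    if is_remitter:
        return f"maintenance {stage1_treatment}"
    if rng.random() < 0.5:
        return f"optimize {stage1_treatment}"
    other = "CBT" if stage1_treatment == "fluoxetine" else "fluoxetine"
    return f"optimize {stage1_treatment} + add {other}"

rng = random.Random(42)
is_rem = remitted(scared_total=7, scared_subscales={"GAD": 4}, subscale_cutoffs={"GAD": 9}, cais=6)
print(stage2_assignment("fluoxetine", is_rem, rng))
```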

The Stage 2 interventions mirror current best clinical practice in several ways. First, no viable treatment option beyond CBT, SSRI, or their combination exists for partial or non-responders. Second, Coping Cat often requires more than the manualized weeks of treatment, particularly in lower income and minority populations, who may face additional challenges with regard to missed appointments and completion of homework assignments [ 40 , 41 ]. Coping Cat also has 2 distinct phases of implementation, with phase 1 comprising psychoeducation, coping skills development, and initial introduction to exposure, and phase 2 intensifying treatment through more challenging and repeated exposure activities, which map well onto our 2 SMART stages. Third, optimization of medication dosing often is not complete within the first 12 weeks, particularly for younger participants, and the effects of any given dose may not fully manifest for 12 weeks or more [ 21 , 22 , 129 ]; thus, the effects of Stage 1 dose increases may not manifest until Stage 2. As SMART designs make no a priori assumptions about the existence or form of these delayed treatment effects [ 51 , 52 ], they are ideal for developing an adaptive intervention that optimizes patient outcomes based on treatment in Stage 1 [ 51 – 54 ].

Comparators

Our 2-stage randomization crosses two Stage 1 interventions with two Stage 2 interventions, creating a factorial structure ( Fig.  3 ) .

Fig. 3 Factorial Structure

Testing Main Effect 1 compares the effectiveness of CBT with that of fluoxetine medication in improving patient-centered outcomes over the 24-week trial. It answers the question that all patients, parents, and clinicians ask when beginning treatment for an anxiety disorder: “Is it better to begin with CBT or medication?”

Testing Main Effect 2 compares 2 interventions in their effects on patient-centered outcomes by week 24 in patients for whom the Stage 1 intervention fails to induce clinical remission by week 12: (a) optimizing the initial treatment modality alone; and (b) adding the other modality to optimization of the first. It answers the question that arises soon thereafter in the treatment of most patients: “If clinical response to the initial treatment is less than ideal, is it better to continue and optimize that initial treatment, or to add the other treatment modality to the first?”

The interaction effect assesses whether one of the 4 treatment sequences – CBT➔optimized CBT; CBT➔optimized CBT + medication; medication➔optimized medication; medication➔optimized medication + CBT – is significantly better or worse than predicted from the two main effects alone. It should be noted that these interaction effects are not the same interactions that might be addressed in a classical factorial trial design because the treatments and groups are not concurrent, nor are the same conditions present for combining groups [ 130 ]. Nevertheless, testing this effect will address another question that arises during clinical treatment: “Which sequence of treatments will produce the best response – fully optimized CBT alone; CBT with the addition of medication; fully optimized medication alone; or optimized medication with the addition of CBT?” It also answers the question of whether the effects of combined treatment are additive or multiplicative relative to the effects of monotherapy.
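
To illustrate numerically what the two main effects and their interaction represent, the following toy sketch codes the four sequences as a 2 × 2 design and solves for the corresponding contrasts from hypothetical cell means. It deliberately ignores remitters, weighting, and covariates, all of which the actual SMART analysis must handle, and every number in it is invented for illustration.

```python
# Toy illustration of the factorial contrasts among non-remitters only.
import numpy as np

# Hypothetical mean week-24 anxiety scores for the four sequences (lower = better).
means = {("CBT", "optimize"): 18.0, ("CBT", "add_med"): 14.0,
         ("med", "optimize"): 16.0, ("med", "add_cbt"): 13.0}

y = np.array([means[("CBT", "optimize")], means[("CBT", "add_med")],
              means[("med", "optimize")], means[("med", "add_cbt")]])
stage1 = np.array([0, 0, 1, 1])   # 0 = began with CBT, 1 = began with medication
stage2 = np.array([0, 1, 0, 1])   # 0 = optimize only, 1 = optimize + add other modality

X = np.column_stack([np.ones(4), stage1, stage2, stage1 * stage2])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(dict(zip(["intercept", "main_effect_1", "main_effect_2", "interaction"], beta.round(2))))
```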

Study population and setting

Los Angeles County (LAC) is one of the most racially, ethnically, and socioeconomically diverse regions in the world [ 131 ]. Our study population will be recruited from our vast LAC clinical network serving predominantly underserved, impoverished, and ethnic minority families. Locating these recruitment and treatment sites in LAC provides great efficiency in study infrastructure while simultaneously yielding a remarkable diversity of settings including primary pediatric care (CHLA Care Network, AltaMed sites), a private health care plan (Kaiser site), a free-standing children’s hospital (CHLA site), and community mental health clinics serving primarily youth with Medi-Cal insurance, California’s Medicaid program (Hathaway-Sycamore, UCEDD, LAC+USC, Children’s Bureau of Southern California sites), or commercial insurance (LifeStance Health California site). The total number of patients seen in our age range across these 9 sites is > 300,761 annually (601,522 over 2 years of recruitment), and 2.9% of them already carry a diagnosis of an anxiety disorder in their electronic medical record. We would need to recruit only 0.49% of all patients already diagnosed with an anxiety disorder across these sites to meet our target total of 404. We expect several times more than this number of patients to be identified through systematic screening efforts at each site, further enhancing the feasibility of recruiting our target number.

All clinical care components of the study, including CBT and medication management, will be paid for through the standard mechanisms at each clinic. At all participating sites, with the exception of Kaiser, LifeStance Health, and CHLA’s care network facilities, Medi-Cal pays for services without patient co-pays. Even at these excepted facilities, ~20% of patients have Medi-Cal coverage. Those with private insurance have small co-pays that families can usually afford.

Study participants

We will aim for a sample that represents best practice in high quality clinical care by including most comorbidities. We will not exclude based on past diagnoses or treatment, in part because current diagnosis and treatment will be the focus of this study, and in part because it will not be possible to ascertain the validity of past diagnoses or quality of past treatments reported by patients or parents. Not excluding on these past variables will also enhance the generalizability of this study’s findings.

Inclusion criteria

  • Patient at one of our clinical study sites
  • Age 8–17 inclusive (this age range allows the same CBT intervention and outcome measures for all participants),
  • SCARED-5 screening score ≥ 3 (see “Screening” below) and score ≥ 25 on the SCARED-41,
  • A diagnosed anxiety disorder (generalized anxiety, separation anxiety, social anxiety, panic disorder) on the clinician-administered Schedule for Affective Disorders and Schizophrenia for School-Aged Children, Computerized version (K-SADS-COMP)
  • CAIS (Child Anxiety Impact Scale) > 8 (representing at least moderately severe illness).
  • We will include patients and at least one parent/caregiver who are fluent in either English or Spanish to ensure accurate assessment, as most standardized measures are normed only in those languages, and sites have limited capacity to provide translation services in other languages. Trial staff and clinicians will be fluent in both languages, and all selected outcome measures will have been previously validated in both languages. Based on comprehensive survey data, we estimate that only 0.3% of families at our sites will not have one parent/caregiver who speaks either English or Spanish.

Exclusion criteria

  • Patients currently receiving psychotherapy. Prior psychotherapy of any kind, including CBT, is not exclusionary.
  • Patients currently receiving an SSRI, SNRI, or benzodiazepine. Prior use of any of these medications is not exclusionary.
  • Patients with a severe neurological disorder or unstable medical condition, as determined by medical chart and medical history review by the site director and Principal Investigator
  • Females who are pregnant or sexually active but not using an effective method of birth control (potential adverse fetal effects of medication [ 132 ])
  • Patients who have taken monoamine oxidase inhibitors (MAOIs), pimozide, thioridazine, olanzapine, tricyclic antidepressants (TCAs), antipsychotics such as haloperidol and clozapine, or anticonvulsants such as phenytoin and carbamazepine within 2 weeks prior to starting the study, as these medications can interact adversely with fluoxetine or its metabolism
  • Clinically significant suicide risk on the Columbia-Suicide Severity Rating Scale, defined as an ideation severity score of 3 AND “access to crisis level support is unavailable”, OR 4 if “frequency, duration, and deterrent” all = 1 AND treatment in a specialty mental health clinic is not available, OR 4 if “frequency, duration, or deterrents” are > 1
  • To map onto the cognitive and socio-emotional demands of the CBT protocol, we will exclude youths who are likely to be functioning at a developmental level below the minimum age for the treatment manual (age 8): namely, youths who are placed outside of a general education classroom for > 50% of the school day, require a one-on-one classroom aide to maintain placement in a general education class, or are performing academically below the 3rd grade level in reading and language arts.
  • OCD, diagnosed using the clinician-administered KSADS-COMP.
  • PTSD, assessed using the Lang Child Trauma Screen [ 134 , 135 ]. To optimize the representativeness of our sample, we desire a very low false positive rate for a screener – i.e., we want a screener cutoff with high specificity for diagnosing PTSD. ROC curves for this screener [ 134 , 135 ] indicate that by requiring a score of ≥10 on the parent report for child reaction to past trauma, we will miss diagnosing approximately 22% of true PTSD cases, but we will not exclude any youth on the basis of a false-positive diagnosis. Therefore, we will exclude youth for whom (a) either the parent OR the child reports a history of trauma on this screener, AND (b) the parent reaction score is ≥10
  • Youth currently in foster care. We have extensively explored with the CHLA IRB and our clinical sites the procedures required to enroll these youth in research treatment studies. This includes permission of the court (both the patient’s court-appointed attorney and the judge who oversees the case), the Department of Children & Family Services, and biological parents (depending on the status of the case and who has medical rights), a process that requires institutional lawyers and that can take many months, requiring an extraordinary amount of time and effort on the part of the study team. Potential treatment issues that include the need to obtain additional court approvals when randomized to the medication arm will also likely create a delay in care and systematically affect the medication arms of our trial. Finally, participation requires the de facto permission of the foster parents to ensure the youth will attend clinical appointments. Although we would like to include youth in foster care, it simply is not feasible.

Non-randomized, treatment-as-usual (TAU) group

It is possible that medication and CBT will produce similar therapeutic responses in our patient population. We would like to be able to estimate the effect of each treatment and each treatment sequence in our SMART design against usual treatment in real-world settings. Therefore, we will include a non-randomized TAU group consisting of patients who satisfy all study eligibility criteria, pass the 3-stage screen, and decline participation in the randomized trial, but who consent to participate in a truncated study measurement sequence. In addition to the eligibility measures for these patients, we will acquire a small subset of measures (SCARED-41, CAIS, CBCL, treatment expectations, ADIS Supplemental Service Form) at the same time points as in the randomized group – at baseline and weeks 6, 12, 18, and 24, as well as the quarterly follow-up assessments thereafter.

We anticipate screening 80% of newly registered and current patients (ages 8–17) for anxiety disorders at our 9 study sites. We will employ a 3-stage screening process. In Stage 1 Screening, we will administer the 5-item version of the parent and child SCARED [ 86 , 136 , 137 ] to patients presenting to any of our outpatient services at all our performance sites. A cut-off score ≥ 3 discriminates “anxiety” from “non-anxiety” patients with 74% sensitivity and 73% specificity [ 86 ] and has demonstrated utility to screen youth at risk for anxiety across primary care settings [ 138 ]. We will regard a score of ≥3 on either the parent or child SCARED-5 as a positive screen. The screener will request permission from the parent to allow the clinic to share the parent contact information with our study team if the child screens positive for anxiety. In Stage 2 Screening, youth who screen positive on either the parent or child 5-item SCARED will complete the 41-item SCARED. Those who score ≥ 25 [ 139 , 140 ] on the total score for the parent or child SCARED will undergo further characterization for study eligibility and possible recruitment. In Stage 3 Screening, youth and their parents who pass Stage 2 screening will be administered several additional assessments to complete determination of study eligibility. These will include the clinician-administered KSADS-COMP (for diagnoses of anxiety disorders and OCD), Child Trauma Screen (for PTSD), Columbia-Suicide Severity Rating Scale, CAIS (for functional impairment), and a patient profile (for medical illnesses, treatment, and demographic information).
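
The three-stage screening cascade can be summarized in a short sketch using the cutoffs stated above. The function names and the Stage 3 instrument list are paraphrased for illustration and are not an operational screening tool.

```python
# Sketch of the three-stage screening cascade; thresholds are taken from the text above.

def passes_stage1(scared5_parent: int, scared5_child: int) -> bool:
    # Positive screen if either informant scores >= 3 on the 5-item SCARED
    return scared5_parent >= 3 or scared5_child >= 3

def passes_stage2(scared41_parent: int, scared41_child: int) -> bool:
    # Proceed to full eligibility work-up if either informant's SCARED-41 total is >= 25
    return scared41_parent >= 25 or scared41_child >= 25

def stage3_assessments() -> list:
    # Stage 3 completes eligibility determination with clinician-administered and functional measures
    return ["KSADS-COMP", "Child Trauma Screen", "C-SSRS", "CAIS", "patient profile"]

if passes_stage1(4, 2) and passes_stage2(27, 19):
    print("Proceed to Stage 3:", stage3_assessments())
```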

Recruitment

For patients satisfying all eligibility criteria, a study coordinator will contact the parent and patient, describe the study (including randomization requirements), address questions, and, if the parent and patient agree to participate, schedule a baseline assessment and initial clinical visit. Formal written consent and assent will be obtained at the baseline assessment, in person in a private area; when consenting in person is not an option, consent will be obtained online, over the telephone or video conference, or by mail.

We will attend carefully to elements of study design and execution that enhance recruitment. Participants are often concerned about being randomized to a less effective treatment [ 141 , 142 ], but here the 2 treatment arms will be in true clinical equipoise [ 143 ]. Patients will not face the possibility of being randomized to an ineffective or minimally effective treatment. This was important to the parents in our community engagement activities, who enthusiastically supported our proposed SMART design. Outcomes will be patient-centered [ 141 ], in many cases the study will be presented by their primary clinician [ 141 , 144 , 145 ], treatment expectations and preferences will be explored [ 146 ], informed consent will be presented simply, confidently [ 147 ], and in lay language [ 141 , 142 , 144 , 145 ], most clinic/study visits will be scheduled after school or on weekends [ 144 , 148 ], study demands will be kept to a minimum [ 141 , 142 ], and patients and parents will not be blind to treatment assignment [ 141 , 142 , 145 ]. A large majority of participants will have Medi-Cal insurance, which does not require co-payments, so participation should not entail a financial burden [ 149 ]. We will employ the principles of QuinteT Recruitment Intervention methods, which comprise in-depth, real-time investigation of recruitment obstacles, participation, attendance, and motivation, followed by implementation of tailored strategies to address identified challenges across all treatment arms as the trial proceeds [ 143 , 150 ]. We will record demographics and screening measures of all screened patients to compare those who do and do not choose to participate, to assess the representativeness of our study sample. We estimate an attrition of 20% over the 24-week RCT, yielding 323 treatment completers. If attrition proves higher, we will increase numbers recruited to ensure we achieve at least that number of completers. Our patient numbers, clinician staffing, and research coordinator burden will readily support this increased recruitment.
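
As a quick arithmetic check of the completer target, using only the figures stated above (404 enrolled, 20% anticipated attrition):

```python
enrolled = 404
attrition = 0.20
print(round(enrolled * (1 - attrition)))  # 323 expected treatment completers
```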

These efforts will begin with informed consent. Developing a trusting personal relationship with the family is vitally important for study adherence and ongoing participation, and the treatment provided during the study will help to build those relationships. We will strive to assign a specific research assistant to each patient for the duration of participation. If study appointments are missed, the family will be contacted and the visit promptly rescheduled. If we are unable to reach them during regular business hours, we will attempt contact on varying days and at varying hours. Cards and small presents will be provided on holidays and birthdays. We will collect comprehensive tracking and location information, including home and work addresses, email addresses, social media contacts, Medicaid and social security numbers of parents and participants, and phone numbers of relatives and others with close relationships with the family. We will conduct quick monthly check-ins with all participants (e.g., text message, social media direct message) to foster our relationship and ensure contact information is current; participants will receive $5 for each check-in completed. When not found at their last-known residence, staff will review change-of-address reports and contact next-of-kin or participant-identified friends. We will keep a record of clinics attended by our participants and, as necessary and with permission, use the clinics to maintain contact. We will compensate participants appropriately in the randomized trial for each research assessment, escalating compensation with time after the lengthier baseline visit to encourage continuing participation ($25 for baseline, $5 for week 6, $20 for week 12, $5 for week 18, $65 for week 24 assessments, $5 for each monthly check-in, and $20 for each of the quarterly follow-ups after the 24-week trial). Participants in the non-randomized (TAU) cohort will receive $5 for their assessments at each time point.

Community and stakeholder engagement

One of our core values is community engagement. We are committed to ensuring that patients, caregivers, clinicians, and other healthcare stakeholders play a meaningful role throughout the entire research process. We see our patients and other healthcare stakeholders as equitable partners—not simply as research subjects—and we plan to leverage their lived experiences and expertise to ensure this research is patient centered, relevant, and ultimately useful. Their values, priorities, and preferences are of paramount importance, and so they have been full partners in developing this study design and in selecting its measures.

Our community partners

This research will be conducted in Los Angeles County, which is more populous than 42 states and among the most racially/ethnically, linguistically, culturally, religiously, and socioeconomically diverse counties in the world. We have deep expertise in conducting culturally tailored community engagement that fully embraces this diversity and in conducting research in partnership with underserved, under-represented, and high-disparity populations. We define patient partners as including teenagers (≥ age 12 years); parents, guardians, caregivers; parent advocacy/support groups; and local, state, and national organizations and advocacy groups (e.g., National Alliance on Mental Illness, with a California chapter and local chapters throughout Los Angeles). We define stakeholder partners as including pediatricians and community primary care physicians; hospitals; health/mental health systems; teachers and educators; purchasers; and policy makers (including local health-related foundations).

Leveraging our Southern California Clinical and Translational Science Institute (SC CTSI)

We will leverage the resources, services, and infrastructure available through our SC CTSI and its Community Engagement (CE) Program. The CTSI was created to engage vulnerable communities in clinical and translational research; facilitate academic-community partnerships to ensure patient and community engagement in research; and develop, evaluate, and disseminate novel approaches to engaging diverse populations in research. The SC CTSI will offer a range of resources to this project, including access to community health workers who have deep roots and expertise working in Latino (promotoras), African American (Cultural Brokers), and Asian American communities.

Community engagement to inform study design and methods

We conducted extensive engagement activities with patients, parents, and caregivers, healthcare providers, and other stakeholder groups, to ensure our research incorporates their voice and reflects their interests in a meaningful way. These activities were conducted in both English and Spanish, in safe spaces (churches, schools, libraries, community health centers) located in East, South, and Central Los Angeles. A brief description of these engagement activities, and important feedback obtained during each, follows:

  • A. Community Listening Sessions. We hosted a well-attended Community Listening Session focused on pediatric anxiety, during which parents shared their experiences and discussed the challenges associated with raising a child or adolescent with an anxiety disorder. Parents told us how they would want to be involved in this study and offered considerable input as to how we might collectively disseminate research findings to ensure uptake by patients and other stakeholders.
  • B. Our Community/Our Health Los Angeles (OC/OH-LA). OC/OH-LA is a CTSI community engagement strategy to foster ongoing conversation between health researchers and community members. We hosted an OC/OH-LA session, which was attended by 45 parents, grandparents, and mental health community workers from South L.A. We provided an overview of pediatric anxiety and approaches to evidence-based treatment, followed by questions and discussion. Parents discussed the shame that youth with a mental health condition feel, the secrecy they keep, and how this shame/stigma creates a significant barrier to accessing treatment services. Parents also expressed their fears, misperceptions, and general lack of awareness of the safety, efficacy, and long-term effects of medications to treat anxiety. This feedback deeply informed our planned recruitment and retention efforts. Moreover, it suggests that our dissemination efforts targeted to consumers of care will need to address, head-on, both stigma and fear of medication if our goal to encourage uptake of the study findings is to be achieved.
  • C. Focus Groups with Parents and Teenagers to Identify Patient-Centered Outcomes. We conducted focus groups with parents/caregivers and, separately, teenagers living with anxiety, referred from mental health service organizations. Parents and their teenage children identified outcomes that concerned them most, as listed in “Patient-Centered Outcomes” (Table 2). They also echoed the OC/OH-LA session, calling for outreach efforts to address stigma and medication concerns. This feedback drove decisions about study outcomes and the scales used to assess them.
  • D. Research 101. Another SC CTSI community engagement activity is a Research 101 training manual and curriculum entitled, “Engaging Communities in Research, the Fundamentals of Research,” which presents the fundamentals of research, the importance of increasing diversity in research, and potential myths and barriers to participation. We offered Research 101 to a group of parents and caretakers. In general, we heard that parents have many fears and misperceptions about research and the intentions of researchers. Parents also expressed concern about randomization as potentially “unfair” if treatment is withheld. Following Research 101, parents demonstrated increased knowledge about research, study methods, and designs. Parents also appreciated and understood how, in our proposed study, treatment will not be withheld but rather randomization will occur to evaluate the effectiveness of two gold-standard treatment modalities; they were highly supportive of this design, suggesting that we offer Research 101 to our Parent Advisory Council members (described below) and an abbreviated version of Research 101 to parents of our participants before beginning the study, to help them understand and feel more comfortable with their child’s participation. We believe this will also improve participant retention. We will include information about research methods in our dissemination materials (e.g., newsletters, brief summary reports) throughout the study and at its conclusion to encourage uptake of study findings.
  • E. Engaging Mental Health Providers & Mental Health Support and Advocacy Groups. We met with mental health support and advocacy organizations to solicit suggestions about study design and our approach to engagement. Participants included the Director of the County’s Department of Mental Health and the President of NAMI California. Other organizations that we have begun to engage and will continue to work with include: the American Academy of Child and Adolescent Psychiatry; National Federation of Families for Children’s Mental Health; Mental Health America; Anxiety and Depression Association of America; United Advocates for Children and Families; California Mental Health Advocates for Children and Youth; Los Angeles Unified School District; Pacific Clinics; Young Minds Advocacy; and our study’s partner sites. We will also work with 2 youth-focused organizations: (1) International Children’s Advisory Network (iCAN), a worldwide consortium of children’s advisory groups known as Kids Impacting Disease through Science (KIDS); and (2) Young Person’s Advisory Groups (YPAGS), youth groups working to provide a voice for children and families in research. These organizations will help us develop our own KIDS and YPAGS for this project. Also, we will partner with WeRiseLA; this organization seeks to use art and community-building to transform the mental health care system and to foster mental health and wellbeing as a civil right.

Community engagement activities

Based on the feedback obtained from parents, parent advocates, and mental health providers, we will employ the following structure and activities before, during, and after our study:

Advisory committees

We will establish 2 advisory groups: 1) a Parent Leadership Council (PLC), and 2) a Youth Leadership Council (YLC). Both will provide ongoing feedback and assistance to: refine study questions and research methods; assist with outreach; develop recruitment and retention protocols; refine assessment procedures and measures; troubleshoot challenges; review performance; and interpret and disseminate findings. Each group will meet bimonthly and will be facilitated by the Community Engagement team. Both will report directly to the study PI; the PI and research team will attend these meetings to ensure members’ ongoing, meaningful input. Employing community-based participatory research principles relevant to partnership development, we will work continually to foster bi-directional collaborations and partnerships. We will ensure that our partners are included in all stages of the study, we will integrate knowledge and action that is beneficial to all partners, and we will promote a co-learning and an empowering process [15]. We chose to create 2 separate committees based on feedback received from parents and youth; each wanted a “safe space” in which to share their thoughts and opinions, without possible “judgment” from the other. We will provide each of these groups regular updates on study progress, report challenges, brainstorm solutions, share successes, discuss/interpret findings, and assist with dissemination. A standing meeting agenda will include capacity building (e.g., how to interpret a conceptual model, how to read a table of data) to create an environment of reciprocal relationships and co-learning; discussions about the best way to implement the study – how to engage, recruit, and retain patients/parents; how to message the study in a culturally relevant/tailored way to the community; culturally appropriate staffing for the project; ideas for co-learning opportunities; and ongoing review of recruitment techniques and retention methods. The Southern California Clinical and Translational Science Institute will offer an educational workshop, called Research 101, to our two stakeholder advisory committees. Research 101 was developed for lay audiences to a) address myths and fears about research, b) increase scientific literacy about how clinical research is conducted, and c) offer information about study participants’ rights. Advisory committees will meet remotely throughout the course of the study using a HIPAA-compliant communications platform (e.g., Zoom). The following provides a brief description of each advisory committee.

Parent Leadership Council (PLC) The PLC will be established so parents and caregivers can provide input to the research team. We will work with the United Advocates for Children and Families Parent Leadership Institute to provide training and support to parent members, to help them further develop leadership and advocacy skills. An ongoing agenda item will be to review the content and format of a quarterly project newsletter for participants and their families. The PLC will include 8–10 parent representatives from local provider networks, parent support groups, parents from our performance sites, and other organizations. Consistent with emerging best practices for conducting focus groups via teleconference during the pandemic [ 151 , 152 ], this Council will meet bimonthly via teleconference.

Youth Leadership Council (YLC) The YLC will be established, and its members included as equal partners in study design and implementation, based on feedback from CHLA’s Family Advisory Council and iCAN. We anticipate the YLC will comprise 8–10 adolescents (12–17 years old) with an anxiety disorder, with membership drawn from all partner sites. It will meet bimonthly to provide insight from the youth/patient perspective into “best practices” informed by their experiences and challenges with mental health services, and to provide essential guidance on study outcomes and potential barriers to enrollment, recruitment, and retention. We will integrate the YLC with iCAN and WeRiseLA. WeRiseLA will also play an important role in dissemination of our findings, as we will work with our YLC to develop art projects that convey their experiences. We will explore other Community-Based Participatory Research methods (e.g., Photovoice [ 153 , 154 ]) as vehicles for youth to express their experiences and perceptions as they inform our study. YLC members will also help with interpretation and dissemination of study findings.

Resolving Conflict Conflicting opinions are inevitable in any sustained collaboration of diverse voices and perspectives, but, when handled appropriately, conflict can have positive and desirable effects. We will employ community-based participatory research principles and tools to address conflict and differing opinions when they arise. We will have clear written rules for group interactions and decision-making. Advisory group members and the research team will be encouraged to recognize one another’s strengths and address any implicit academic or community stereotypes (e.g., “researchers only care about research”, “community members don’t have skills to bring to a research study”). Discussions will be facilitated by senior staff who have extensive experience conducting community-based participatory research with such advisory groups. We will identify an ombudsman from the SC CTSI’s standing Community Advisory Board to assist with conflict resolution, should intractable differences of opinion related to the study emerge.

Conduct of the trial

Outcome measures.

In active listening sessions conducted before study initiation, parents and youth with anxiety disorders discussed desired outcomes [ 155 , 156 ] and acceptable time requirements for assessments (< 75 min). Selection of our primary and secondary outcome measures was driven by these voiced preferences. Additional selection criteria included: documentation of the conceptual and measurement model; evidence for reliability and validity; interpretability of scores; and sensitivity to change [ 157 ]. Each measure will be obtained at baseline, week 12, and week 24 during the 24-week trial and quarterly during the ≥12-month follow-up. A subset of measures will also be obtained at weeks 6 and 18 (Fig. 4).


Participant Timeline for Assessments The overall study duration will be 36 months. Trial duration, subsuming Stages 1 and 2, each 12 weeks long, is 6 months (24 weeks). Recruitment will occur in months 1–24, and treatment will continue through month 30. Naturalistic follow-up will continue after the 24-week trial to assess the durability of study treatment effects and will range from 6 to 30 months, depending on when participants were recruited into the study. Months 36–43 will be devoted to data analysis and drafting of the manuscript, with submission for publication by month 50

Primary Outcome Measure will be the patient ratings of anxiety symptom severity on the Youth SCARED-41 (Screen for Child Anxiety Related Disorders) [ 86 , 136 ].

Secondary Outcome Measures will be the parent ratings of anxiety symptom severity on the Youth SCARED-41, and parent and youth ratings on the CAIS (Child Anxiety Impact Scale).

Outcome Measures for Exploratory Analyses are listed in Table 2. Many of the measures listed in the Table were selected based on our extensive listening sessions with parents and youth. Parents voiced wanting their child to be able to sleep through the night, manage change without anger or panic attacks, interact positively with peers and teachers, and attend school more consistently and without emotional outbursts. Desired outcomes for teenagers included feeling comfortable around peers, feeling confident in the classroom, feeling less worried about things outside their control, and being better able to manage emotions.

Baseline diagnostic assessments

Baseline assessments will generally be performed remotely via HIPAA-compliant Zoom, but they will be performed in person when parents and youth prefer.

Training of assessors

We will train staff on administration of the clinician-administered KSADS-COMP-Parent and Child [ 158 ] following instructions in the administration manuals. The KSADS-COMP is a computer-based version of the KSADS that is available in both English and Spanish. All remaining assessments are either self- or parent-report measures or highly structured interviews that require minimal training to administer.

Parent-completed background materials

The Patient/Subject Profile is a systematic review of participants’ medical, psychiatric, and treatment history, as well as family history of psychiatric illnesses. SES will be quantified using the Hollingshead Index of Social Status [ 159 ], augmented with more contemporary measures of material hardship and perceived social status [ 160 – 162 ].

Randomization

In each randomization within trial Stages 1 & 2 (Fig. 5), eligible and consenting/assenting participants will be randomized in a 1:1 allocation to the 2 treatment regimens. Randomization will be stratified by study site, age group (within the study’s 8–17-year age range), and baseline symptom severity (dichotomizing SCARED-41 scores as ≤33 or > 33, based on the median SCARED-41 score for participants having scores ≥25 in the CAMS [ 49 ] and LAMS [ 163 ] studies). Randomization will be further blocked, with a relatively small block size to ensure balanced randomization over the short term; block size will not be revealed to investigators or trial staff. The study statistician will develop the stratified, blocked randomization sequences and monitor fidelity to them. At the conclusion of week 12 of the Stage 1 intervention (medication vs CBT), anxiety symptoms and functional impairment will be reassessed. Participants who meet criteria for remission will continue maintenance-level therapy with the single-modality treatment received in Stage 1; non-remitting participants will complete Stage 2 randomization. The REDCap (Research Electronic Data Capture) randomization module will be used to randomize patients to study treatments [ 164 , 165 ]. The randomization sequence will not be viewable by investigators or study staff. Randomization capability will be limited to the lead research coordinator and study statistician; randomization will only be available following confirmation that informed consent has been completed and all trial inclusion and exclusion criteria are met. Treatment assignment will be communicated from the lead research coordinator to the study coordinator who is assigned to screen, enroll, and administer assessments to that participant.
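For illustration, the sketch below shows one way a stratified, permuted-block 1:1 allocation of this kind could be generated. It is not the study’s REDCap randomization module; the site labels, number of blocks, and block size are placeholder assumptions.

```python
# Illustrative sketch only; the trial will use REDCap's randomization module.
# Site labels, block size, and number of blocks are placeholder assumptions.
import random

def stratum_sequence(n_blocks: int, block_size: int, seed: int) -> list:
    """Permuted-block 1:1 allocation sequence (medication vs CBT) for one stratum."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["medication"] * (block_size // 2) + ["CBT"] * (block_size // 2)
        rng.shuffle(block)                      # permute assignments within the block
        sequence.extend(block)
    return sequence

# Strata: site x age group x dichotomized baseline severity (SCARED-41 <=33 vs >33).
strata = [(site, age, sev)
          for site in ("site_A", "site_B", "site_C")      # hypothetical site labels
          for age in ("younger", "older")
          for sev in ("SCARED<=33", "SCARED>33")]
sequences = {s: stratum_sequence(n_blocks=25, block_size=4, seed=i)
             for i, s in enumerate(strata)}

# The next eligible, consented participant in a stratum receives the next unused
# assignment in that stratum's concealed sequence (block size is never revealed).
print(sequences[("site_A", "younger", "SCARED>33")][:8])
```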


Schematic for Stage 2 Randomization At the end of Stage 1, participants who meet criteria for remission will continue in maintenance therapy with the same intervention as assigned in Stage 1. Those who do not meet criteria for remission will be randomized to either (1) optimization of the Stage 1 intervention they are already receiving, or (2) optimization of the Stage 1 intervention they are already receiving along with the addition of the other intervention (yielding combined medication + CBT)

Maintaining treatment assignment

Clinicians and participants will be coached at study entry that adherence to the study treatment assignment is essential, as even a relatively small proportion of crossovers can be detrimental to the trial. Medication adherence will be assessed with pill counts at each clinic visit.

Blinding & Minimizing Rater Bias

Because PCORI guidelines preclude paying for any component of clinical care, insurance will need to pay for study treatments, which in turn will preclude blinding patients and clinicians to treatment assignment. Nevertheless, all study assessments have been selected as parent- and youth-reports that require minimal to no interactions with research staff, thereby minimizing or eliminating rater bias from study staff. It is in this sense that we designate this study “single blind”.

Rationale for selection of fluoxetine

We selected the SSRI fluoxetine rather than an SNRI because SSRI therapeutic response is significantly greater and faster [ 21 , 22 , 129 ]. SSRI treatment effects begin to emerge within 2 weeks, sooner with higher doses [ 21 ], and approximately 50% of overall treatment-related improvement at week 12 occurs by week 4 [ 21 ]. Fluoxetine’s track record and safety are well established, and it is on nearly all formularies. When considering which SSRI to use in this study, we considered fluoxetine, sertraline, and escitalopram because they all have FDA-approved indications in pediatric patients. We also considered offering a choice of medication in the medication treatment arms. Our concern with this approach, however, even from a very limited set within a class, was that it could introduce a source of variance in response or treatment adherence that could be difficult to disentangle from the effects of treatment assignment within the SMART design. Therefore, we decided to constrain medication use to a single agent.

Because of its risk of increased suicidality, which may be higher when compared to other SSRIs [ 166 ], and because it does not have any FDA-approved pediatric indications, we elected not to include paroxetine in this protocol.

Escitalopram

We did not select escitalopram because its efficacy as an antidepressant has not been demonstrated in patients under the age of 12 years [ 167 , 168 ], and the likelihood of a similarly suboptimal response in childhood anxiety seemed high. Moreover, extant data concerning escitalopram’s pharmacokinetics suggest that optimizing its efficacy may require twice-daily dosing [ 169 ]. Most importantly, we are aware of no prior studies of the use of escitalopram in the treatment of pediatric anxiety disorders.

Sertraline was not selected as the SSRI that we would use in this trial, for several reasons. First, it does not have proven antidepressant effects in pediatric patients and is not FDA-approved in this patient population for this indication, suggesting that its efficacy may be limited in treating the most common comorbid psychiatric illness in pediatric anxiety. In addition, like escitalopram, sertraline may require twice daily dosing for optimal use at lower doses [ 170 ], which can adversely affect treatment adherence and complicate prescribing by pediatricians who are new to prescribing psychotropics.

We selected fluoxetine as the SSRI to be studied in our SMART design for several reasons. First, it has more data than any other agent to support its safety and efficacy as a treatment for pediatric affective illnesses (especially depression). In addition, the long half-lives of fluoxetine and its active metabolite (norfluoxetine) allow it to be given as a once daily dose [ 171 ]. Once daily dosing, compared with twice daily dosing, has been shown to improve medication adherence in the treatment of chronic psychiatric illness [ 172 ]. Moreover, fluoxetine uniquely has evidence to suggest that increased dosages may benefit those who do not respond to lower doses [ 173 ], and therefore we are allowing flexible dosing for fluoxetine in this protocol. Finally, extensive discussions with leaders of our pediatric primary care network have suggested that the once-daily dosing, the wide range of doses over which fluoxetine administration is deemed safe in pediatric patients, and the simple upward titration in 10 mg increments will facilitate the training and comfort of pediatricians in prescribing medication in this study.

Administration of Fluoxetine

Training in fluoxetine administration.

Prospective prescribers for the study will include child psychiatrists, developmental behavioral pediatricians, general pediatricians, and psychiatric nurse practitioners working in one of our study sites. They will undergo a 3-hour training for the study with two senior psychopharmacologists before being assigned any patients for treatment. Training will include an overview of study design, measures, inclusion and exclusion criteria, rationale for selection of fluoxetine as the study medication, and review of the pharmacology and drug interactions of fluoxetine, known side effects, assessment of side effects, reporting of adverse events, study dosing guidelines, remission criteria, and on-line training in the Columbia Suicide Severity Rating Scale.

Stage 1 Fluoxetine Dosing is flexible, to maximize therapeutic effects while minimizing side effects, and is based on the fluoxetine literature [ 174 – 176 ] and FDA regulatory approvals [ 177 ]. The study’s starting dose, and the minimum permitted, is 10 mg/day; should that not be tolerated, the patient will be withdrawn from active treatment (but not from study follow-up). After 1 week at 10 mg/day, the dose will increase to 20 mg/day. After completion of week 4, 10 mg/day dose increases are permitted every other week as tolerated, up to a maximum of 80 mg/day. If the patient is not in remission and does not have dose-limiting side effects, written guidelines will encourage the prescribing physician to increase the dose of medication. If patients are on doses > 20 mg/day, the total daily dose can be prescribed either once daily or split into twice-daily administrations. If dose-limiting side effects occur, dosages can be reduced by 10–20 mg/day. Patients can be prescribed a dose that previously was not adequately tolerated if: 1) at least 2 weeks have elapsed since the dose was reduced, and 2) the patient/parent agrees to re-try the higher dose.
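As a check on the titration arithmetic, the following minimal sketch (not study software) steps through the fastest-permitted schedule above, assuming every allowed increase is taken and tolerated; it reproduces the statement below that 80 mg/day cannot be reached before week 15.

```python
# Minimal sketch of the fastest-permitted titration schedule, assuming every
# allowed dose increase is taken and tolerated.
MAX_DOSE = 80  # mg/day

def fastest_schedule(n_weeks: int = 24) -> dict:
    """Highest protocol-permitted fluoxetine dose (mg/day) at each study week."""
    schedule = {}
    dose = 10                                   # week 1: 10 mg/day starting dose
    for week in range(1, n_weeks + 1):
        if week == 2:
            dose = 20                           # increase to 20 mg/day after 1 week
        elif week >= 5 and (week - 5) % 2 == 0 and dose < MAX_DOSE:
            dose += 10                          # after week 4: +10 mg every other week
        schedule[week] = dose
    return schedule

sched = fastest_schedule()
print(min(week for week, dose in sched.items() if dose == MAX_DOSE))   # 15
```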

Stage 2 Fluoxetine Dosing Most of the studies included in the meta-analysis showing that most improvement occurs in the first 6–8 weeks of SSRI therapy used fixed medication dosages; this suggests that further response optimization may be achieved with upward dose titration in Stage 2, when the initial treatment in Stage 1 does not achieve either maximum medication dosing or clinical remission. Approximately 40% of youth with pediatric anxiety disorders fail to respond to either SSRI or CBT [ 21 – 23 ]; rates of failure to achieve remission are even higher (60–80%) [ 21 , 22 , 24 – 27 , 30 ], and rates of relapse are very high (approximately 60%) [ 16 , 28 , 29 ], particularly in those who have more residual symptoms and functional impairment following acute treatment [ 29 ]. Therefore, the goal of achieving clinical remission is imperative in improving outcomes, which requires assertive optimization of medication dosing.

Per the above protocol, the earliest that patients can reach the maximum dose of 80 mg/day is week 15 of the study. Moreover, in every pediatric psychopharmacology study published thus far that has used a flexible dosing schedule, the average medication dosages achieved have been considerably lower than the maximum dose allowed. Therefore, participants will not reach the maximum dose within the 12 weeks of Stage 1 of our SMART design. In addition, upward titrations will be slowed in some youth due to the emergence of side effects or because clinicians will be reluctant to increase the dose only 2 weeks after the last dose increase, instead wanting more time to observe for clinical improvement. The additional time in Stage 2 will allow higher overall doses to be achieved.

Therefore, optimization procedures in Stage 2 will be identical to those of Stage 1, though clinicians will be encouraged to make every possible effort to achieve the maximum dose of 80 mg unless or until remission is achieved. We will allow upward titration in the presence of an inadequate clinical response, at evidence-based intervals; similarly, we will allow downward titration should dose-limiting side effects occur. Using these strategies, fluoxetine treatment will be optimized based on the extant scientific literature in our pediatric population. Medication adherence will be monitored via pill counts at each clinical visit.

We will closely monitor medication dosing and patient tolerance during the first two cases assigned to each prescribing physician. Monitoring will be made possible through dosing and side effect data that a study coordinator will extract from the patient’s electronic medical record and enter into the study’s REDCap database. If the patient is tolerating the medication well and is still symptomatic, prescribers will be encouraged to increase medication dosage according to the written study guidelines. After completion of these first two cases, senior study psychopharmacologists will be available to prescribers for brief email or phone consultation if prescribers have questions about dosing or side effect management. This model of initial oversight in two cases and subsequent availability for brief consultation is intended to mirror the training and subsequent consultation that will be provided in the CBT treatment arms. It also adheres closely to real-world practice in psychopharmacology, particularly for general pediatricians.

Rationale for 12-Week Duration of Medication Study Stages and Evidence for the Benefit of Medication Optimization The vast majority of RCTs of medication therapy for anxiety and depression have been 12 weeks or less in duration [ 20 ]. For this reason, only a modest evidence base exists on treatment beyond 12 weeks. For example, the CAMS trial, which compared CBT with medication and the combination, was only 12 weeks long. Indeed, we have been able to identify only 2 RCTs using either an SSRI or SNRI for the treatment of pediatric anxiety that were longer than 12 weeks in duration (both were 16 weeks) [ 178 , 179 ]: in both the SSRI (paroxetine) [ 178 ] and SNRI (venlafaxine) [ 179 ] studies, mean therapeutic response did not improve from treatment week 12 to week 16.

Furthermore, a meta-analysis has shown that the effects of SSRIs begin to emerge within 2 weeks of initiating treatment, and sooner with higher doses [ 21 ]. Approximately 50% of the overall treatment-related improvement observed at week 12 occurs by week 4 [ 21 ], suggesting that the remaining 50% of improvement occurs over the last 8 weeks of treatment, approaching asymptote by week 12. Most of the studies included in the meta-analysis used fixed dosages, however, which suggests that further optimization of response can be achieved with upward dose titration in Stage 2 when the initial treatment in Stage 1 does not achieve either maximum medication dosing or clinical remission; remission is what we are hoping to achieve with the optimization of medication therapy in Stage 2 of our SMART design. The goal of achieving clinical remission is imperative in improving outcomes, which requires assertive optimization of medication dosing.

CBT stage 1 implementation

We will use the Coping Cat program as the behavioral intervention for this study. Coping Cat is an established evidence-based CBT treatment for pediatric anxiety [ 180 – 182 ] that has been studied rigorously for more than 25 years. The CBT strategies that form the core of Coping Cat have been subjected to years of extensive research through clinical trials [ 183 , 184 ], have been shown to be highly efficacious in addressing symptoms of a range of anxiety disorders, and are widely available at low cost. It is delivered in individual therapy sessions with anxious children.

Coping Cat comprises 4 core functional components (Table 1):

  • Building a Collaborative Working Alliance: recognizing and understanding the emotional and physical reactions to anxiety;
  • Undergoing exposure without Avoidance: clarifying thoughts and feelings in anxiety-provoking situations;
  • Developing Coping Efficacy: developing plans for effective coping (e.g., modify anxious self-talk into coping self-talk, or determine what coping actions might be effective);
  • Engaging in Reward: evaluating performance and giving self-reinforcement.

The Coping Cat workbook is used for children aged 8 to 13 years, and the parallel C.A.T. Project workbook is used for ages 14 to 17. For both age groups, the sequence of sessions follows the same structure: Introduction, including psychoeducation and development of an individualized anxiety hierarchy; Skills Building, including relaxation training and coping skills; and Experiential Practice, including exposure and practice of coping skills, moving from the least to most anxiety-provoking situations from an individualized anxiety hierarchy. Coping Cat will be delivered over 12 weekly therapy sessions in Stage 1 of our SMART study with minimal modification to the original Coping Cat protocol.

Coping Cat therapists will be those who normally provide therapy in the study sites; these include licensed mental health professionals (e.g., psychologists, social workers) and trainees (e.g., psychology interns, fellows). All will undergo Coping Cat training and participate in ongoing consultation and fidelity measurement. To promote optimal representativeness and external validity, there will be no requirements for prior training; Coping Cat training has demonstrated effectiveness with clinicians ranging from graduate students [ 180 , 181 ] to experienced psychotherapists [ 49 ], with strong treatment effects in each case.

Adaptations to Coping Cat

We have included very few planned adaptations to the form of Coping Cat administration in this trial. In Stage 1, clinicians will deliver 12 sessions of Coping Cat per the manualized protocol. These 12 sessions will be condensed from the original 16 Coping Cat sessions by consolidating the psychoeducation and skills building sessions, adaptations that have been designed in consultation with the developer of Coping Cat. An additional, minor planned adaptation is that the “final” session 12 activities will not be framed as termination, but rather as taking stock of progress, review of skills learned, and preparation for Stage 2. CBT optimization in Stage 2 will involve more extensive adaptations to the Coping Cat protocol and will reflect an intensification of the CBT maintenance treatment protocol from the CAMS trial. In the post-acute phase of CAMS, patients in the CBT arm were offered maintenance treatment “designed to reflect the manner in which the active CAMS treatments would most appropriately be delivered in clinical settings.” This maintenance treatment consisted of 6 additional CBT sessions to be delivered over 6 months, focusing on additional exposure practice and coaching in the application of CBT skills to emerging life stressors [ 42 ]. The content of these CAMS maintenance sessions directly reflects the content we have incorporated into our Stage 2 CBT optimization phase, namely, continued intensive exposure practice and skills review. Our planned adaptation is designed to intensify the maintenance protocol from CAMS by increasing the total number of sessions (from 6 to 12) and the density of delivery (weekly versus monthly) and thus to optimize the dose of exposure learning.

In addition to our two planned adaptations to the Coping Cat protocol, unplanned adaptations may occur across arms. Unplanned adaptations may include deviations to address crises that emerge (e.g., case management, suicide risk assessment) and necessary adaptations to content to meet developmental or cultural diversity needs, among others. Any deviations from the manualized structure and content of Coping Cat will be recorded via a session adherence form completed by the clinician and presented/discussed at study team meetings. The CBT leadership team will decide how to address and formally record adaptations for the purposes of study adherence measurement, data analyses, and process evaluation, and will also provide feedback to CBT clinicians as necessary to prevent unnecessary deviations from the treatment protocol. Note that even within efficacy trials for CBT, crisis management sessions are built into the design of the protocol, as they represent good clinical care, compliance with legal requirements (e.g., mandated reporting), and appropriate adherence to evidence-based treatment [ 42 ].

Risk of early termination of CBT treatment

Many interventions for internalizing disorders do not have immediate effects; indeed, families are routinely told that symptom improvement is unlikely during the first several weeks of SSRI treatment. Similarly, as part of the CBT Coping Cat intervention, patients and families are provided psychoeducation at the beginning of the protocol describing CBT as a skill-building intervention that works through repeated practice. As such, the CBT model also does not prime patients or families to expect immediate improvement. Nevertheless, in a relevant prior study [ 49 ], 139 children were randomized to 12 weeks of Coping Cat treatment, and none of those children withdrew from treatment during the entire 12 weeks (though 6 (4.3%) were lost to the study). This suggests that the risk of early withdrawal from CBT is low and that the number of early withdrawals in our trial will likely be small. Moreover, these prior findings mirror service utilization data from our treatment sites.

CBT stage 2 implementation

For those participants randomly assigned to continued/intensified CBT for Stage 2, weekly CBT will continue for an additional 12 weeks. The protocol for CBT optimization is a planned adaptation of the CBT maintenance treatment delivered in the landmark CAMS anxiety efficacy trial [ 49 ]. In the post-acute phase of CAMS, patients in the CBT arm were offered 6 additional CBT sessions to be delivered over 6 months, focusing on additional exposure practice and coaching to maintain application of CBT skills in the face of emerging life stressors [ 42 ]. The content of these CAMS maintenance sessions directly reflects the content we have included in our Stage 2 CBT optimization phase, namely, exposure and skills practice. However, our planned adaptation is designed to intensify the maintenance protocol from CAMS by increasing the total number of sessions (from 6 to 12) and the frequency of delivery (weekly versus monthly). Furthermore, the goal of the 12 sessions of optimization is not simply to maintain gains from the end of Stage 1, but rather to produce substantial additional clinical change in Stage 2. As such, exposure practice in Stage 2 will consist of intensification of practice exercises, moving up the patient’s hierarchy to tasks of increasing difficulty, and promoting patient mastery. It will also involve consolidation and review of previously learned CBT coping skills. This approach corresponds to what would normally occur in clinical practice if a patient did not fully respond to the initial acute phase of Coping Cat .

No new CBT techniques will be introduced in Stage 2. This content directly parallels the content of CBT maintenance treatment from CAMS, but the Stage 2 optimization sessions in this trial are delivered at a higher dose and density than the maintenance sessions prescribed in CAMS. This planned adaptation is designed to optimize response in our sample of diverse, underserved, and clinically complex youths through the mechanism of increasing the dose of exposure learning.

Rationale and evidence for the benefit of CBT optimization in stage 2

The rationale for our optimization protocol comes from three sources. First, the content and general structure of the sessions are drawn from the CBT maintenance protocol of the gold-standard CAMS anxiety trial [ 42 ]. In CAMS, this was “designed to reflect how the … treatment would be delivered in clinical settings” over an extended care time frame and focused on exposure practice and coping skills review, without the addition of other new material [ 49 ]. Thus, additional enactive practice was emphasized as the major clinically relevant task for optimizing CBT in our protocol. Second, analyses of process data from the CAMS trial suggest that quality of exposures may be a central element in therapeutic change for anxious youths in CBT. Time spent in session on exposures, focus on mastery of difficult exposures, and child adherence with tasks and mastery of skills all predicted better clinical outcomes in the CAMS trial [ 119 ]. Our CBT optimization protocol thus focuses on increasing the dose and intensity of exposure learning for youths. Third and finally, our focus on practice and intensity was informed by previous findings suggesting that disadvantaged youth may have lower engagement in key tasks of CBT and may have less mastery of material, leading to poorer outcomes. In CAMS, Black/African-American youths demonstrated lower attendance at CBT and medication management sessions, and they were rated by therapists as exhibiting less involvement and compliance with treatment. Perhaps as a consequence, they also showed a lower level of mastery of CBT concepts [ 41 ]. Statistical control for these process factors and SES eliminated racial differences in outcome. Similarly, patient non-adherence (poor attendance, low homework completion, poor compliance in session) was associated with the number of parents present in the home (with the best outcomes for two-parent families), although indices of non-adherence varied in their power to predict clinical outcomes [ 124 ]. Our SMART design is intended to fill major gaps in the evidence base on the sequencing and optimization of CBT and medication treatment and, critically, to do so in an underserved population. Given these process-outcome findings related to engagement, we sought to level the playing field for disadvantaged youths by having our optimization protocol focus on mastery of skills as the goal of Stage 2 CBT and by providing an extended set of sessions on this topic, in order to increase the likelihood that all youths will receive a high dose of care.

Addressing non-responders’ willingness to continue CBT in stage 2

Relevant data come from the analysis of CAMS outcomes at weeks 24 and 36 [ 42 ]. In Phase 1 of the CAMS anxiety trial, youths were randomized to CBT, SSRI, combination (COMB), or placebo, and clinical outcomes were assessed at 12 weeks. After this acute phase, in Phase 2, responders were offered 6 months of continuing monthly maintenance treatment in their originally assigned condition. Non-responders in the treatment arms were referred to community providers for general outpatient treatment, and non-responders to the placebo condition were offered their choice of active study interventions. During Phase 2, outcomes were assessed at weeks 24 and 36, mapping onto our post-intervention assessment for the second half of our SMART study. Retention in the CAMS sample over this follow-up period was excellent, with nearly 80% of youths completing study assessments. Of note, the CAMS authors did not report evidence of differential study attrition by assigned study condition, participation in maintenance treatment, or responder status. Thus, non-responder youths assigned to continued CBT were just as likely to remain in treatment (with high rates of retention) as youths assigned to a new, active intervention. These findings suggest good acceptability of clinical recommendations for the treatment path following an acute phase of intervention.

Process evaluation

Our planned process evaluation will use quantitative and qualitative methods to assess treatment fidelity and related attributes (e.g., dose), and to assess the hypothesized causal pathways leading from our interventions to patient outcomes (Fig. 1). We describe here details of our approach to fidelity assessment and planned mediation analyses. We will supplement these with qualitative process evaluation activities using data from interviews and surveys.

Treatment modality

Process evaluation of treatment modality in our causal pathways will focus on evaluating fidelity to the medication and CBT treatments. Medication treatment fidelity will be measured by sessions attended and pill counts at each session. CBT treatment fidelity will be measured by session attendance and by assessing the extent to which the 4 core functions of Coping Cat CBT are accomplished. Fidelity to the “forms” will be measured via specific Coping Cat fidelity measurement activities (below). We will also assess the youth’s acquisition of each core function of the program (i.e., alliance-building, skills building, exposure, reward) via objective measurement as well as qualitative assessment. Assessing both function and form, and then comparing the relation between the core functions and the forms used to achieve them, will enable the study to determine whether the core functions of the treatment are achievable via alternative pathways (i.e., different forms).

CBT training and quality assurance

CBT training and consultation will involve an initial 2-day training workshop for all study therapists. The first day of training will cover the theoretical foundation and foundational skills for CBT for anxiety disorders. Then training will move to the specific structure and content of the Coping Cat (including C.A.T.) program and guide clinicians through the manual in detail, using case examples, video, and trainer modeling. The second day of training will involve extensive education and training in exposure, including theoretical background, practical implementation, problem-solving barriers, and modeling/role play by trainers as well as role play opportunities and feedback to participants. The training will also briefly summarize the evidence-base for Coping Cat , as well as review in-depth our study design and quality control procedures.

Once clinicians have completed the initial training workshop, they will be assigned to a consultation group with an experienced CBT clinician. Our training plan reflects real world practice in that we do not require portfolio submission and fidelity-rated recordings in order to “graduate” as a Coping Cat therapist. Instead, clinicians will be required to receive weekly consultation on their first two cases, and record their sessions for retrospective fidelity measurement. CBT trainers will be available throughout the study to CBT therapists for questions they may have in implementing Coping Cat with subsequent patients. If clinicians are trained in both Coping Cat CBT and fluoxetine administration, their caseloads will be balanced with patients from each arm of the study to control for any possible clinician-level effects.

CBT fidelity measurement

All CBT therapists will be provided audio recording devices and asked to audio-record their sessions. We will randomly select 1 study case from each clinician (assigned after their initial 2 training cases); a trained fidelity rater will listen to all 12 sessions of this case and rate fidelity using a standardized fidelity checklist developed for Coping Cat [ 180 , 181 ]. These fidelity rating checklists allow raters to record the delivery of treatment components during each session and permit calculation of percent fidelity to the treatment model for each CBT therapist and overall, across all therapists in the study. We will also include a measure of general therapy competence that has been used with Coping Cat and can help assess the quality of treatment delivery, including exposure practices [ 185 ].

With the Coping Cat fidelity checklist used to measure clinician adherence to Coping Cat, and required record-keeping for the number of sessions completed, it will be possible to analyze the association between Coping Cat adherence and dose, on the one hand, and improvement on patient-centered outcome measures of anxiety symptoms on the other. In addition, given the central role of exposure and tolerance of negative affect in the Coping Cat conceptual model, and following procedures used in the CAMS study [ 119 ], we will have therapists complete brief reports following each session in which they rate the child’s overall adherence with treatment procedures and mastery of the information/skill covered in the session, and provide 3 kinds of information related to exposures: (1) the number of exposures used in the session, (2) the difficulty level of the exposures (based on the child’s ‘subjective units of distress’ ratings), and (3) the level of skill/mastery shown by the child during the exposures. This will permit us to develop an index of the exposure tolerance and exposure mastery shown by the child and to examine its association with patient-centered outcomes (Table 2).

Long-term follow-up

Recruitment will occur over the first 2 years. The last patients recruited will complete the 24-week trial at 2.5 years after study initiation, leaving a minimum of 12 months of follow-up for those recruited last. We will continue following patients recruited earlier to provide longer-term follow-up data. Assessments will occur quarterly and will be identical in content to those obtained during the 24-week trial. During follow-up, patients and their clinicians will be able to select whatever treatment they wish, at whatever frequency and intensity deemed desirable; insufficient responders or those who relapse during the follow-up period may switch to another medication or psychosocial therapy at the treating clinician’s discretion. We will encourage clinicians and patients to continue with successful treatments begun during the trial, however. Booster CBT sessions will be provided as needed, as occurs in regular (good) clinical practice. These sessions will not be scheduled at regular intervals but will instead occur in response to patient need and according to the provider’s clinical judgment. All booster sessions will involve review of the youth's FEAR plan developed during the initial phase of treatment and will use this plan as a base to address residual or recurring issues, avoidance, and functional impairment. The number of booster sessions, other treatments delivered, and any changes (including discontinuations) will be tracked and recorded carefully for consideration in analyses.

Adverse events (AEs)

AEs will be elicited by a combination of structured questions and direct, open-ended inquiry of patients and parents in each treatment arm using the Pediatric Side Effect Questionnaire (a modified version of the Antidepressant Side Effect Questionnaire) at weeks 6, 12, 18, and 24 of the study. Clinicians will be requested to complete this form at each clinical visit; the form will be placed in the clinic chart, from which a coordinator will extract the information and enter it into the study’s REDCap database. We will use the FDA definition of a serious or unexpected AE [ 186 ]; for each reported AE, we will document whether it meets regulatory criteria for a serious or unexpected AE, and the presumed relationship to study procedures. Federal guidelines and recommendations for reporting serious or unexpected AEs to site IRBs and the study’s DSMB will be followed [ 186 ]. Height, weight, blood pressure, and pulse will be assessed at each in-person study visit, and we will request that parents measure all except blood pressure for each telehealth visit [ 177 ]. Because suicidality may develop during treatment, participants will be assessed at each study visit with the Columbia-Suicide Severity Rating Scale (C-SSRS) [ 104 ]. Patients will be withdrawn from the study if continued participation is deemed unjustified due to risk of self-harm, and each study site will take appropriate measures to ensure safety.

In-person and telehealth treatment

Reflecting the values of real-world implementation at the heart of PCORI’s mission, we will allow care in our study to be delivered through either in-person visits or telehealth, as routinely conducted at each of our performance sites. This decision is motivated in part by the conversion of all mental health care across our sites to telehealth to comply with safety considerations during the COVID-19 pandemic. In addition, several of our performance sites, even before the pandemic, have long histories of conducting pharmacotherapy entirely by telehealth, given the limited child psychiatry support available to cover wide geographic service regions. Moreover, the provision of psychotherapy by telehealth was accelerating at most of our sites before the pandemic. At the time of study initiation, all of our sites are providing all non-emergency mental health care via telehealth. Following resolution of the pandemic, we anticipate that delivery of care in our study will include both telehealth and face-to-face care. We will document for each treatment session whether it was conducted in person or via telehealth, whether the patient’s video platform was a computer or a smartphone, and whether any technical difficulties in implementation were encountered.

For patients receiving treatment via telehealth, we will assess the acceptability of telehealth treatment delivery using structured questionnaires administered to patients, parents, and clinicians after weeks 1 and 12 of Stage 1 of the study. These structured questions have been adapted from questions found in a detailed review of past studies of the acceptability and feasibility of telehealth treatment delivery to youth or adults [ 187 – 194 ], as well as from our initial meetings discussing telehealth experiences with patient, parent, and clinician stakeholders. We will also conduct a process evaluation using exit interviews, based on open-ended questions, with each of these 3 individuals (patient, parent, clinician) after week 12. This combination of structured and open-ended inquiries will provide a rich and comprehensive understanding of patient, parent, and clinician experiences with telehealth in our study.

Methods to prevent and monitor missing data

We will use several strategies to prevent and reduce missing data, particularly for trial outcome measures. We will use REDCap [ 165 ] for data collection and entry. REDCap is a secure, HIPAA-compliant, widely used web-based research application that supports calculations and branching logic programming. It provides (1) an intuitive interface for validated data entry, (2) audit trails for tracking data manipulation and export, (3) automated export procedures for seamless data downloads to common statistical packages, and (4) tools for importing data from external sources. We will program our REDCap database to require responses to all survey and interview questions and real-time documentation of reasons for missing data. Reasons for truly missing data (e.g., if participants are uncomfortable completing specific items) will be recorded at the time of data collection and documented during data entry. This will also reduce missing data due to data entry errors. The trial data management protocol will include immediate post-data entry notifications, tracking of rectifications of missing data, and daily review of new data entries, with immediate notification to data entry personnel and the site coordinator of rectifications needed for missing data. We will attempt to complete trial outcomes at protocol-specified assessment times on all randomized persons, even those who have dropped from the study. Our project analyst will review data weekly to ensure completeness; a weekly report to investigators and study coordinators will identify outstanding missing data that need to be completed, as well as a time field indicating the number of days since the missing data query was first generated. The study protocol will provide instructions for use of these strategies and procedures.

Data management

We will employ a detailed data management plan to ensure the integrity and accessibility of study data. We will store all data on secure servers in an integrated REDCap database [ 164 , 165 ]. All clinical, behavioral, and demographic information for each participant will be entered into the database. We will use REDCap to create a web-based data entry interface, perform Stage 1 and Stage 2 randomizations, update participant information, manage baseline and follow-up visits, track participant status, and export data efficiently in various self-documenting forms directly into SPSS, SAS, R, or Excel using REDCap’s de-identification option, thereby ensuring that exported data are complete, self-explanatory, and de-identified. We will also design and implement a database system to acquire and integrate all paper and electronic files from participant assessments. We will implement procedures to maintain data integrity, reliability, security, and accessibility, ensuring that all data handling is consistent with HIPAA and other federal regulations and with applicable policies of our local IRBs. We will ensure that all participant data have been transmitted and incorporated into the central database, and fully documented, in a timely manner. We will also manage and document data requests, make data available to investigators through a Data Use Agreement, and maintain archives of all analysis datasets and datasets provided to other investigators.

Tracking system

The study's REDCap database will aid study coordinators and participating sites in tracking participant visits and data collection. The password-protected, structured web-based portal will include real-time reports to identify participants who are due for a follow-up visit, list the type of data to collect in that visit, and list any information that has been missing from past visits. All data modifications will be logged to maintain accountability for all entries and edits. Automatic monthly emails will be sent to investigators and coordinators alerting them to upcoming visits of their participants.

We will ensure that the database systems, data access policies, and data transmission protocols exceed HIPAA regulations and data security standards for IRBs. Only designated staff will have direct access to the database. The front-end web portal to the database will be accessible to all investigators and study coordinators. The REDCap server is housed behind multiple firewalls in a locked and guarded USC data center equipped with security cameras and intrusion detection systems that is staffed by security at all times. All electronic connections to the REDCap environment are encrypted. Only system administrators at the data center are authorized to access the back-end database server directly by logging into a virtual private network. We will institute policies that (a) temporarily deactivate user accounts that have not logged into the system within a specified time; (b) automatically require staff to change passwords at regular intervals; (c) match current users against current study staff, and terminate user accounts for staff who are no longer associated with the project; and (d) institute independent audits at regular intervals by our ISO to assess HIPAA compliance and security. All data files transmitted to investigators will be encrypted and password-protected at the highest level of data encryption, then transmitted via a secure File Transfer Protocol. The password used to encrypt a file will be transmitted separately from the file.

Sample size and power

SMART designs are commonly but incorrectly assumed to require prohibitively large sample sizes [ 54 ]. It is common to think, for example, that data from a study design such as ours will be analyzed in a 6-group ANCOVA that compares outcomes across all 6 subgroups, which would indeed likely require a large sample size. That analysis, however, does not correspond with our primary aims and hypotheses, which are tested as two main effects and their interaction, and which require a sample size considerably smaller than for a 6-group ANCOVA [ 54 ]. We have made it our priority to identify a sample size that will allow us to test Stage 2 effects with sufficient power. We obtained the effect size estimates (ES, in SD units) needed for our sample size calculations as follows:

(1) We used treatment ESs from the CAMS efficacy trial, which compared combined medication+CBT to monotherapies (medication alone, CBT alone) and placebo. CAMS reported quantitative anxiety outcomes using the Pediatric Anxiety Rating Scale (PARS). Outcomes did not differ significantly between CBT alone and medication alone (though group mean differences on the PARS slightly favored medication over CBT). However, CAMS did find large ESs for combined medication+CBT compared to monotherapy, with a somewhat larger difference compared to CBT alone than to medication alone.

(2) Using augmented CBT in CBT non-responders as the base, relative ESs reported at 24 weeks in CAMS were used as ES estimates for CBT + medication (i.e., medication added to CBT), medication alone, and medication+CBT (i.e., CBT added to medication). For Stage 1 remitters, we used an ES 0.2 SD above that of their non-remitter counterparts.

(3) Group main effect ESs were then computed for a Stage 1 remission rate of 40%. Our estimated Stage 1 remission rate of 40% is a conservative estimate based on the CAMS remission rate of 35% achieved for ethnic minorities, when applying remission criteria less stringent than ours [ 30 ]. In CAMS, the 12-week remission rate was also approximately 10% higher in medication-only compared to CBT-only. We used this 10% difference in these calculations to compute ESs and resulting sample size estimates. For Main Effect 1 (start with medication vs start with CBT), ESs for all 3 medication or CBT groups (1: remitter; 2: non-remitter➔optimize monotherapy; 3: non-remitter➔medication+CBT) were computed as a weighted average of the group-specific ESs. Weights were the remission rate, with the non-remitter weight (1 minus remission-rate) split equally among the 2 randomized non-remitter groups (reflecting 1:1 Stage 2 randomization among non-remitters). For Main Effect 2 (among Stage 1 non-remitters, optimize Stage 1 treatment vs add the other treatment), ESs were computed as a weighted average of the ESs for monotherapy optimizers (medication+, CBT+) and a weighted average of the combined (medication+CBT, CBT + medication) groups. These weights were again 0.5*(1- remission-rate).

(4) We then computed the sample size required to detect Main Effect 2 (optimize monotherapy vs add the other therapy in non-remitters) with 80% power at a 2-sided alpha of 0.05. From our computations above, the Main Effect 2 ES was approximately 0.40 SD, requiring a sample size of 194 participants among Stage 1 non-remitters. Increasing the sample size of 194 by the non-remission rate (N divided by the non-remission rate) and by the estimated 20% dropout rate yielded a sample size of 404 (202 per group) for a Stage 1 remission rate of 40%. Randomizing 404 participants, with an estimated 324 finishing the trial and a remission rate of 40%, will provide the ability to detect an overall Main Effect 1 (medication first versus CBT first) ES of ≥0.31 SD with 80% power.
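The arithmetic in steps (1)–(4) can be checked with a short script. The sketch below uses a standard two-sample normal-approximation sample size formula with the protocol's planning values (ES of approximately 0.40 SD, 40% Stage 1 remission, 20% dropout); small differences from the reported 194 and 404 reflect rounding and the exact ES used.

```python
# Rough check of the sample-size arithmetic; planning inputs come from the text.
from scipy.stats import norm

def n_per_group(es: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Per-group n for a two-sided, two-sample comparison of a standardized effect size."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z / es) ** 2

es_main2 = 0.40                                        # Main Effect 2 ES among non-remitters
n_nonremitters = 2 * n_per_group(es_main2)             # ~196 (protocol reports 194)

remission_rate, dropout_rate = 0.40, 0.20
n_randomized = n_nonremitters / (1 - remission_rate) / (1 - dropout_rate)
print(round(n_nonremitters), round(n_randomized))      # ~196 and ~409 (protocol: 194, 404)
```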

Power analyses for telehealth-based treatment delivery

The above effect size estimates used the CAMS 24-week trial results to estimate comparative effect sizes for the 6 groups (remitters; non-remitters who augment the Stage 1 intervention; non-remitters who add the other intervention). Two published sources provide data with which to estimate effect sizes for treatment delivery via telehealth. The first [ 195 ] is a meta-analysis of 26 RCTs comparing videoconferencing-based telehealth vs in-person delivery of psychiatric services, which included both pharmacotherapy and psychotherapy. The summary effect size (telehealth vs in-person) was − 0.11 (95% CI -0.41, 0.18), a non-significant difference with the direction of effect favoring telehealth delivery. The second [ 196 ] is an RCT comparing telehealth vs in-person CBT over 24 weeks (the same duration as our study) in 115 youth with anxiety disorders. For consistency with our sample size calculations (which used the primary outcome from the CAMS trial, the clinician-rated Pediatric Anxiety Rating Scale), we estimated the effect size for the Clinician Severity Rating outcome measure, which was 0.05 (95% CI -0.77, 0.95), a non-significant difference with the direction of effect favoring in-person CBT. We used these estimated in-person vs telehealth effect sizes to estimate adjustments to our original effect sizes, adding 0.11 to all of the medication group effects (medication remitter; medication non-remitter: augment; medication non-remitter: add CBT) and subtracting 0.05 from all of the CBT group effects (CBT remitter; CBT non-remitter: augment; CBT non-remitter: add medication). We concluded that there is no effect on the overall Main Effect 2 effect size used to estimate sample size, because the Main Effect 2 comparison is Augment (Medication+; CBT+) vs Add (Medication, add CBT; CBT, add Medication), and the adjusted remote effects appear in both the Augment and Add groups, so these effects cancel out. The same conclusion is obtained if we separately consider only a remote medication effect or only a remote CBT effect.
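The cancellation argument can be seen with a toy calculation: because the medication-path adjustment and the CBT-path adjustment each appear once on the Augment side and once on the Add side (with equal 1:1 weights), the Main Effect 2 contrast is unchanged. The group effect values below are arbitrary placeholders, not CAMS-derived estimates.

```python
# Toy check of the cancellation argument; the group effects are arbitrary placeholders.
med_adj, cbt_adj = 0.11, -0.05          # telehealth adjustments from the two sources above

augment = {"medication_plus": 0.50, "cbt_plus": 0.30}              # optimize monotherapy
add     = {"medication_add_cbt": 0.70, "cbt_add_medication": 0.65} # add the other treatment

mean = lambda d: sum(d.values()) / len(d)
before = mean(add) - mean(augment)

# Apply the medication adjustment to medication-path groups and the CBT adjustment
# to CBT-path groups, on BOTH sides of the contrast.
augment_adj = {"medication_plus": augment["medication_plus"] + med_adj,
               "cbt_plus": augment["cbt_plus"] + cbt_adj}
add_adj = {"medication_add_cbt": add["medication_add_cbt"] + med_adj,
           "cbt_add_medication": add["cbt_add_medication"] + cbt_adj}
after = mean(add_adj) - mean(augment_adj)

print(abs(before - after) < 1e-12)      # True: the adjustments cancel out
```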

Data analysis plans

Once the final trial participant has completed the study, we will conduct a final database review, finalize data queries, and then lock the trial database for final analysis. To evaluate baseline comparability, we will compare demographic and clinical characteristics, and trial outcome measures, at baseline between treatment groups. Continuous baseline measures will be summarized by mean (SD) or median (IQR), and categorical measures by frequency (percent). Standardized group differences will be computed and presented for all baseline variables.

We will conduct an intent-to-treat analysis, by which subjects will be analyzed according to randomized intervention, consistent with standard practice in clinical trials. Distributions of outcome variables will be graphically examined; normalizing transformations will be applied if needed. The 24-week continuous measures of primary and secondary trial outcomes will be compared between treatment groups using general linear models, with the 24-week measure as the dependent variable. SMART design groups will include Stage 1 randomized group and Stage 2 randomized group. The randomization stratification factors (age group, clinical site, and dichotomized baseline parent SCARED-41), as well as baseline measures of the trial outcome (Youth SCARED ratings), will be included as model covariates. Results will be summarized by treatment group means (SDs) and mean treatment group differences (and 95% confidence intervals). No interim analyses are planned.

To test Main Effect 1 , a 2-group (Stage 1 randomization) analysis of covariance will be used, comparing subjects randomized to medication first to those randomized to CBT first [ 53 , 54 , 197 ]. To test Main Effect 2 among Stage 1 non-remitters, a 2-group analysis of covariance will also be used, comparing (a) non-remitting subjects randomized at Stage 2 to optimization of their Stage 1 monotherapy, vs (b) non-remitting subjects randomized at Stage 2 to combination treatment (medication+CBT or CBT + medication) across levels of Stage 1 randomization. Finally, to test whether one sequence of treatment modalities (CBT➔CBT; CBT➔med; med➔med; med➔CBT) is significantly better or worse than predicted from the two main effects, an interaction term of Stage 1 randomization with Stage 2 randomization will be added to the model. The addition of the interaction term will allow estimation and testing of each of the 4 treatment sequences [ 53 , 54 , 197 ].

To reflect the SMART design, two additional analytic issues must be addressed. First, the comparison of 4 sequences requires replication of the Stage 1 remitters, to reflect these subjects’ contributions to both the Stage 2 augmentation and combination treatment strategies (i.e., each Stage 1 remitter contributes 2 observations to this analysis). Second, to reflect the fact that Stage 1 remitters are only randomized once, whereas Stage 1 non-remitters are randomized twice, regression weights will be used, such that each subject is weighted by the inverse probability of ending up in the sequence to which they were randomized (weight of 2 for remitters, 4 for non-remitters). As the Stage 1 remitters contribute two observations to these analyses (and thus correlated outcomes) in comparing treatment sequences, a sandwich-based variance estimator will be used to obtain robust standard errors for effect estimates [ 53 , 54 ].
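
For illustration, the replication, weighting, and sandwich-variance steps might be implemented as in the following sketch (synthetic placeholder data and hypothetical variable names; not the prespecified analysis code).

```python
# Minimal sketch of the weighted-and-replicated comparison of the four embedded
# treatment sequences, with cluster-robust (sandwich) standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "subject_id": np.arange(n),
    "stage1": rng.choice(["med", "cbt"], n),
    "remit12": rng.integers(0, 2, n),
    "scared_base": rng.normal(40, 10, n),
})
# Non-remitters have an observed Stage 2 assignment; remitters do not
df["stage2"] = np.where(df["remit12"] == 0, rng.choice(["augment", "add"], n), "none")
df["scared24"] = 0.5 * df["scared_base"] + rng.normal(0, 8, n)   # synthetic outcome

# Replicate Stage 1 remitters so each contributes to both embedded Stage 2 strategies
remit = df[df["remit12"] == 1]
rep = pd.concat([remit.assign(stage2="augment"), remit.assign(stage2="add")],
                ignore_index=True)
long = pd.concat([rep, df[df["remit12"] == 0]], ignore_index=True)

# Inverse probability of the observed sequence: weight 2 for remitters, 4 for non-remitters
long["w"] = long["remit12"].map({1: 2, 0: 4})

# Stage1 x Stage2 interaction estimates the four sequences; sandwich SEs clustered on subject
fit = smf.wls("scared24 ~ C(stage1) * C(stage2) + scared_base",
              data=long, weights=long["w"]
              ).fit(cov_type="cluster", cov_kwds={"groups": long["subject_id"]})
print(fit.summary())
```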

Durability of the 24-week intervention

This will be assessed using the repeatedly measured outcomes collected quarterly for 12 months following the end of the trial intervention, following the theoretical and simulation approaches and results of Lu et al. (2016) [ 198 ] and Li (2017) [ 199 ]. These data will be modeled using marginal means models with generalized estimating equations, with weighting and replication of observations based on Stage 1 responder status, as recommended for SMART designs. Using this approach, the post-intervention linear slopes as well as the end-of-study absolute values of the trial outcomes will be compared among the adaptive treatment strategies (by assessing group-by-time interaction terms); we will also consider possible non-linearities in post-intervention trajectories by adding polynomial terms for time of assessment. Adverse events will be categorized using the MedDRA coding system and compared between treatment groups using exact methods for comparisons of proportions.
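
For illustration, a minimal marginal-model sketch follows (synthetic placeholder data; hypothetical variable names such as strategy and months_post; not the prespecified analysis code).

```python
# Minimal sketch of the durability model: a weighted GEE with exchangeable working
# correlation, where strategy-by-time terms compare post-intervention slopes.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
subj = pd.DataFrame({
    "subject_id": np.arange(n),
    "strategy": rng.choice(["med-augment", "med-add", "cbt-augment", "cbt-add"], n),
    "w": rng.choice([2, 4], n),                      # SMART replication/IPW weights
})
# Quarterly post-intervention assessments at 3, 6, 9, and 12 months
long_fu = subj.loc[subj.index.repeat(4)].reset_index(drop=True)
long_fu["months_post"] = np.tile([3, 6, 9, 12], n)
long_fu["scared"] = 30 - 0.3 * long_fu["months_post"] + rng.normal(0, 6, len(long_fu))

gee = smf.gee(
    "scared ~ C(strategy) * months_post",            # group-by-time interaction terms
    groups="subject_id",                             # repeated measures clustered within subject
    data=long_fu,
    cov_struct=sm.cov_struct.Exchangeable(),
    weights=np.asarray(long_fu["w"]),                # weights carried over from the primary analysis
).fit()
print(gee.summary())
# Polynomial time terms (e.g., I(months_post**2)) can be added for non-linear trajectories.
```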

Multiplicity

The primary outcome (Youth SCARED) will be tested at α = 0.05. Secondary, exploratory, and all subgroup analyses will control the false discovery rate [ 200 ].
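
For example, Benjamini-Hochberg false discovery rate control could be applied to a set of hypothetical p-values as in the sketch below.

```python
# Minimal sketch of false-discovery-rate control (Benjamini-Hochberg).
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.012, 0.030, 0.041, 0.20]   # hypothetical secondary/subgroup p-values
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for p, q, r in zip(pvals, p_adj, reject):
    print(f"p = {p:.3f} -> FDR-adjusted q = {q:.3f}, reject H0: {r}")
```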

Sensitivity analyses

These will assess the impact of analytic and modelling assumptions related to the selection of covariates, the handling of missing data, and adherence to the randomized intervention. They will compare parameter estimates and statistical conclusions regarding treatment group differences in outcomes to study the differential impact of these assumptions, and will include: (1) adjustment for additional baseline covariates that differ between groups, as evidenced by a standardized difference ≥ 0.1; (2) inclusion of trial dropouts in the 24-week analysis, using multiple imputation to generate 20 complete trial datasets and summarizing treatment effects over the imputed datasets; and (3) an adherence-based analysis, limiting analyses to subjects who participated in ≥80% of planned intervention contacts and took ≥80% of medication doses based on pill counts.

Effects of treatment modality

We will evaluate the association of percentage telehealth sessions with session attendance and completion of the trial. We will also assess the influence of telehealth delivery variables (whether sessions were conducted in person or via telehealth, whether the patient’s video platform was a computer or a smartphone, and whether any technical difficulties in implementation were encountered) on treatment outcomes by including them as covariates in our sensitivity analyses.

Statistical methods to address missing data

Trial dropouts will be reported by follow-up visit and summarized by reason for dropout. Missing data for trial outcomes will be analyzed to provide insight into the missing-data mechanism. We will inspect patterns of missing data within each randomized group. Baseline characteristics will be compared between participants with and without complete outcome data. Participant characteristics found to be related to missingness will be correlated with values of the outcome variable (baseline values, and values among participants with complete data); we will also evaluate the relationship of baseline values of the outcome variable to missingness at each follow-up. Following the recommendation of Shortreed et al. (2014) [ 201 ], we will employ conditional imputation models for missing data that take advantage of the time- and stage-ordered nature of the trial design. At each time point, missing data (outcomes and covariates) will be imputed using baseline measures, outcome measures, randomized treatments, and remission indicators measured prior to the time of the missing data collection. For participants who are lost to follow-up prior to the Stage 2 randomization, we will perform a single imputation, assigning a status of non-remitter (i.e., non-responder) at 12 weeks. Multiply imputed datasets will then include an imputation for the missing Stage 2 randomization and imputations of subsequent outcomes. We will document and appropriately follow participants depending on their type of dropout (e.g., dropout from the study only while maintaining physician/site services, vs dropout from both study and services). Dropouts and reasons will be summarized in the final trial CONSORT diagram. Sensitivity analyses will compare results (treatment means, treatment group differences, and statistical conclusions regarding group differences) from complete-case models with results from models incorporating missing data through multiple imputation.
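
As a sketch of the time-ordered imputation scheme (synthetic data; hypothetical variable names; binary indicators shown here would be imputed with appropriate categorical models or rounded in practice):

```python
# Minimal sketch of time- and stage-ordered conditional imputation: each block of
# variables is imputed using only variables measured at or before that time point.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "scared_base": rng.normal(40, 10, n),
    "age": rng.integers(12, 18, n),
    "stage1_med": rng.integers(0, 2, n),                    # 1 = medication first (numeric coding)
    "scared_12wk": rng.normal(32, 10, n),
    "remit12": rng.integers(0, 2, n).astype(float),
    "stage2_add": rng.integers(0, 2, n).astype(float),      # 1 = add the other modality
    "scared_24wk": rng.normal(28, 10, n),
})
# Inject some synthetic missingness at weeks 12 and 24
for col in ["scared_12wk", "remit12", "stage2_add", "scared_24wk"]:
    df.loc[rng.choice(n, 30, replace=False), col] = np.nan

# Participants lost before the Stage 2 randomization: single imputation as non-remitters
lost = df["scared_12wk"].isna() & df["remit12"].isna()
df.loc[lost, "remit12"] = 0.0

# One imputed dataset; repeat with different random_state / sample_posterior=True
# to obtain multiply imputed datasets.
blocks = [
    ["scared_base", "age", "stage1_med"],        # baseline + Stage 1 assignment
    ["scared_12wk", "remit12", "stage2_add"],    # week 12 + Stage 2 assignment
    ["scared_24wk"],                             # week 24 primary outcome
]
cols = []
for block in blocks:
    cols += block
    df[cols] = IterativeImputer(random_state=0).fit_transform(df[cols])
```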

Heterogeneity of treatment effects (HTE)

Our study data will include a rich body of information on tailoring variables, including baseline individual, family, and contextual characteristics, some related directly to child anxiety and its clinical presentation (past treatment response, family history of anxiety, SES), and variables that are potentially modifiable, such as overall symptom severity [ 29 , 31 ], functional impairment [ 29 , 31 ], severity of depression and other comorbid illnesses, treatment fidelity and adherence, medication dose, treatment setting (community or university; primary pediatric or specialty mental health clinic), and parental depression [ 111 ] or anxiety [ 112 , 202 ]. These tailoring variables, used in post-hoc analyses, will help shed light on how patterns of response through the various pathways of this SMART study may relate to individual, family, or contextual characteristics at the start of treatment. One important product of this trial will be the development of a prospective, adaptive intervention algorithm: a set of tailored clinical pathways based on (a) participant characteristics at baseline and (b) response to the acute intervention. This adaptive treatment algorithm will have immediate clinical applicability in populations of diverse and vulnerable youth.

HTE analysis goals: pre-specified hypotheses and supporting evidence base

Based on findings from prior studies, we hypothesize that predictors of poor acute treatment outcomes will include lower SES [ 31 , 32 ], ethnic minority status [ 30 , 41 ], comorbid depression [ 203 ], a diagnosis of social anxiety disorder [ 28 , 31 , 32 , 204 ], and treatment in a pediatric rather than a specialty mental health clinic [ 31 ]. More severe anxiety will predict better response to Stage 2 combined CBT + fluoxetine treatment than either treatment modality alone [ 49 , 50 ].

HTE analysis plan

Dividing the total sample into pre-defined subgroups, we will use the analytic methods detailed above to estimate treatment effects within each subgroup. Forest plots of mean treatment group differences with confidence intervals for each subgroup will be produced to graphically evaluate the uniformity of treatment effects. In the total sample, we will first add the subgroup as a main-effect covariate. Evaluation of the subgroup main effect will test, over the entire sample (combined intervention groups), whether and to what extent outcomes in general differ by SES, ethnicity, comorbid depression, social anxiety disorder, and treatment setting.
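
To illustrate the planned display only (hypothetical subgroup estimates and confidence intervals, not study results), a minimal plotting sketch follows.

```python
# Minimal sketch of a forest plot of subgroup-specific treatment differences.
import matplotlib.pyplot as plt

subgroups = ["Low SES", "Higher SES", "Comorbid depression", "No comorbid depression"]
diffs = [2.1, 1.2, 2.8, 1.0]        # hypothetical mean group differences (SCARED points)
lo = [0.2, -0.4, 0.6, -0.7]         # hypothetical lower 95% CI bounds
hi = [4.0, 2.8, 5.0, 2.7]           # hypothetical upper 95% CI bounds

y = range(len(subgroups))
xerr = [[d - l for d, l in zip(diffs, lo)], [h - d for d, h in zip(diffs, hi)]]
plt.errorbar(diffs, list(y), xerr=xerr, fmt="o", capsize=3)
plt.axvline(0, linestyle="--", color="grey")    # null value: no treatment difference
plt.yticks(list(y), subgroups)
plt.xlabel("Mean treatment group difference (95% CI)")
plt.tight_layout()
plt.show()
```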

We will employ Q-learning as our primary approach to testing the heterogeneity of treatment effects. This is a regression approach recommended for SMART data to identify tailoring variables that modify treatment responses and suggest enhancements to the sequential decision-making of an adaptive intervention [ 197 ]. In this 2-stage SMART design with a continuous trial outcome, Q-learning with linear regression will be used in 2 steps. In step 1, the Stage 2 decision rule will be optimized among Stage 1 non-remitters, identifying individual variables that significantly modify the Stage 2 randomization effect. In step 2, the Stage 1 decision rule will be optimized, controlling for the optimized Stage 2 intervention and identifying individual variables that significantly modify the Stage 1 randomization effect. This approach therefore may suggest a more tailored adaptive intervention that could be evaluated in a future SMART study.
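
For illustration, the two Q-learning steps might look like the following sketch (synthetic data; baseline severity stands in for the candidate tailoring variables; not the prespecified analysis code).

```python
# Minimal two-step Q-learning sketch for a 2-stage SMART with a continuous outcome
# (lower SCARED = better), using linear regression Q-functions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 300
df = pd.DataFrame({
    "stage1": rng.choice(["med", "cbt"], n),
    "remit12": rng.integers(0, 2, n),
    "scared_base": rng.normal(40, 10, n),
})
df["stage2"] = np.where(df["remit12"] == 0, rng.choice(["augment", "add"], n), "continue")
df["scared24"] = 0.5 * df["scared_base"] + rng.normal(0, 8, n)

nonrem = df[df["remit12"] == 0].copy()

# Step 1: Stage 2 Q-function among non-remitters; the candidate tailoring
# variable (baseline severity) interacts with the Stage 2 assignment.
q2 = smf.ols("scared24 ~ C(stage2) * scared_base + C(stage1)", data=nonrem).fit()

def optimal_stage2(row):
    """Pick the Stage 2 action with the lower predicted 24-week SCARED."""
    grid = pd.DataFrame({"stage2": ["augment", "add"],
                         "stage1": row["stage1"],
                         "scared_base": row["scared_base"]})
    preds = np.asarray(q2.predict(grid))
    k = int(np.argmin(preds))
    return pd.Series({"opt_stage2": grid.loc[k, "stage2"], "pseudo_y": float(preds[k])})

nonrem[["opt_stage2", "pseudo_y"]] = nonrem.apply(optimal_stage2, axis=1)

# Step 2: Stage 1 Q-function, replacing non-remitters' observed outcomes with
# the predicted value under their optimal Stage 2 decision (pseudo-outcome).
stage1_df = df.copy()
stage1_df["y1"] = stage1_df["scared24"]
stage1_df.loc[nonrem.index, "y1"] = nonrem["pseudo_y"]
q1 = smf.ols("y1 ~ C(stage1) * scared_base", data=stage1_df).fit()
print(q1.summary())   # significant interactions flag candidate Stage 1 tailoring variables
```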

Because certain clinical subgroups remain of primary interest to clinicians, we will also assess, in a secondary analysis, randomized treatment-by-subgroup (e.g., treatment-by-SES group) interaction terms to estimate and test for differences in treatment effects (Main Effects 1 and 2) by subgroup. Estimating treatment effects across our pre-defined subgroups, we will use forest plots of mean treatment group differences with confidence intervals for each subgroup to show subgroup-specific effects and to graphically evaluate the uniformity of treatment effects.

With an estimated 20% dropout, we will be able to detect the following effect sizes for Main Effect 1 (total n = 324) in various subgroups by sample representation (n): (1) effect size = 0.40 for 60% representation (n = 194); (2) effect size = 0.44 for 50% representation (n = 162); (3) effect size = 0.50 for 40% representation (n = 130). For Main Effect 2 (total n = 194), we will be able to detect: (1) effect size = 0.52 for 60% representation (n = 116); (2) effect size = 0.58 for 50% representation (n = 96); (3) effect size = 0.64 for 40% representation (n = 78). Subgroup interactions will be tested formally; with the total anticipated sample size of 324 completing the 24-week intervention, we will have 80% power to detect subgroup differences in treatment effect sizes of approximately 0.65 and higher for Main Effect 1, and approximately 0.8 and higher for Main Effect 2.
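
These detectable effect sizes follow from standard two-sample power formulas; the sketch below reproduces them (approximately, given rounding of subgroup sizes) with a conventional power routine.

```python
# Minimal sketch: detectable standardized effect sizes at 80% power, two-sided alpha = 0.05,
# for the listed subgroup sample sizes (split evenly between two groups).
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
scenarios = {
    "Main Effect 1 (total n = 324)": [194, 162, 130],   # 60/50/40% representation
    "Main Effect 2 (total n = 194)": [116, 96, 78],     # 60/50/40% representation
}
for label, ns in scenarios.items():
    for n_sub in ns:
        d = solver.solve_power(nobs1=n_sub / 2, ratio=1.0, power=0.80, alpha=0.05)
        print(f"{label}: subgroup n = {n_sub} -> detectable d ≈ {d:.2f}")
```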

In additional exploratory analyses, we will use latent class/profile analysis to model heterogeneity of outcome response patterns, assuming categorical latent variables (latent groups) for response. Latent class/profile analysis has been proposed as an alternative to subgroup analysis in clinical trials. As an exploratory analysis, it uses observed data (e.g., clinical characteristics, randomized treatments) to identify latent group classifications that may suggest individual characteristics related to greater responses to treatment [ 205 ].
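
As an exploratory sketch (synthetic data; a Gaussian mixture model serves here as a stand-in for latent profile analysis, with BIC used to choose the number of latent classes):

```python
# Minimal sketch of a latent profile-style analysis over response patterns.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
n = 300
feats = pd.DataFrame({
    "scared_base": rng.normal(40, 10, n),     # hypothetical observed characteristics
    "scared_12wk": rng.normal(33, 10, n),
    "scared_24wk": rng.normal(28, 10, n),
})

# Fit 1-5 class mixtures and select the number of latent classes by BIC
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(feats) for k in range(1, 6)}
best_k = min(fits, key=lambda k: fits[k].bic(feats))
classes = fits[best_k].predict(feats)
print("selected classes:", best_k, "; class sizes:", np.bincount(classes))
```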

In addition to assessing the effects of telehealth delivery of treatment on outcomes, we will assess whether the percentage of treatment sessions conducted via telehealth moderates treatment outcomes. This test will tell us whether discontinuities in telehealth use over the course of the study, or within individual patients, have influenced our findings. We will also assess whether session attendance and trial completion differ by degree of participation in telehealth vs in-person intervention.

Plan to report pre-specified analyses

In our primary outcome paper, we will include results of our pre-specified subgroup analyses. We will name and report all pre-specified subgroups and their rationale for inclusion, the number of post-hoc HTE analyses, and outcomes analyzed. Reporting will include graphical forest plots, with estimated treatment group differences and confidence intervals for all subgroups, as well as tests of treatment-subgroup interactions.

Mediation analyses

We will perform mediation analyses on the Stage 1 main intervention effects and will follow the literature on statistical approaches for mediation analysis of SMART design adaptive interventions. We will test whether the common and specific factors in our causal pathway model significantly mediate the associations of treatment with patient outcomes. Let X be the assigned treatment, Y the 24-week outcome, and M the proposed mediator (Fig. 6). We will assess mediation using 3 regression equations: (1) Y = c1X + e1, assessing the association of treatment with the 24-week outcome; (2) M = aX + e2, assessing the association of treatment with the putative mediator; and (3) Y = c2X + bM + e3, assessing the association of treatment with the 24-week outcome when adjusting for the mediator (c2 is termed the "direct effect" of X on Y). Age and sex will be included as covariates in all three equations. We will test whether the estimator of the indirect effect (a × b) differs significantly from zero using bias-corrected bootstrapped confidence intervals on the estimator. A significant mediating effect suggests that the association of treatment (X) with trial outcomes (Y) in regression (1) is in part explained by treatment effects on the mediating outcome (M) captured in regressions (2) and (3).

Fig. 6 Tests of mediation. X = treatment; M = proposed mediator; Y = 24-week outcomes. a = coefficient relating X to M (M = aX); b = coefficient relating M to Y (Y = bM). Total effect: Y = cX, where c = c′ + ab. Effect of X on M: M = aX. Direct effect of X on Y: Y = c′X + bM. Mediation (indirect) effect: a × b. The algebraic sign of a × b × c indicates the type of mediation (complementary or suppressive)

We will then assess whether mediation is partial or complete, and whether it is complementary (i.e., the direct and mediated effects act on the outcome Y in the same direction and therefore have the same algebraic sign) or suppressive (the direct and mediated effects act in opposing directions on the outcome Y and therefore have opposite signs). We will assess the significance of the direct effect (coefficient c2 in eq. (3)): statistical significance of this term will signify that mediation by M is partial [ 206 , 207 ]. We will also assess the algebraic sign of the product a × b × c to determine whether mediation is complementary or suppressive [ 206 , 207 ].
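
For illustration, the three regressions and the bootstrap test of a × b might be implemented as in the sketch below (synthetic data; a simple percentile bootstrap is shown, whereas the protocol specifies bias-corrected intervals; X is coded 0/1).

```python
# Minimal sketch of the mediation regressions (1)-(3), a percentile bootstrap CI
# for the indirect effect a*b, and the sign check for complementary vs suppressive mediation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 300
df = pd.DataFrame({
    "X": rng.integers(0, 2, n),                 # treatment indicator (0/1)
    "age": rng.integers(12, 18, n),
    "sex": rng.integers(0, 2, n),
})
df["M"] = 0.5 * df["X"] + rng.normal(0, 1, n)                      # synthetic mediator
df["Y"] = 0.4 * df["X"] + 0.6 * df["M"] + rng.normal(0, 1, n)      # synthetic 24-week outcome

def indirect(d):
    a = smf.ols("M ~ X + age + sex", data=d).fit().params["X"]      # eq. (2)
    b = smf.ols("Y ~ X + M + age + sex", data=d).fit().params["M"]  # eq. (3)
    return a * b

boot = np.array([indirect(df.iloc[rng.integers(0, n, n)]) for _ in range(1000)])
lo, hi = np.percentile(boot, [2.5, 97.5])

c_total = smf.ols("Y ~ X + age + sex", data=df).fit().params["X"]        # eq. (1): total effect c
c_direct = smf.ols("Y ~ X + M + age + sex", data=df).fit().params["X"]   # eq. (3): direct effect c2
ab = indirect(df)
print(f"indirect a*b = {ab:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
print(f"total c = {c_total:.3f}, direct c2 = {c_direct:.3f}")
print("complementary" if ab * c_total > 0 else "suppressive")             # sign of a*b*c
```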

Finally, a complex mediation model that jointly estimates multiple mediating outcomes and incorporates the correlations among mediators and joint effects of mediators on trial outcomes will ultimately be used to test and understand the mechanisms of the CBT and medication interventions on anxiety outcomes. Moderator effects of the Contextualizing Factors will be tested as described under “Heterogeneity of Treatment Effects”.

Data safety monitoring board (DSMB)

Our data safety-monitoring plan is designed to ensure the safety of participants, the validity of the data collected, and the appropriate termination of the study in the event that significant benefits or risks are uncovered, or if it appears that the trial cannot be concluded successfully. We will convene a Data Safety Monitoring Board (DSMB) that is fully independent of the sponsor and competing interests. The DSMB will have the authority to recommend termination of the trial to the principal investigator and funding agency if it judges that a specific action is not in the best interest of study participants or that the conduct of study processes is unlikely to lead to sound scientific results. The board will also review subject burden levels associated with data collection tasks. The board will meet every six months to review study progress and adverse events. The PI and study staff will provide to the DSMB all patient data collection materials as well as a semi-annual summary report on patient outcome tracking. They also will immediately report to the DSMB any adverse patient or caregiver outcomes. The DSMB will be authorized to request any additional information or study materials it deems appropriate. All participants will be provided contact information for the DSMB to register complaints or other problems.

The DSMB will comprise three health care researchers, who are voting members, and one non-voting caregiver. They are:

  • Daniel Pine MD, Chief, Section on Development and Affective Neuroscience in the National Institute of Mental Health Intramural Research Program. He is an expert in the neurobiology and treatment of pediatric anxiety and mood disorders. He is chair of the DSMB.
  • Armando Andres Piña, Ph.D., Associate Professor in the Department of Psychology at Arizona State University. He is an expert in real-world psychosocial interventions for pediatric anxiety disorders.
  • Ravinder Anand, Ph.D., Vice President and Biostatistician, The Emmes Company, LLC, Rockville, MD. He is an expert clinical trials statistician and serves as the clinical trials statistician for the Pediatric Trials Network.
  • Christine Norene Smith: non-voting community stakeholder

The charter for the DSMB is available from either the study PI or the DSMB chair upon request. No independent audits for the study are planned.

This study protocol was reviewed and approved under the SMART Institutional Review Board (IRB) mechanism, with Children’s Hospital Los Angeles serving as the designated lead IRB. Parents will provide written informed consent for their child’s participation, and the child will provide written informed assent (see Supplementary Material for the consent and assent forms for the clinical trial and ancillary study); consent and assent will be obtained by trained study personnel. Post-trial care will be determined through the routine clinical decision-making of the patient and treating clinician and will be paid for through the patient’s usual insurance mechanisms. No compensation will be provided for those who suffer harm from trial participation. Any changes to the study protocol will need to be reviewed and approved by PCORI and the IRB. Changes will be documented within ClinicalTrials.gov and communicated to all relevant parties (e.g., investigators, DSMB, and trial participants).

Protection of privacy

We will minimize risks for confidentiality breaches in several ways. Research staff members who collect study data will do so only from one of our 9 clinical facilities or research sites. Study personnel will be required to sign confidentiality agreements and will be trained in the protection of human subjects. No data containing participant identifiers will be transferred outside CHLA or USC. In lieu of participant names or medical record numbers, participants will be identified only by a random subject ID number on both the raw and electronic study data. Crosswalks that link a participant’s name and medical record number to that person’s study number will be kept by the Principal Investigator in a locked file and will be destroyed at the end of the study. Medical service use data and participant names and contact information will be kept on a secured, non-networked computer (i.e., it will have no Internet access), which will be stored in a locked, secured office. The computer will be password-protected; only the study PI and system analyst will be permitted to access it.

Dissemination and implementation (DI)

The overarching aim of our efforts will be to disseminate and promote the appropriate uptake of our research findings and to facilitate the use of high-quality, relevant evidence by patients, caregivers, clinicians, insurers, and policy-makers in reaching better-informed decisions. The following provides examples of our DI work, which will be intensive and intentional, involving the active process of identifying target audiences and tailoring communication strategies to both a) increase awareness and understanding of the research findings, and b) motivate their appropriate use in policy, practice, and informing individual patient choices.

Project-specific DI repository and website

Our website will host: a) study materials and protocols (e.g., recruitment, retention, treatment, assessment); b) newsletters for study participants, participating sites, collaborating agencies, and other interested stakeholders; c) meeting agendas and minutes from the Parent and Youth Leadership Advisory Committees; d) study findings (full peer-reviewed reports plus brief summaries); e) lessons learned while conducting the research; f) brief videos and testimonials from investigators and Advisory Council members; and g) other DI products. This website will also serve as a clearinghouse of information for our study participants, who will also receive frequent study updates in English and Spanish.

Peer-reviewed journals and professional conferences

Study methods, findings, and supporting information will be published in highly-visible peer-reviewed journals, augmented by presentations at key professional conferences. Our Advisory Council members will be invited to co-author manuscripts and co-present.

Partnering with other key stakeholders

We will partner with other key organizations to disseminate study findings, such as the Anxiety and Depression Association of America (ADAA), which serves both as a professional membership organization and as a provider of evidence-based information to the public; last year the ADAA had over 38 million visitors to its website. Others include the NIMH Outreach Partners, a nationwide program charged with disseminating research findings and educational materials to the public, including to populations that experience mental health disparities. Venues will include weekly social media updates (#MentalHealthMondays) and monthly newsletters. These will link to the study findings and final report posted on the PCORI website, per standard PCORI practices.

Multimedia presentations and displays

These can be effective methods for sharing research results. We will build on existing relationships and use novel and innovative approaches to DI. For example, we will partner with Hollywood Health and Society (HHS), a program of USC’s Annenberg School of Communication that provides entertainment professionals with accurate and timely information for storylines on health. The SC CTSI currently works with HHS and television writers and producers to include storylines about clinical trial participation, which has resulted in an award-winning storyline on Grey’s Anatomy and storylines for Life Sentence, The Fosters, and Empire. With HHS, the SC CTSI also partnered with Life Noggin to develop a brief cartoon video that describes the importance of clinical trial participation ( https://youtu.be/BYZusIKpHIA ); the YouTube video received over 150,000 views in the first two days. We will use these same approaches to disseminate information about pediatric anxiety, the importance of evidence-based treatments, and study findings. In addition, we will partner with WeRise LA, which uses art to empower youth around mental health and wellness. WeRise LA hosts an annual youth-driven art exhibit that encourages dialogue to reduce stigma surrounding mental health treatment.

Our community/our health Los Angeles (OC/OH-LA)

This is an approach we have used previously to facilitate dialogue between researchers and members of the lay community about science and the implications of scientific findings. In addition, a consortium of CTSAs across the country has worked together to coordinate these events using simulcast technology. We will conduct at least one local and/or national OC/OH event each year that focuses on this research.

Uptake and adoption of evidence-based findings

We will draw upon the DI expertise of our team, collaborating partners, and stakeholder advisors to ensure early engagement and ongoing relevance of our research through continuous ties to key stakeholder groups likely to be interested and available to support our DI efforts. For example, the depression care initiative of our partner site, Kaiser Permanente Southern California (KPSC), has selected several high-priority expansion populations, including adolescents with anxiety. Thus, the organization has already committed to creating an anxiety treatment program for adolescents 12 years and older. Our study offers a perfect opportunity to provide needed training, ongoing technical assistance, and oversight for the uptake and adoption of adolescent anxiety treatment, at KPSC and throughout the Kaiser healthcare system. We are confident that the program established through this study will be institutionalized within KPSC, if findings warrant, and can be adapted to other local healthcare systems, including AltaMed, Los Angeles County, and DHS.

Authorship eligibility guidelines

We will follow the guidelines of the International Committee of Medical Journal Editors for authorship eligibility [ 208 ]. All 4 of the following criteria must be met to be considered an author: (1) Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; (2) Drafting the work or revising it critically for important intellectual content; (3) Final approval of the version to be published; (4) Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All contributors who do not meet these 4 criteria for authorship will be listed in the ‘Acknowledgements’ section of the paper.

Acknowledgements

We thank Titilope Towolawi and Tawny Brown for their exceptional administrative support for this project. We are also grateful for key support from: Miriam Stone and Amy Evans at LifeStance Health California; Tim Eby-McKenzie and Jana Lord at Hathaway-Sycamores Children and Family Services; Ron Brown and Corina Casco at Children’s Bureau of Southern California; Rajan Sonik at AltaMed; and Mona Patel, Christina Yousif, Robert Jacobs, and Larry Yin at Children’s Hospital Los Angeles. Finally, we thank Dr. Philip C. Kendall for his consultation on adapting Coping Cat to our study design and for making training materials available for our use.

Abbreviations

Authors’ contributions

(Affiliations are listed on the title page.) Specific roles within the project are listed below. All authors have read and approved the final manuscript. BSP: Principal Investigator & Director of Psychopharmacology Management. Conceived of the study, contributed to developing the study design, and drafted and edited this manuscript. AEW: Director of CBT Management. Conceived of the study, contributed to developing the study design, and drafted and edited this manuscript. JRW: Co-Director for CBT Management. Conceived of the study, contributed to developing the study design, and drafted and edited this manuscript. WJM: Director of Biostatistics. Conceived of the study, contributed to developing the study design, and drafted and edited this manuscript. MDK: Director of Stakeholder & Community Outreach. Conceived of the study, contributed to developing the study design, and drafted and edited this manuscript. RLF: Co-Director of Psychopharmacology Management. Conceived of the study, contributed to developing the study design, and drafted and edited this manuscript. BSM: Director of Implementation & Dissemination. Conceived of the study, contributed to developing the study design, and drafted and edited this manuscript. RB: Director of Database Management. Contributed to developing the study design; helped to draft and edit this manuscript. SP: Director of Clinical Trials Methodology. Contributed to developing the study design; helped to draft and edit this manuscript. GT: Director of Safety. Conceived of the study, contributed to developing the study design, and drafted and edited this manuscript. CK: Kaiser Permanente Site Director. Contributed to developing the study design; helped to draft and edit this manuscript. CA: Children’s Bureau of Southern California Site Director. Contributed to developing the study design; helped to draft and edit this manuscript. CS: Los Angeles County General Hospital Site Director. Contributed to developing the study design; helped to draft and edit this manuscript. MP: UCEDD Site Director. Contributed to developing the study design; helped to draft and edit this manuscript. BKA: Children’s Hospital Los Angeles Care Network Site Director. Contributed to developing the study design; helped to draft and edit this manuscript. CMA: Project Manager. Contributed to developing the study design; helped to draft and edit this manuscript. MP: Hathaway-Sycamores Child and Family Services Site Director. Contributed to developing the study design; helped to draft and edit this manuscript. SNM: AltaMed Site Director. Contributed to developing the study design; helped to draft and edit this manuscript. BOH: UCEDD Site Co-Director. Contributed to developing the study design; helped to draft and edit this manuscript. SHC: Lifestance Health Site Director. Contributed to developing the study design; helped to draft and edit this manuscript. RW: Co-Director for CBT Management. Conceived of the study, contributed to developing the study design, and drafted and edited this manuscript.

This work was supported through a Patient-Centered Outcomes Research Institute (PCORI) Project Program Award (PEDS-2019C1–16008) and by funding from Children’s Hospital Los Angeles. Disclaimer: All statements in this report, including its findings and conclusions, are solely those of the authors and do not necessarily represent the views of PCORI, its Board of Governors, or Methodology Committee. PCORI conducted a rigorous peer review of this study protocol.

Availability of data and materials

Declarations

This study protocol was reviewed and approved under the SMART Institutional Review Board (IRB) mechanism, with Children’s Hospital Los Angeles serving as the designated lead IRB. Parents will provide written informed consent for their child’s participation, and the child will provide written informed assent for both the clinical trial and the ancillary study; consent and assent will be obtained by trained study personnel.

Not applicable.

Dr. Peterson has received investigator-initiated support from Eli Lilly and Pfizer and has served as a paid consultant to Shire Human Genetic Therapies. He is also President of Evolve Psychiatry Professional Corporation and a paid advisor to Evolve Residential Treatment Centers. Dr. Findling receives or has received research support, acted as a consultant for, and/or has received honoraria from Acadia, Adamas, Aevi, Afecta, Akili, Alkermes, Allergan, American Academy of Child & Adolescent Psychiatry, American Psychiatric Press, Arbor, Axsome, Daiichi-Sankyo, Emelex, Gedeon Richter, Genentech, Idorsia, Intra-Cellular Therapies, Kempharm, Luminopia, Lundbeck, MedAvante-ProPhase, Merck, MJH Life Sciences, NIH, Neurim, Otsuka, PaxMedica, PCORI, Pfizer, Physicians Postgraduate Press, Q BioMed, Receptor Life Sciences, Roche, Sage, Signant Health, Sunovion, Supernus Pharmaceuticals, Syneos, Syneurx, Takeda, Teva, Tris, and Validus. All other authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
