Randomized Controlled Trials: Design and Implementation for Community-Based Psychosocial Interventions

Phyllis Solomon, Mary M. Cavanaugh, and Jeffrey Draine

Print publication date: 2009

Print ISBN-13: 9780195333190

Published to Oxford Scholarship Online: May 2009

DOI: 10.1093/acprof:oso/9780195333190.001.0001


3 Planning the Randomized Controlled Trial

DOI: 10.1093/acprof:oso/9780195333190.003.0003

Abstract and Keywords

Chapter 3 discusses the requisite research necessary for designing and conducting a full-scale RCT. Preliminary efforts include assessing and negotiating with possible settings, assessing the likelihood of agency/provider cooperation and level of engagement, gauging the capability of providers in executing the interventions, determining the availability of potential study participants, and tracking the flow of participants in and out of a preliminary (pilot) research study. Other topics include developing recruitment procedures and ways to plan for the engagement and retention of study participants, with a particular emphasis on culturally competent recruitment. Consideration is given to the identification, modification, and development of intervention manuals, workbooks, and educational curricula, as well as methods for fidelity assessment. Particular attention is focused on planning for the design of the RCT, as well as on the importance of ensuring the feasibility of the RCT through essential pilot work and pipeline assessment.

Keywords:   intervention manuals, fidelity assessment, pilot studies, pipeline assessment, culturally competent recruitment

With the recent focus on evidence-based practice (EBP), it may seem to some that the only valued research method is the randomized controlled (or clinical) trial (RCT). This is far from true, even among the most ardent randomized trial intervention researchers (count these authors among those who do not believe that RCTs are the only valued research). The strongest RCTs build upon research conducted both by RCT investigators and other investigators in their field. Initial research defines a need, a social problem, or an emerging gap in clinical services; this research begins to focus interest on an area of intervention. In addition, such research intersects with theoretical work that builds a conceptual foundation for what types of interventions may reliably lead to a change in behavior or social conditions, especially as the interventions apply to the identified problem. Development of a specific model of intervention is built upon this research.

As the research and conceptual literature develop and expand, core ideas about the intervention area emerge. This iterative process develops scientific and theoretical knowledge on which an RCT builds. The RCT provides one context for learning more about an area of social work practice. Furthermore, research offers an opportunity to examine your practice from multiple perspectives, not simply from the perspective of “does it work—or not.” Therefore, although this text focuses on RCTs, it presupposes that the RCT typically follows other preliminary research, and, in fact, rests upon the quality of that work.

This chapter discusses the requisite research necessary for designing and conducting a full-scale RCT. Preliminary efforts include assessing and negotiating with possible settings, assessing the likelihood of agency/provider cooperation and level of engagement, gauging the capability of providers in executing the interventions, determining the availability of potential study participants, and tracking the flow of participants in and out of a preliminary (pilot) research study. Other topics include developing recruitment procedures and ways to plan for the engagement and retention of study participants, with a particular emphasis on culturally competent procedures. Consideration is given to the identification, modification, and development of intervention manuals, workbooks, and educational curricula. Particular attention is focused on planning for the design of the RCT, as well as on the importance of ensuring the feasibility of the RCT through essential pilot work.

Determining Whether to Undertake a Randomized Controlled Trial

The question of whether to conduct an RCT contains several elements. The intervention must be adequately developed, so that researchers can discern clearly into what type of intervention participants will be randomized and who will be randomized. Community-based psychosocial interventions are set within a service context. Defining what is being addressed by the intervention, for whom, and under what context requires conceptual clarity and theoretical insight, based on sound, existing empirical research. Conducting a community-based psychosocial RCT is also a question of finding a policy moment when a field of practice may be ripe for this level of development.

An example of the intersection of policy and research is the introduction of supported employment and supported housing. Several generations of programs for people with psychiatric disabilities focused on complex, laborious interventions that were specifically designed to prepare individuals for work and housing. Recently, ground-breaking randomized trials in employment (Drake, McHugo, Becker, et al., 1996; Drake, McHugo, Bebout, et al., 1999) and housing (Goldfinger, Schutt, Tolomiczenko, et al., 1999; Tsemberis, Gulcur, & Nakae, 2004) have demonstrated that having housing or employment goals as the primary focus may be more effective than a long-term period of preparation for those goals. These RCTs broke new ground by providing rigorous empirical evidence that individuals with mental illness were better served by these seemingly radical intervention approaches. These trials moved the field toward new questions and challenged the perceived wisdom of existing policies. Unresolved questions still exist regarding these interventions; therefore, research is continuing through RCTs and other types of research. Widespread implementation of these interventions has still not occurred. However, RCTs were instrumental in moving these programs forward, arguably achieving a higher quality of life for those with a psychiatric disability.

The driving force behind the plausibility of an RCT is a compelling research question and a substantive experimental intervention/condition. The experimental condition should respond to a relevant policy and/or service design question. For example, what may be more effective than the status quo, and in what ways? For research to be compelling, the experimental intervention needs to be adequately developed to the point at which fidelity can be assessed by manualized standards. The argument for the experimental intervention must be set in comparison to treatment-as-usual (TAU), in a way that is grounded in equipoise (a stance in which one genuinely does not know the answer to the research question) and that has the potential to move a field forward, through improving the lives of vulnerable populations or in solving difficult service delivery problems. If a theoretical argument cannot be made for the experimental intervention demonstrating a high probability for effectiveness, then there is no compelling reason for an RCT.

Selecting a Site

Following from a compelling research question, a service context needs to offer a reasonable and ethical opportunity for the trial. Sometimes a researcher starts a planning process for an RCT by collaborating with a setting in which he has a connection, so that the “selection” of a setting may not appear to be an issue. However, even in these cases, a variety of unknowns must be assessed. The investigator needs to assess the motivation of the site for instituting the intervention, for assisting in subject recruitment, and for maintaining clients in the RCT. The onus of explaining the realities of conducting an RCT falls on the researcher, who then makes a determination of the willingness and ability of the site to undertake the effort. It is better to decide not to go forward with an RCT in a particular setting than to have to withdraw later, after numerous resources have been invested. At the same time, the investigator must be open-minded and collaborative with agency personnel in designing the RCT, as they understand best how their setting operates.

The research setting must have a pipeline of potential clients available (and willing) for services. If the setting’s pipeline does not yield sufficient clients to be randomized adequately, questions must be explored as to how that system will adopt an additional intervention with a limited client population.

In addition, the setting must be committed and prepared to support the research intervention, including any medical and legal coverage that may be necessitated by the particular intervention. Providers need to be capable of delivering the interventions. If they are not qualified to execute the interventions, the researcher will have to consider implications regarding hiring of qualified staff. The setting has to be prepared to allocate appropriate space for the interventions; in some instances, separate locations are necessary to ensure no interaction between providers and recipients in the various RCT conditions. Furthermore, investigators must make sure that agencies understand the financial commitment involved in the experimental intervention. Will there be sufficient resources to support the intervention, not only in terms of direct cost, but also considering administrative costs, space, and clinical supervision? If not, where else might one seek financing for the RCT? Are local or state governmental entities willing to invest in the RCT? If the intervention proves to be effective, will the agency or organization have the capacity to sustain the intervention over time (e.g., financially)? In one study by the authors, a consumer case management program was continued beyond the study period because the host agency could mount an effort to build the administrative infrastructure to bill for the intervention service. Other agencies may have the capacity to tap financial resources, as well as have the political capital to assure the continuity of an effective service beyond the RCT. Some funding sources may require such an assessment as part of the criteria for financially supporting the proposed RCT. This requires carefully considering the cost involved in delivering the intervention.

The decision to move forward with designing an RCT is a complex process involving concerns about the state of science, policy, and services in a particular area of expertise, as well as pragmatic and feasibility issues. Resolving these matters will further shape the intervention in interaction with the service context. Before moving forward with the design of the RCT, one must have the necessary commitments from those who have a realistic understanding of what is involved in undertaking an RCT.

Negotiating with Settings

Since community-based psychosocial RCTs frequently occur within an agency context, which has responsibilities to funding sources, clients, and the public, negotiation is required at a number of administrative levels. One of the first considerations in negotiating with research settings is determining whether the researcher begins in a service setting with the top administrators or closer to the front-line workers. Frequently, the location of the researcher’s connections determines where to start. For example, if the researcher knows the Commissioner of the Department of Human Services well enough to ask her out for lunch, then that connection should certainly be kept in mind. If a researcher begins with such a top system administrator, the weight of this administrator’s approval for the RCT can be effective in gaining cooperation throughout the entire service system, as this endorsement is an implied directive to cooperate with the researcher (Solomon & Draine, 2006).

However, if the researcher is not careful, this “insider-advantage” can be a double-edged sword that can cause difficulty further down the planning and implementation path. Many service systems have levels of expertise and authority. These levels of authority are both formal and informal. Often, the informal lines of authority are the most helpful—and the most formidable. Although the executive is empowered to hire, fire, begin, and end programs, there is also the authority of the seasoned veteran social worker at the front lines, who can tell you “how things are really done” and, even more to the point, whether the RCT has a chance of being implemented at all. What one is most likely to be told are the numerous reasons why the RCT is not feasible, and some of those reasons may well be quite valid—and ones that no one “higher up” was in a position to discern.

In the authors’ experience, a far more effective strategy has been to begin the process from the “bottom-up,” gaining and building support for the RCT idea up through the ranks of an organization, while respecting the authority of administrators. In actuality, the most feasible approach combines “top-down” and “bottom-up” strategies. Respect for administrators requires that you seek their permission to become involved with their staff. The best first step is not to go in with “I want to do a randomized trial. Can I do it here?” A more effective first step is, “I have some ideas about studying the most effective ways to serve your client population. But first, I’d like to talk with you about my idea, and then spend some time with your service providers and program directors and find out more about how they do their job.” Of course, one can only accomplish this with integrity if one actually intends to include input from the front-line staff in developing the intervention. Any intervention must fit within the service context. In the process of exploring these topics with the agency staff, the researcher will gather essential information about which staff members are the most informative and cooperative, the way the services are delivered, the pipeline of clients, and the likely ways an intervention will (or will not) fit into the agency. Thus, designing the RCT and negotiating with the potential host setting is an iterative process that interacts with various service elements.

In working with any agency setting, the researcher must be honest about how the RCT will likely impact the agency. Researchers are often tempted to negotiate with a setting by claiming that “we will do all the work, and you won’t have to do anything. You’ll hardly know we’re here.” This is partially motivated by a sense of guilt for imposing on busy agency staff, who are perceived as already overburdened. The problem with this approach is twofold: (1) It is not accurate. The RCT will impact agency time and resources in ways a researcher is not in a position to understand, because the researcher is not aware of all the administrative and service demands of the organization. There will almost certainly be demands on the agency in terms of its internal requirements for record keeping, in medical and legal responsibility and liability, and staff resources. And (2) it is important to draw the agency in as a collaborative partner, rather than to promise that no burden will be placed on them. Taking a collaborative partner stance, a researcher will gain from the agency an investment in the success of the project and, hopefully, flexibility in making the changes and adjustments as the RCT goes forward that may be needed to assure research rigor, as well as clinically appropriate service provision. This negotiating process is made easier when the funding for the RCT is adequate to cover the experimental service. However, even in cases where relationships are not fostered with the promise of compensation, it is to the benefit of all parties involved to understand the demands that will be made. Therefore, buy-in and honesty about work demands are far more important than promises of little or no impact on day-to-day operations.

The importance of establishing clear expectations also applies to understanding the different roles of the researcher and the agency. The researcher is responsible for the science of the RCT. The agency is assuming medical and legal authority, as well as liability risk, for the intervention and to the clients, and thus agency staff has responsibility for the integrity of these services. Consequently, there may be varying expectations that can result in conflict. It is usually helpful to begin the negotiation process with a short concept paper that delineates the scope of the study, the importance of the study, what is being asked of the agency, and what the advantages are for the agency, specifically articulating the benefits in ways that best match the agency’s interests/mission. The concept paper should be brief (one or two pages) and in outline form. Eventually, a letter of agreement needs to be negotiated, so that all parties know what is expected of them in terms of both the intervention and the research being implemented.

Table 3.1. Principles for Working with Providers and Consumers in Designing and Conducting Intervention Research: “REAL SCORE”

  • Respect for providers and consumers

  • Establish credibility

  • Acknowledge strengths

  • Low burden

  • Shared ownership – reciprocity

  • Collaborative relationship

  • Offer incentives – be responsive and appreciative of providers

  • Recognize environmental constraints – be flexible

  • Ensure trust – be sure providers feel heard

A draft agreement documented by memos or e-mails will at least enable the researcher to proceed in designing the RCT. Principles of working with agencies to negotiate, design, and conduct the RCT form the acronym “REAL SCORE” (Table 3.1). If these principles are not adhered to, it is likely that negotiations will fall apart before or during the design of the RCT, or worse yet, in the conduct of the RCT.

Pilot Studies

Conducting pilot studies that are well-conceived with clear aims and objectives will lead to higher-quality RCTs (Lancaster, Dodd, & Williamson, 2004). Generally, external pilot studies are stand-alone pieces of research planned and carried out prior to the design of the RCT, as opposed to internal pilot studies, which are part of the primary RCT (Lancaster, et al., 2004, p. 307). Pilot studies enable the researcher to assess the worthiness, practicality, feasibility, and acceptability of the intervention, recruitment, retention, and data collection procedures. In addition, pilot studies may help to determine what the most appropriate outcome measures are. One of the main reasons for conducting a pilot study is to obtain initial data for calculating the primary/future study’s sample size. Although data for estimating sample size is the reason promoted by many researchers and funding sources for pilot work, Kraemer and colleagues (2006) caution against using the effect size from an inadequately powered pilot study to make decisions for a larger study. However, other researchers and statisticians believe that some data are better than no data. Analyses of data from pilot studies largely use descriptive statistics, as the sample size is usually too small for hypothesis testing (Lancaster, et al., 2004). If hypotheses are tested using pilot data, the results should be interpreted with caution. Generally, there is a need for some positive, empirical evidence from pilot work to warrant proceeding with a full-scale RCT. Pilot work is an essential component of specifying those elements of the experimental intervention necessary for developing manuals and fidelity assessments.
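
To make the sample-size use of pilot data concrete, below is a minimal sketch in Python (using the statsmodels library) of a power calculation for a two-arm trial, with the enrollment target inflated for expected attrition. The effect size, alpha, power, and attrition figures are hypothetical illustrations, and Kraemer and colleagues' caution applies: a small pilot's effect-size estimate is noisy, so it is prudent to repeat the calculation over a range of plausible values.

```python
# A minimal sketch of turning a pilot effect-size estimate into a
# sample-size calculation for a two-arm RCT. All values are hypothetical.
from statsmodels.stats.power import TTestIndPower

pilot_effect_size = 0.45   # Cohen's d estimated from pilot data (illustrative)
alpha = 0.05               # two-sided significance level
power = 0.80               # desired statistical power

n_per_arm = TTestIndPower().solve_power(
    effect_size=pilot_effect_size, alpha=alpha, power=power,
    alternative="two-sided")

# Inflate enrollment for expected attrition at follow-up (assumed 20% loss).
expected_retention = 0.80
n_to_enroll = n_per_arm / expected_retention

print(f"Analyzable participants needed per arm: {n_per_arm:.0f}")
print(f"Enrollment needed per arm, allowing for attrition: {n_to_enroll:.0f}")
```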

Pipeline Assessment of Sample Recruitment

Frequently, investigators of RCTs begin with optimistic estimates of the number of potentially eligible participants and the subsequent number who will likely voluntarily agree to consent. These optimistic estimates often do not consider the reduction in numbers that will come from the interaction of eligibility criteria and the operational details of the referral source. Thus, failure to achieve a proposed sample size is a commonly reported pitfall in the implementation of efficacy trials.

Inadequate sample sizes may also result from practitioners’ treatment preferences in making referrals and potential participants’ decisions to refuse to consent (McDonald, Knight, Campbell, et al., 2006; Rendell & Licht, 2007). McDonald and her associates (2006) found that efficacy trials that conducted a pilot study generally made changes in recruitment strategies, design, inclusion criteria, number of sites, and written trial material. However, these investigators were unable to determine what specific features of the trial resulted in successful recruitment. Another recent review of recruitment plans identified a few strategies that appeared promising. Those that are most relevant to effectiveness RCTs were monetary incentives and employing culturally responsive strategies in recruitment and retention (Watson & Torgerson, 2006).

Recruitment problems are not unique to efficacy trials. These problems are endemic to all research using primary data collection. It is therefore important to undertake certain activities prior to designing the RCT in order to address issues of sample availability and recruitment. First, it is essential to be familiar with the recruitment site and to determine where and how potential participants can be recruited. Boruch (1997) refers to this as “scouting research.” Spending time in the service environment and interacting with the providers, in order to understand exactly how the setting operates, is key to being able to design a successful recruitment process. This time on site provides an opportunity for the researcher to observe how clients enter services (that is, complete the intake process; wait for service appointments) and how programs actually function. Depending on the nature of the proposed RCT, this information determines where and how recruitment should be considered.

Specifically, undertaking a pipeline study that “directs attention to how, why, and when individuals may be included in the experiment or excluded, and to the number of individuals who may enter or exit the study at any given point” (Boruch, 1997, p. 88) will be helpful in determining whether the study can feasibly be conducted at the proposed site, whether other sites need to be considered, or whether the project should be abandoned. Once a site is assessed as feasible, the process of designing how to enroll eligible individuals into the trial begins. The most helpful pipeline studies include qualitative observations of the processing of clients into the service program (Boruch, 1997). In addition to spending time observing at the recruitment site, it is useful to conduct initial pilot work, to try to recruit a small number of the target population to determine how many of the potential target population actually meet the study eligibility criteria and how many of those who are eligible are actually willing to enroll in the RCT. Questioning those who refuse as to why they are unwilling to participate offers important information in designing the recruitment process, intervention, and research protocols, as well as site selection. Pilot work can provide some estimate of how large a sample pool is available and willing to participate in the study. Furthermore, it will produce estimates of how much time may be needed to recruit the RCT sample.

What may seem like a large pool from which to draw a sample may in reality not be very large at all in terms of numbers of individuals who are both eligible and willing to participate in the research. An excellent example of delineating the size of the target population and the final number of enrollees for an RCT is the human immunodeficiency virus (HIV)/sexually transmitted infections (STI) prevention trial for African American and Latino heterosexual couples undertaken by a team from the Social Intervention Group at Columbia University School of Social Work. For this study, the investigators screened 2,416 women and found only 16% (N = 388) who were eligible, and just over half of eligible women enrolled with their male partners (N = 217) (Witte, El-Bassel, Gilbert, et al., 2004). To recruit such a population is not an easy task, especially given the complexity of the issue being studied and the recruitment of dyads. Part of Witte and colleagues’ success can be linked to the research team having worked in the host setting for a number of years and therefore being familiar with the operations of the clinic and its staff. But more to their credit, they employed carefully constructed culturally competent recruitment strategies developed from their initial pilot work.
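
The arithmetic behind such a pipeline funnel is simple but sobering, as the following sketch shows using the rates reported by Witte and colleagues; the planning target of 250 couples is a hypothetical illustration.

```python
# A minimal sketch of pipeline (funnel) arithmetic, using the screening
# figures reported by Witte et al. (2004); the target of 250 is hypothetical.
screened = 2416
eligible = 388                       # ~16% of those screened met criteria
enrolled = 217                       # just over half of eligible women enrolled

eligibility_rate = eligible / screened
enrollment_rate = enrolled / eligible
overall_yield = eligibility_rate * enrollment_rate   # ~9% of screening contacts

target_enrollment = 250              # hypothetical sample-size requirement
required_screening = target_enrollment / overall_yield

print(f"Yield per screening contact: {overall_yield:.1%}")
print(f"Screening contacts needed for N = {target_enrollment}: {required_screening:.0f}")
```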

Developing Culturally Competent Recruitment Strategies

Investigators often justify the use of homogeneous research samples in RCTs to reduce possible confounds and for practical reasons, such as lack of literacy or English language proficiency, as well as the complexity of including individuals with varying cultural backgrounds. However, for community-based psychosocial RCTs, minority populations are often the target populations of interest, or certainly a major portion of them. Given the characteristics of community-based RCT samples, there is a need to conduct pilot work to ensure that the final sample in the larger, future RCT is neither biased nor unrepresentative of the target populations.

Four primary recruitment barriers to minority participation in RCTs have been documented: (1) Individual barriers—that is, believing study procedures are invasive, or being fearful of research; (2) research barriers, including being aware of past research abuses; (3) sociocultural barriers, including racial and ethnic discrimination, and suspicions of the intentions of health care and other systems where research is conducted; and (4) economic barriers, including the ability to access health care or lacking available funds for transportation to/from the research site (Witte et al., 2004). Limited formal evaluations of recruitment strategies have resulted in no empirically supported approaches (Oakley, Wiggins, Turner, et al., 2003). However, based on a recent review of minority RCT recruitment procedures, four strategic targets have been suggested (Swanson & Ward, 1995 as cited in Witte et al., 2004). The Columbia Social Work investigators, mentioned previously, incorporated these suggestions in their recruitment protocol for their RCT. Their approach is an excellent example of a thoughtfully crafted culturally competent recruitment process that resulted in successfully achieving an adequate sample. The development of their recruitment strategy began with qualitative pilot work that included input from individuals who represented potential participants (see Case Example below). This case example has a number of suggested approaches that are relevant to many RCTs that may be conducted by social workers.

Pilot Testing for Retention of Participants

When planning an RCT, investigators commonly pilot recruitment strategies, but frequently neglect piloting for retention of participants (that is, the continuing involvement of participants through the duration of the study; Davis, Broome, & Cox, 2002). In the planning stages of an RCT, procedures for retaining participants in both the service intervention and the outcome data collection need to be developed. Attrition is frequently a challenge in RCTs, as outcomes are collected during the intervention, at termination, and at the various follow-up points. RCTs conducted by social workers are further complicated by the nature of the target populations, who tend to have unstable housing arrangements and/or (for a myriad of complex reasons) involvement in activities that may result in not wanting to be located.

Pilot work regarding the intervention is crucial, involving feedback on the receptivity to the experimental and control intervention. Clients liking the intervention, seeing it as beneficial, and believing that it is the most desirable service are all essential ingredients to treatment retention (Good & Schuler, 1997). Using pilot testing to improve control conditions has been shown to result in higher retention (Davis, Broome, & Cox, 2002). Also, involving host site providers in the intervention development assists in staff buy-in, which may translate into their encouraging participants to stay involved in the intervention. To prevent biased attrition, emphasis must be given to research tracking strategies employed to retain participants in the data collection efforts (to be discussed in Chapter 6). Unfortunately, what is known regarding recruitment and retention strategies of community-based, in-person studies cannot easily be translated to the new venue of online RCTs (Bull, Lloyd, Rietmeijer, & McFarlane, 2004). Consequently, pilot work in this area is even more imperative.

Defining, Identifying, and Developing Community-based Psychosocial Intervention Manuals

Definition and Use of Manuals

The expectation with RCTs is that the interventions will be delivered in a standardized manner (that is, essentially all providers of the interventions will be engaging in the same processes and practices with each of the subjects). The method for achieving this uniformity and repeatability of the intervention in efficacy RCTs is through the treatment manual. A treatment manual details specifically the experimental treatment and provides careful guidelines for treatment implementation (Carroll & Nuro, 1997; Carroll & Rounsaville, 2008). The treatment manual specifies the intervention, provides standards for evaluating adherence to the intervention, offers guidance for training, provides quality assurance and monitoring standards, facilitates replication, and stimulates dissemination and transfer of effective interventions (Carroll & Rounsaville, 2008). Treatment manuals have become a virtual requirement of all efficacy trials, as they are the means to the operationalization of the independent variable. They are increasingly expected in effectiveness RCTs as well. However, in some instances, the community-based psychosocial interventions may entail services or programs as opposed to psychotherapies. A program manual delineates the core elements and structures of the program, as well as the various roles of the different providers.

Traditionally, treatment manuals describe a single program and often include brief literature reviews, general guidelines for establishing a therapeutic relationship (e.g., tips for working with groups), descriptions of specific techniques and content (sometimes in the form of a curriculum), suggestions for structurally sequencing activities and strategies for dealing with special problems, implementation issues, and termination (Fraser, 2003). Manuals outline specific details regarding the core intervention: the when and how of delivering the intervention, and the elements that are not part of the prescribed treatment intervention (Miklowitz & Hooley, 1998).

Program or practice manuals deal not only with practitioner behaviors, “but also structural aspects of a program (e.g., caseload size and staff qualifications), location of services (e.g., in community settings) and ‘behind the scenes’ activities (e.g., integration of treatment and rehabilitation).” Bond and colleagues (2000) assert that practice manuals are conceptualized at a more macro level. Many community-based psychosocial interventions are difficult to manualize because they occur in multiple settings, with a diversity of providers and recipients, and involve a range of activities that go beyond one-to-one psychotherapy or counseling (Bond, et al., 2000).

The literature contains guidance on the development of treatment manuals; however, the information is not geared to community-based psychosocial interventions. Adapting the work of Carroll and colleagues (Carroll & Nuro, 1997; Carroll & Rounsaville, 2008), the outline below provides a framework for a service or program manual. In some cases, the intervention may have an operation manual, as with an educational curriculum.

  1. Overview and description of the intervention—Service, program, curriculum, and rationale. Description of approach, theoretical rationale for the intervention; review of the empirical research underpinning and supporting the effectiveness of the intervention.

  2. Conception of the problem or condition. Forces or factors that led to the development of the condition; research and/or theory indicating factors or processes leading to change or improvement; what is/are hypothesized to be the agent(s) of change; conceptual framework for understanding the problem or condition; procedures for assessing the problem or condition, including any standardized measures.

  3. Defining the target population. Delineate characteristics and criteria for those for whom the intervention is (and is not) designed.

  4. Intervention—Service, program, and curriculum goals. Specification of primary goals of the interventions; procedures and strategies for determining specific goals for clients.

  5. Contrast to other approaches. Indications of how the service/program differs from other similar approaches; specification of approaches for the problem or condition that are most dissimilar to this approach.

  6. Defining and specifying the intervention. Specify unique and essential elements; specify recommended processes and strategies; which processes and strategies are prohibited and which may be harmful or counterproductive.

  7. Client–provider relationship. Define role of various providers; delineate importance of provider–client relationship to outcomes; specify strategies to be used in developing desired relationship; specify strategies to address weak relationship.

  8. Format/structure of the intervention. Specification of structural elements—frequency of meetings of providers, hours of operation of the service, staff-to-client ratios; delineate number, types, and qualifications of providers—individual, group, or mixed format(s); if group, whether open or closed format; length of intervention; frequency and intensity of contact(s); specify content and sequencing of content, as well as degree of flexibility of presentation; specify content for educational sessions, including a mix of didactic material and experiential exercises; delineate any relevant behavioral exercises and homework assignments; may include forms to guide the work of providers and clients and informational handouts. The information for this domain may vary greatly contingent on the nature of the intervention.

  9. Standards of care. Guidelines and procedures for assessing progress; strategies and procedures for responding to lack of progress or deterioration; procedures for resolving contradictions of progress and problems from perspectives of clients, families, and providers; assessing and responding to crises.

  10. Strategies for dealing with special problems. Guidelines for dealing with common issues of motivation; missed appointments; relapses; crises; suicidal threats and/or attempts; violent acts. This section will vary according to the nature of the intervention.

  11. Relationship with other formal and informal supports/resources. Relationship and role of natural supports; necessary adjunctive services and treatments; referral processes for other services, treatments, programs, etc.; procedures for integrating and monitoring other needed services; administrative arrangements with agency and governmental entities.

  12. Process for managing transitions and terminations. Criteria and guidelines for transitions to other services, treatments, programs; procedures for termination and referrals; guidelines for working with clients, natural supports, and formal supports and resources.

  13. Selection of providers. Specific educational, training credentials, and experience requirements for the various intervention provider positions.

  14. Training of providers. Goals of training; training components and approaches to training, including didactic and experiential; troubleshooting; booster sessions; training delineated by varying positions of providers.

  15. Supervision of providers. Recommendations for frequency, type, goals, and intensity of supervision of various providers; strategies and methods for assuring adherence; preventing and correcting service or program drift (that is, fidelity monitoring forms and approaches to assessing implementation of the intended intervention).

Criticisms of Treatment Manuals

Although the use of treatment manuals offers a number of advantages, including ensuring the standardization of treatment, a number of criticisms have been directed toward them. These criticisms include: (a) limited application to clinical practice with diversified populations who often have complex problems; (b) overemphasis on specific techniques as opposed to competency of positive relationship formation; (c) a focus on technique rather than theory; (d) restrictive use of clinical expertise and judgment; (e) reduction of provider competence due to focus on adherence to technique; (f) lack of applicability to diverse providers with varied training, experience, and expertise (Carroll & Rounsaville, 2008); and (g) designed for highly motivated and single-problem clients (Havik & VandenBos, 1996). Some of these concerns, such as a lack of applicability to settings and populations, are inherent in the design of these manuals, as they were developed for efficacy RCTs, as opposed to community-based psychosocial RCTs. Other criticisms are an advantage for community-based settings, such as being highly focused on technique. With providers who have limited training and/or experience, a well-structured blueprint provides clear directions in an area in which they may not otherwise have knowledge and/or skill. Well-designed manuals allow for flexibility, recognizing that clinical judgment regarding when clients can progress to the next stage is essential to good treatment.

Identifying Treatment Manuals

Given that community-based psychosocial RCTs take place in actual clinical settings or agencies, existing manuals, if appropriate to the specific intended intervention, will need to be modified in order to fit the environmental context as well as the diversity of the population to be served. Existing manuals are a good starting point, as they offer direction and a model as to how to proceed in developing one. Numerous manuals are available in the psychotherapy arena, particularly for cognitive behavioral interventions.

Treatment manuals can vary extensively. Some are detailed texts on a given approach, such as Assertive Outreach in Mental Health: A Manual for Practitioners (Burns & Firn, 2003); however, this work does not offer clear direction for implementing an intervention. A number of federal agencies, such as the National Institute on Drug Abuse (NIDA), the National Institute on Alcohol Abuse and Alcoholism (NIAAA), and the Substance Abuse and Mental Health Services Administration (SAMHSA), provide manuals and other types of material that functionally serve as manuals. For example, SAMHSA has issued draft implementation resource kits for six EBPs for adults with severe mental illness. At the beginning of each kit, it is noted that “an implementation resource kit is a set of materials (written documents, videotapes, PowerPoint presentations, and a website) that support implementation of a particular treatment practice.” The materials are written for each of the relevant stakeholder groups: consumers of mental health services, family members and other supporters, practitioners and supervisors, program leaders of mental health programs, and public mental health authorities. The resource kit materials are designed to address three stages of change: engaging and motivating for change (i.e., why do it), developing skills and supports to implement change (i.e., how to do it), and sustaining the change (i.e., how to maintain and extend the gains). These toolkits are directed to structured programs that require input from state and local mental health authorities to administratively and financially support the intervention programs. All of these psychosocial intervention kits are for programs on which a number of effectiveness RCTs have been conducted. They are a useful source for potential RCT interventions adapted for new or specialized populations/settings or for determining how a practice manual or resource kit is devised.

A literature search conducted for conceptualizing and developing the RCT will likely be a source for locating relevant manuals or comparable materials. Furthermore, a number of educational curricula may serve as a starting place for educational-type interventions. Many of these resources may be found by conducting relevant Web searches. Inquiring of experts in a specified topic area will likely produce relevant materials as well.

Adapting Existing Treatment Manuals

Researchers have noted the importance of utilizing ethnography in adapting existing treatment manuals (Wingood & DiClemente, 2008). Currently, a few models offer guidance for adapting evidence-based interventions (EBIs) to different settings and cultural groups. One such model is ADAPT-ITT, which has eight phases:

  1. Assessment. Conducting focus groups, elicitation interviews, or needs assessments.

  2. Decision. Reviewing interventions and deciding whether to adopt or adapt them.

  3. Adaptation. Using innovative pretesting methods.

  4. Production. Producing a draft of the adapted EBI.

  5. Topical experts. Obtaining their input.

  6. Integration. Integrating the content provided by topical experts.

  7. Training. Training those involved in testing the adapted intervention.

  8. Testing. Conducting pilot testing (Wingood & DiClemente, 2008).

Cavanaugh (2007) developed and tested a brief, targeted, psycho-educational intervention for the prevention of intimate partner violence, employing some of the aspects of Dialectical Behavior Therapy (DBT) for borderline personality disorder and utilizing specific exercises and handouts from Linehan’s (1993) DBT skills-training manual. In certain instances, the desired intervention may be a combination of more than one intervention manual. In a study in which the first author is currently involved, two validated interventions were combined: RESPECT, a program developed by the Centers for Disease Control (CDC) as an HIV prevention program that utilizes one-on-one counseling to reduce at-risk sexual behavior, tested in a multisite demonstration, and the Community-Based Outreach Model (CBOM), which was designed to reduce the risk of HIV and other blood-borne infections in drug users. Both are highly structured and manualized practices with demonstrated effectiveness.

Preventing AIDS Through Health (PATH) was developed by revising the above-mentioned interventions. PATH was designed to be delivered by case managers to adults with severe mental illness and substance abuse problems. To aid in the delivery of PATH, a set of cards placed on rings was constructed, along with an operational manual. The cards contained the content of the intervention and were written at a level of literacy appropriate to the client population, with illustrations to help clarify the material presented. The intervention is designed so that a case manager reviews the cards with the clients. This structured approach was necessitated by the fact that case managers were not highly trained in this content area. Furthermore, the cards are easy to manipulate and are transportable, which is advantageous, as many of the services are delivered in community settings (e.g., clients’ homes, homeless shelters, or residential facilities).

Practice guidelines are broader than manuals, but are also a helpful way to begin planning and developing an intervention for a community-based psychosocial RCT, particularly when designing the intervention from the ground up. Similar to manuals, practice guidelines indicate what services to deliver, to whom, and how. However, guidelines are not as program-specific as manuals and are targeted for services directed at specific populations across a range of services (Bond, et al., 2000). Rosen and Proctor (2003) defined practice guidelines as “a set of systematically compiled and organized statements of empirically tested knowledge and procedures to help practitioners select and implement interventions that are most effective and appropriate for attaining the desired outcomes” (p. 1). Practice guidelines differ in their degree of specificity (Bond et al., 2000). Although numerous guidelines have been developed, most are not specifically designed for social work issues, populations, or practice methods (Howard & Jenson, 2003). However, some may be relevant to social work type interventions.

Focus groups and in-depth interviews with providers, service recipients, supervisors, and administrators may assist in defining the parameters of an intervention. In the development of a Social Enhancement Workbook to increase participation in community resources by adults with severe mental illness, the first author (working with another social worker) conducted process groups with supervisors and providers of case management services in order to receive their input on a draft outline of the workbook. Interestingly, it was discovered that the term “teaching skills” could not be used, as “teaching” was not a Medicaid-reimbursable service. The workbook used terms such as “helping to access resources in the community,” as this was a fundable service (and perhaps more descriptive of what was being implemented, anyway). Also, individual input was sought from service recipients who were potential users of the workbook. Suggestions from these sources helped to determine some of the content and structure of the workbook. For example, the workbook was designed with “tips” on the sides of the pages, one side was for supporters and the other was for the users. “Tips” for supporters included: “Be prepared to help the person practice . . .”; and “If you can show the person examples . . . ,” etc. Also, a worksheet was included in the booklet, so that clients could do a self-assessment, identify their goals, and develop a plan for achieving their goals.

In the development of their couple HIV prevention intervention, the researchers from Columbia School of Social Work mentioned previously used input from focus groups as a means to developing their intervention (Sormanti, Pereira, El-Bassel, Witte, & Gilbert, 2001). Based on the results of the focus group, the investigators incorporated specific elements into the intervention. For example, they discovered that “both men and women report that poor communication was a critical issue in their relationships and were eager to learn new ways to enhance their communication skills” (Sormanti, et al., 2001, p. 319). In response to this concern, the investigators incorporated the Speaker-Listener Technique, a tested method for improving couples communication. Participants also voiced concerns about condom use interfering with love-making, which resulted in a session being devoted to eroticizing condom use. These examples illustrate how focus groups with potential participants can help to delineate some of the service elements of the intervention that are necessary prerequisites to developing community-based psychosocial intervention manuals.

In addition to focus groups, other, more structured approaches may serve to define and refine an existing program. These approaches are the nominal group process, the Delphi method, and concept mapping. These methods are more controlled procedures for obtaining consensus on the specific service elements and processes of a service program. The Delphi method is a structured procedure for generating ideas from a group of individuals and for coming, through rounds of controlled feedback, to a consensus on the topic (or, in this instance, a service program). This method was employed “to identify a valid and reliable set of categories to describe the clinical work practices of intensive case management” (Flander & Burns, 2000). The researchers categorized service activities that matched everyday practices of the clinicians. Concept mapping employs a group process for brainstorming about a topic, procedures for sorting and rating the items that emerge, and multivariate techniques to develop clusters of the items, with the final output being a concept map (that is, a visual display of the categories in relation to each other) that the group then interprets. This procedure has been used to outline program activities and their sequences, as well as the contextual elements that impact the program, for supported employment for individuals with severe mental illness (Trochim, Cook, & Setze, 1994). These methods offer approaches to operationally define the activities of a service/program into categories and procedures, in order to structure and standardize the intervention.
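
As an illustration of the multivariate step in concept mapping, the sketch below clusters hypothetical card-sort data. The published procedure uses multidimensional scaling followed by cluster analysis (Trochim, Cook, & Setze, 1994); here, hierarchical clustering applied to a co-sorting matrix serves as a simplified stand-in, and the statements, sorts, and cluster count are all invented for the example.

```python
# A minimal sketch of clustering card-sort data from a concept-mapping
# exercise. Statements and pile assignments are hypothetical; hierarchical
# clustering stands in for the full MDS-plus-cluster procedure.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

statements = ["home outreach", "crisis response", "job coaching", "benefits counseling"]
# One row per participant: the pile number each statement was sorted into.
sorts = np.array([
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 0],
])

n = len(statements)
# Proportion of participants who sorted each pair of statements together.
co_sort = np.zeros((n, n))
for row in sorts:
    co_sort += (row[:, None] == row[None, :])
co_sort /= len(sorts)

distance = 1.0 - co_sort           # frequently co-sorted -> close together
np.fill_diagonal(distance, 0.0)

tree = linkage(squareform(distance), method="average")
clusters = fcluster(tree, t=2, criterion="maxclust")

for statement, cluster in zip(statements, clusters):
    print(f"{statement}: cluster {cluster}")
```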

The next case example of Building Practice-Based Evidence offers a review of existing manuals that includes input from experts to develop a standardized intervention. This process can be used to define and delineate an intervention for an RCT. A similar process of using experts has been employed in developing the elements of the Assertive Community Treatment (ACT; McGrew & Bond, 1995). ACT’s clearly defined program and structural elements have enabled a number of RCTs to be conducted, with the result that ACT has become an EBP for adults with severe mental illness.

This case example of practice-based evidence demonstrates the nature of the work that needs to be undertaken to outline the structures and processes of an existing service program that is well-regarded, but not clearly defined. Such efforts serve to provide focus and clarity to support the development of a rigorous RCT that is capable of informing practice, as well as moving the field forward from practice-based evidence towards EBP. It is not uncommon to be interested in conducting an RCT on an existing service model that is variably implemented or that is ill-defined, but considered by many to be a promising practice with limited supporting empirical evidence.

The development of a treatment manual is an iterative process that requires pilot implementation of the intervention. Based on results of the pilot work, the manual is revised and reimplemented. Eventually, the manual is piloted with providers who are trained in the intervention, and the implementation is evaluated using both quantitative approaches (e.g., fidelity assessment) and qualitative approaches (e.g., in-depth interviews, focus groups, ethnographic methods). The treatment manual is then revised and/or refined and is sufficiently ready to be tested by an RCT. If the manualized intervention is not feasible and acceptable to providers in the settings for which it is intended, it is unlikely to be implemented. However, if the intervention is appealing and practical, it has a greater likelihood of being implemented (Carroll & Nuro, 2002).

Developing and Piloting Fidelity Assessment

The elements detailed in the treatment manual provide the basis for assessing the fidelity of the intervention (i.e., whether the intervention was conducted as planned and is consistent with the service or program elements delineated in the manual, including structures and goals). Frequently, monitoring forms (e.g., a fidelity scale) are included in the treatment manual. A fidelity assessment is essential to evaluating the extent to which the planned intervention is actually implemented as intended (Orwin, 2000). Otherwise, researchers may incorrectly conclude that the intervention did not produce its intended objectives, when the real culprit was deviation from the planned intervention. Or, the conclusion may be that the effective outcome was a result of the intervention, when in fact providers enhanced or changed the service program from what the investigator intended. Thus, at the point of development of the intervention, it is necessary to craft the procedures for monitoring the integrity of the intervention and then to test the fidelity measure to ensure that it is valid and reliable. Fidelity measures are scales or tools that assess the adequacy of implementation of a service or program, or provide a means to quantify the degree to which the service elements in a program or service are implemented (Bond, et al., 2000). Basically, the question of implementation of the intervention is answered not by a simple yes or no response, but rather by the degree to which implementation occurred. The purpose of these measures is to verify that the intervention is being implemented in a manner consistent with the service or program as it is delineated in the treatment manual, workbook, or educational curriculum. Successful implementation has to do not only with whether the service program was delivered by the providers as intended, but also whether the program was received by the recipients (Orwin, 2000). Consequently, multiple approaches are needed to obtain both the providers’ and the recipients’ perspectives. Furthermore, treatment differentiation (ensuring that the intervention differs from other similar services) must be assessed (Bellg, Borrelli, Resnick, et al., 2004). Therefore, some components of the fidelity measure may need to capture the service elements of other interventions as well.
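
As a concrete illustration of answering “to what degree,” the sketch below scores a small fidelity scale in which each manualized element is rated from 1 to 5. The item names, subscales, and threshold are hypothetical stand-ins, not elements of any published scale; administering the same items to control-condition providers also supports the leakage check described next.

```python
# A minimal sketch of scoring a fidelity scale in which each manualized
# element is rated from 1 (not implemented) to 5 (fully implemented).
# Items, subscales, and the threshold below are hypothetical.
from statistics import mean

ratings = {
    # structure items
    "caseload_size": 5,
    "staff_qualifications": 4,
    "community_based_location": 3,
    # process items
    "outreach_frequency": 2,
    "integration_of_services": 4,
}
structure_items = ["caseload_size", "staff_qualifications", "community_based_location"]
process_items = ["outreach_frequency", "integration_of_services"]

structure_score = mean(ratings[i] for i in structure_items)
process_score = mean(ratings[i] for i in process_items)
total_score = mean(ratings.values())

print(f"Structure: {structure_score:.2f}, Process: {process_score:.2f}, "
      f"Total: {total_score:.2f}")

# Answering "to what degree," not "yes or no": flag drift for follow-up.
if total_score < 4.0:
    print("Below the assumed fidelity threshold; review program drift with providers.")
```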

In conducting an RCT, one must assess the extent of contamination between conditions. Orwin (2000) refers to this type of assessment measure as a leakage scale, one that captures the degree to which participants in the control condition received services planned only for the experimental intervention. Frequently, the fidelity measure may serve this purpose as well by having providers in the control condition complete the same scale as the experimental providers.

The key to developing a sound fidelity measure is to have a well-defined service or practice model for the intervention. The treatment manual needs to define the structural elements and behaviors that are measurable in order to create a quantifiable scale that can be used in future analyses of the RCT’s effectiveness. However, developing a fidelity measure for community-based psychosocial interventions is more difficult than for psychotherapy, which is the context for which these measures were originally developed. As Bruns and colleagues (2004) noted, “When considered for complex, individualized, or multimodal treatments, such as community-based treatments for youth and families fidelity assessment becomes particularly difficult” (p. 80). This is due to the complex structural and administrative characteristics of the programs and systems within which they are embedded.

Frequently used methods for assessing fidelity are self-report forms completed by providers, participants, or both, but multi-method approaches are encouraged (Bond et al., 2000). These methods may include chart reviews, observation of elements of the service, data extraction from service billing forms, service logs, or videotaping with ratings by observers (as used by McKay in the case example in Chapter 1).

Mowbray and colleagues (2003) delineated a three-step process for developing a fidelity assessment instrument: identifying and specifying fidelity criteria, measuring fidelity, and assessing the reliability and validity of fidelity criteria. Fidelity criteria comprise two aspects of the program or service: the structure or framework of service delivery, and the process or manner in which the service is delivered. The criteria generally include the following: “specification of length, intensity, and duration of the service (or dosage); content, procedures, and activities over the length of the service; roles, qualifications, and activities of staff; and inclusion/exclusion characteristics for the target service population” (Mowbray, et al., 2003, p. 318). Bond and his colleagues (2000) detailed a 14-step process for developing fidelity assessments (Table 3.2).

Bond and colleagues (2000) note that if the purpose of the scale is for an RCT, then the fidelity measure must be comprehensive, identifying both those aspects of the program/service that are unique and those that distinguish it from the control condition.

Table 3.2. Steps for Developing a Fidelity Measure

  • Define the purpose of the fidelity scale.

  • Assess the degree of model development.

  • Identify model dimensions.

  • Determine if appropriate fidelity scales already exist.

  • Formulate fidelity scale plan.

  • Develop items.

  • Develop response scale points.

  • Choose data collection sources and methods.

  • Determine item order.

  • Develop data collection protocol.

  • Train interviewers/raters.

  • Pilot the scale.

  • Assess psychometric properties.

  • Determine scoring and weighting of items.

Source: Bond et al. (2000), Psychiatric Rehabilitation Fidelity Toolkit.

It is thus apparent that the construction of a fidelity assessment measure is closely intertwined with treatment manual development. A lack of specificity of the program model led Carol Mowbray to develop a fidelity rating instrument for consumer-run drop-in centers (CRDIs; Holter, Mowbray, Bellamy, MacFarlane, & Dukarski, 2004; Mowbray, Holter, Stark, Pfeffer, & Bybee, 2005a). When Mowbray consulted with the first author about designing an RCT on CRDIs, she was advised to first determine what the critical elements were and how they differed from the control condition before proceeding with the RCT (Mowbray, Holter, Mowbray, & Bybee, 2005b). Without this preliminary work, the RCT would have been less likely to contribute to the evidence for consumer-operated drop-in centers, because any observed differences could not have been interpreted, given that the control condition shared a number of service elements with CRDI centers. Mowbray received National Institute of Mental Health (NIMH) funding for the development of a fidelity measure of CRDI centers (see Case Example). Her initial research included the use of both quantitative and qualitative methods. Developing a fidelity measure often requires a mixed-method approach.


If interventions are straightforward, such as educational interventions, a checklist can be created with the elements of the intervention. For the previously discussed HIV educational prevention intervention for persons with severe mental illness, a form was developed that mimics the service document the case managers complete for billing purposes. On this form, the case managers indicated the card numbers that they discussed in a particular session. This record showed whether all of the cards were reviewed with each study participant, exactly which cards were reviewed with each client, the number of intervention sessions, and how much time was spent in each session.
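A brief sketch of how such a session log might be tallied follows; the 12-card curriculum, the client identifier, and the logged values are invented for illustration, not taken from the study’s actual forms.

ALL_CARDS = set(range(1, 13))   # assume a hypothetical 12-card curriculum

# One record per session, as a provider might log on a billing-style form
session_logs = [
    {"client": "A01", "cards": {1, 2, 3, 4}, "minutes": 45},
    {"client": "A01", "cards": {5, 6, 7},    "minutes": 40},
    {"client": "A01", "cards": {8, 9, 10},   "minutes": 50},
]

def coverage(logs, client):
    """Summarize sessions held, cards never reviewed, and total time."""
    records = [r for r in logs if r["client"] == client]
    covered = set().union(*(r["cards"] for r in records))
    return {
        "sessions": len(records),
        "cards_missed": sorted(ALL_CARDS - covered),
        "total_minutes": sum(r["minutes"] for r in records),
    }

print(coverage(session_logs, "A01"))
# {'sessions': 3, 'cards_missed': [11, 12], 'total_minutes': 135}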

Observation checklists may also be developed, with researchers then observing the providers as they deliver the intervention. For example, in the first author’s RCT of an HIV prevention intervention, experimental interventionists were periodically observed through a one-way mirror to determine whether they were delivering the intervention as intended. Observers recorded their observations on a checklist based on the intervention, and interventionists were given praise and corrective feedback, as well as booster sessions on delivering the intervention when necessary.
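One way to turn an observer’s checklist into targeted feedback is sketched below; the checklist items and the single session’s observations are illustrative assumptions, not the items actually used in that study.

# Hypothetical observation checklist for one session
checklist_items = [
    "followed session agenda",
    "used role-play exercise",
    "reviewed previous homework",
    "assigned new homework",
]

# Items the observer marked as delivered (invented data)
observed = {"followed session agenda", "assigned new homework"}

delivered = [item for item in checklist_items if item in observed]
missed = [item for item in checklist_items if item not in observed]

print(f"Adherence this session: {len(delivered)}/{len(checklist_items)}")
if missed:
    # Unchecked items become the focus of praise-and-feedback or a booster
    print("Booster session should cover:", "; ".join(missed))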

In the social participation workbook intervention, a checklist was developed delineating the activities in which the providers could engage with the participants. These forms were collected from providers in both the experimental and control conditions to assess fidelity and leakage (i.e., control condition providers delivering any of the activities of the experimental intervention). In the outcome interviews conducted at six-month intervals, participants were asked about the extent to which they had engaged in these intervention activities, in order to assess fidelity and leakage from the participants’ perspective.
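Because provider checklists and participant interviews can disagree, a simple cross-check of the two sources may be useful. In the hypothetical sketch below, the activity list, the reported proportions, and the flagging threshold are all invented.

activities = ["community outing", "skills workbook exercise", "peer group"]

# Proportion reporting that the activity occurred, by source (invented data)
provider_report    = {"community outing": 0.90, "skills workbook exercise": 0.80, "peer group": 0.20}
participant_report = {"community outing": 0.60, "skills workbook exercise": 0.75, "peer group": 0.25}

for activity in activities:
    gap = abs(provider_report[activity] - participant_report[activity])
    flag = "  <- investigate" if gap > 0.20 else ""   # arbitrary threshold
    print(f"{activity}: provider {provider_report[activity]:.0%}, "
          f"participant {participant_report[activity]:.0%}{flag}")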

Fidelity assessment is essential, as it has significant implications for internal, external, and construct validity, as well as for statistical power (Moncher & Prinz, 1991). Fidelity is necessary to maintain internal validity and to ensure a fair comparison between conditions. Fidelity assessment is also an efficient way to determine whether contamination has occurred and to specify its nature. Structured manuals, in turn, are the key to the reproducibility of the intervention in other settings.

Conclusion

Planning a community-based psychosocial RCT requires both time and effort. Initially, one must assess whether the knowledge base is sufficiently developed to warrant designing an RCT and whether the sociopolitical timing is opportune for such an endeavor. Once the decision is made to move forward, the process of selecting and negotiating with a setting may precipitate the rethinking of one’s innovative idea, because the real world of services may not be receptive. Collaborative processes will require more than a knowledge of science; they will draw on social work practice, advocacy, and negotiation skills as well. The planning of an RCT is an iterative process of acquiring data, reconceptualizing, obtaining more data, and returning to the drawing board. Furthermore, one must have confidence that the selected site is capable of delivering the intended service interventions. A valid RCT that will contribute to EBP requires the assurance of high-quality practice as well as good science.

This chapter demonstrated that the planning of an RCT encompasses a number of small-scale research studies. Extensive consideration, based on empirical data, must be given to whether enough eligible and willing clients/volunteers can be recruited for randomization to all of the study conditions and retained through the duration of the study. It is incumbent upon all those involved to be particularly mindful of the importance of a diverse sample that accurately represents the target population.

To meet the criteria of competently provided interventions delivered in a standardized and reproducible manner, a practice intervention manual must be utilized. Given that many treatment manuals were developed for efficacy trials rather than effectiveness studies, adapting them to the environments more common in social work settings may entail a good deal of work. In some instances, the researcher will need to undertake additional preliminary work to create a treatment manual in an area in which there is little on which to build. As previously outlined, strategies and methods for adapting and developing manuals require a variety of pilot studies involving both qualitative and quantitative approaches. Alongside the development of the service or program manual is the construction of a fidelity assessment. As was made evident, these two major preliminary activities are intertwined and, in some instances, may be conducted concurrently rather than sequentially. Having successfully completed the necessary pilot work, one can proceed with confidence, having a solid foundation on which to begin conceptualizing and designing an RCT.

For Further Reading


Bond, G., Williams, J., Evans, L., Salyers, M., Kim, H. W., Sharpe, H., & Leff, S. (2000). Psychiatric rehabilitation fidelity toolkit. Cambridge, MA: Human Services Research Institute.

Carroll, K. (Ed.). Improving compliance with alcoholism treatment. Bethesda, MD: National Institute on Alcohol Abuse and Alcoholism.

Nezu, A. & Nezu, C. (Eds.). Evidence-based outcome research. New York: Oxford University Press.