Research Article

Evaluating Completeness of Reporting in Behavioral Interventions Pilot Trials: A Systematic Survey

Meha Bhatt1,2, Laura Zielinski2, Nitika Sanger2, Ieta Shams2, Candice Luo2, Bianca Bantoto2, Hamnah Shahid2, Guowei Li1, Luciana P. F. Abbade3, Ikunna Nwosu1, Yanling Jin1, Mei Wang1, Yaping Chang1, Guangwen Sun1, Lawrence Mbuagbaw1,4, Mitchell A. H. Levine1,4,5, Jonathan D. Adachi1,4, Lehana Thabane1,4,5, and Zainab Samaan1,2,6,7

Abstract
Purpose: This systematic survey evaluates the completeness of reporting in pilot and feasibility randomized controlled trials investigating behavioral interventions, based on the Consolidated Standards of Reporting Trials (CONSORT) extension for pilot trials. Methods: The authors searched Medline/PubMed and randomly selected 100 articles from 2012 through 2016 to determine the proportion of reported CONSORT extension items. They examined study factors related to reporting, including year and country of publication, psychotherapy intervention, multiple centers, industry funding, and journal endorsement of CONSORT. Results: The mean reporting score on the CONSORT extension was 51.6% (SD = 15.1). Studies of psychotherapy interventions had significantly higher reporting scores than studies of other interventions (incidence rate ratio = 1.10, 95% confidence interval: 1.01–1.20). Conclusions: These findings indicate that current reporting quality is suboptimal. Many included trials failed to provide a rationale for piloting, assess feasibility objectives, or indicate clear progression to a future large trial. Reporting quality should be reevaluated following uptake of the 2016 CONSORT extension for pilot trials.
Keywords: pilot trials, feasibility trials, behavioral interventions, reporting quality, transparency, guideline adherence

Pilot and feasibility studies are an essential prelude to clinical trials, as they directly give researchers necessary information about the practicality and acceptability of the design features of a definitive trial (Thabane et al., 2010). This is especially necessary for pilot and feasibility randomized controlled trials (RCTs) of behavioral interventions, as they are crucial in evaluating resource costs, required personnel, recruitment and attrition rates, and acceptability of the intervention and outcome measures (Craig et al., 2008; Van Teijlingen & Hundley, 2010). Researchers have indicated that pilot and feasibility trials receive little attention in the methodological literature; however, given the immense quantity of resources required for large RCTs, it is important that pilot trials are conducted with rigorous methodology to adequately inform the definitive trial (Thabane et al., 2010; Van Teijlingen & Hundley, 2010). Initially published in 1996, the Consolidated Standards of Reporting Trials (CONSORT) statement has improved the transparency and reporting of RCTs and has been widely adopted by journals (Begg et al., 1996). Subsequent revisions of the CONSORT statement, as well as extensions for varying trial designs, have been published (Moher, Schulz, Altman, & Consort, 2001). However, it was not until 2016 that the CONSORT statement extension for reporting randomized pilot and feasibility trials was published simultaneously in Pilot and Feasibility Studies and BMJ (Eldridge et al., 2016a; Eldridge et al., 2016b). Existing evaluations of pilot trials methodology indicate that reporting is inadequate, largely due to emphasis on hypothesis testing and lack of criteria for evaluating feasibility (Arain, Campbell, Cooper, & Lancaster, 2010; Whitehead, Sully, & Campbell, 2014). The publication of the CONSORT extension for pilot RCTs is expected to enhance the completeness and transparency of the reporting of pilot RCTs. This survey aims to review the completeness of the reporting of pilot trials in behavioral intervention research.

1 Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
2 Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, Ontario, Canada
3 Department of Dermatology and Radiotherapy, Botucatu Medical School, Universidade Estadual Paulista, UNESP, São Paulo, Brazil
4 Father Sean O'Sullivan Research Centre, St. Joseph's Healthcare Hamilton, Hamilton, Ontario, Canada
5 Centre for Evaluation of Medicines, Programs for Assessment of Technology in Health (PATH) Research Institute, McMaster University, Hamilton, Ontario, Canada
6 Peter Boris Centre for Addictions Research, St. Joseph's Healthcare Hamilton, Hamilton, Ontario, Canada
7 Population Genomics Program, Chanchlani Research Centre, McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada

Corresponding Author: Zainab Samaan, Mood Disorders Program, St. Joseph's Healthcare Hamilton, 100 West 5th St., Hamilton, Ontario, Canada L8N 3K7. Email: samaanz@mcmaster.ca

Research on Social Work Practice, 2018, Vol. 28(5) 577-584. © The Author(s) 2017. Reprints and permission: sagepub.com/journalsPermissions.nav. DOI: 10.1177/1049731517720033. journals.sagepub.com/home/rsw
A behavioral intervention is defined as one that includes behavioral or social component(s) or targets behavioral or social outcomes (e.g., trials related to education, training, and cognitive-behavioral therapy; National Institutes of Health, 2004). Trials of behavioral interventions are the second most common type registered in ClinicalTrials.gov (Bourgeois, Murthy, & Mandl, 2012). Previous studies have suggested that the quality of reporting among RCTs of nonpharmacological interventions is lower than that in pharmaceutical trials (Mbuagbaw et al., 2014). Therefore, it is important to specifically evaluate studies of behavioral interventions to identify areas for improvement.

Objectives

The current systematic survey aims to evaluate published pilot and feasibility trials of behavioral interventions in terms of quality of reporting, using the newly published CONSORT pilot extension as a benchmark. More specifically, we aim to (1) evaluate the quality of reporting, as measured by completeness of reporting based on the 2016 CONSORT extension for pilot and feasibility trials, among pilot and feasibility trials investigating behavioral interventions in clinical populations and (2) explore factors that are associated with the completeness of reporting.

Methods

Study Eligibility

We included pilot and feasibility RCTs in this systematic survey such that the study used the words "pilot" or "feasibility" and identified itself as an RCT in the title or abstract. Furthermore, studies were selected if they investigated behavioral interventions in clinical populations, defined as a nonpharmacological approach requiring an active, behavioral change from the receivers during or following the intervention. Such behavioral changes can subsequently aim to improve various medical, psychological, or social outcomes.
We used a broad definition of clinical populations, including individuals with current or past medical or psychiatric illnesses or symptoms of these conditions. Studies that evaluated group therapies involving persons with illnesses and their caregivers were also included. Articles were limited to English-language publications. Exclusion criteria were (1) single-arm observational pilot or feasibility studies, (2) quasirandomized trials, and (3) studies among nonclinical populations (i.e., students, community members).

Search Strategy

We searched Medline/PubMed through Ovid and included studies published from January 1, 2012, to December 31, 2016. The complete search strategy is available as supplementary material (Appendix 1). We randomly selected 20 articles from each year from 2012 through 2016 in order to have an equal number of studies from each of the past 5 years, for a total of 100 included articles. A computerized random number generator was used for study selection. Two authors (MB and LZ) independently screened and conducted a full-text review of randomly selected citations to assess eligibility. Excluded articles were replaced by another random selection until 100 articles that met the eligibility criteria were identified.

Outcome Measures

The primary outcome of this survey was the proportion of reported items on the CONSORT statement extension for randomized pilot and feasibility trials, which consists of 39 checklist items. Independent pairs of reviewers conducted the scoring for the checklist items. Each item on the checklist was scored as yes or no (1 = yes, 0 = no), indicating whether or not the article reported the appropriate information according to the criteria outlined in the original publication (Eldridge et al., 2016a).
Because the number of items that were "not applicable" differed across the included articles, a final CONSORT score was calculated as the percentage of items reported out of the total number of applicable items. We also reported the number of studies that reported each individual checklist item in order to identify the items with the lowest level of reporting. The secondary outcome was to determine which study characteristics are associated with the level of reporting.

Data Extraction

Three pairs of reviewers performed data extraction in duplicate. Disagreements were resolved by consensus or by consulting an experienced author. An experienced author trained the reviewers through a detailed description of each item on the CONSORT statement and the extension for pilot trials. The extraction form was piloted using 5 studies to ensure that all reviewers were calibrated. The extracted information included (1) article characteristics (title, author name, year of publication, journal, and study region); (2) study details (clinical population, type of behavioral intervention, and number of participants randomized); (3) reporting of individual items on the CONSORT extension for pilot and feasibility studies; and (4) journal endorsement of the CONSORT statement.

Data Analysis

First, we reported descriptive statistics to provide a summary of the overall level of reporting using the CONSORT statement. Means and standard deviations were used for overall reporting quality scores, and counts and percentages were used to report the general characteristics and the number of articles reporting each CONSORT statement item.
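As a minimal sketch of the score calculation described above (the item labels and ratings here are hypothetical, not taken from the authors' extraction forms), the per-article percentage over applicable items could be computed as:

```python
# Hypothetical sketch: per-article CONSORT score as the percentage of
# applicable checklist items reported. None marks a "not applicable" item,
# which is excluded from the denominator.

def consort_score(item_ratings):
    applicable = [v for v in item_ratings.values() if v is not None]
    if not applicable:
        raise ValueError("no applicable items")
    return 100.0 * sum(applicable) / len(applicable)

# Example: 2 of 3 applicable items reported (item 6b not applicable).
ratings = {"1a": 1, "2a": 0, "4a": 1, "6b": None}
score = consort_score(ratings)  # approximately 66.7
```

Dropping "not applicable" items from the denominator keeps scores comparable across studies with different numbers of applicable items.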
Second, to explore factors associated with completeness of reporting, we used Poisson regression with the number of reported CONSORT items as the dependent variable and the following as explanatory factors: year of publication, study location (North America or other), type of intervention (psychotherapy or other), multisite study (yes or no), industry funding (yes, no, or not reported), and journal endorsement of CONSORT (yes or no). For the purposes of this analysis, we considered "not applicable" items as reported by the corresponding studies. Unadjusted and adjusted incidence rate ratios (IRRs), 95% confidence intervals (CIs), and p values are reported. The level of significance was set at α = 0.05. Stata 13 (StataCorp. 2013. Stata Statistical Software: Release 13. College Station, TX: StataCorp LP) was used to perform all statistical analyses.

Results

Our initial search retrieved 1,770 articles, of which 300 were published in 2012, 396 in 2013, 438 in 2014, 455 in 2015, and 181 in 2016. Of these, we randomly selected 20 articles from each year for a total of 100 included articles. A complete reference list of included studies and details of the individual studies are available as supplemental material (Appendices 2 and 3, respectively). During the random selection process, articles were excluded if they were not pilot or feasibility RCTs or were not investigating behavioral interventions.

Table 1 presents the general characteristics of the included articles. Overall, the majority of studies were single center (89%) and did not receive industry funding (84%). Additionally, 40% of the studies were published in journals that endorsed the standard CONSORT statement.

Evaluation of Reporting Quality Based on CONSORT Extension

The mean CONSORT reporting score across all included articles was 51.6% (SD = 15.1). Table 2 shows the level of reporting of individual CONSORT items.
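The Poisson model described in the Data Analysis section can be illustrated with a small sketch. The data below are synthetic (the authors used Stata 13, not Python); the sketch relies on the fact that, for a single binary covariate, the maximum-likelihood IRR from a Poisson regression reduces to the ratio of the group means of the counts.

```python
# Hypothetical sketch of the Poisson-regression IRR (synthetic data, not the
# authors' dataset). With a single binary covariate, exp(coefficient) from a
# Poisson regression equals the ratio of mean counts between the two groups.
import numpy as np

rng = np.random.default_rng(0)
n = 100
psychotherapy = rng.integers(0, 2, n)  # 1 = psychotherapy intervention
# Simulate reported-item counts, slightly higher for psychotherapy studies.
counts = rng.poisson(lam=18 * np.exp(0.10 * psychotherapy))

irr = counts[psychotherapy == 1].mean() / counts[psychotherapy == 0].mean()
print(f"unadjusted IRR, psychotherapy vs. other: {irr:.2f}")
```

An adjusted analysis with several covariates, as in the paper, would require fitting the full Poisson model (a generalized linear model with a log link) rather than this closed-form shortcut.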
Three items were not applicable for any of the included studies: "changes to pilot trial assessments or measurements," "why the pilot trial ended or was stopped," and "ancillary analyses." Upon evaluating reporting of individual items in the CONSORT statement, we found that the items reported by the largest proportion of articles were "eligibility criteria for participants" (94%) and "analytical methods" (92%). Many poorly reported items were those related to specific pilot study objectives and progression to the future definitive trial, including "rationale for the future definitive trial and reasons for the pilot trial" (24%); "prespecified criteria used to judge whether, or how, to proceed with future definitive trial" (3%); "generalisability (applicability) of pilot trial methods and findings to future definitive trial and other studies" (32%); and "implications for progression from pilot to future definitive trial, including any proposed amendments" (24%). A number of studies also did not appropriately report study methodology such as "rationale for numbers in the pilot trial" (22%), "allocation concealment" (33%), and "implementation of randomization" (30%). Additional items that were poorly reported included "harms or unintended effects in each group" (19%), "where the pilot trial protocol can be accessed" (12%), and "registration number for pilot trial and name of trial registry" (29%). Overall, 19 of the 39 checklist items were reported by less than half of the included studies.

Table 1. Distribution of Characteristics of Included Articles by Year of Publication.

Characteristic (Total, n = 100 [%])
Year
  2012: 20; 2013: 20; 2014: 20; 2015: 20; 2016: 20
Country
  United States: 51; Canada: 7; United Kingdom: 6; Sweden: 5; Other: 31
Type of intervention
  Psychotherapy: 32; Exercise: 18; Mindfulness: 5; Yoga: 5; Combined: 22; Other^a: 18
Multisite study
  Yes: 11; No: 89
Industry funding
  Yes: 5; No: 84; Not reported: 11
Prelude to future definitive trial
  Yes: 13; No: 87
Journal endorsement of CONSORT^b
  Yes: 40; No: 60

Note. CONSORT = Consolidated Standards of Reporting Trials.
^a Other interventions included education, art/music/humor therapy, relaxation training, feedback-based interventions, and more specific approaches.
^b Given the recency of the CONSORT extension for pilot and feasibility trials, we assessed whether the journal endorsed the overarching CONSORT statement.

Table 2. Reporting of 39 Items on the CONSORT Extension for Pilot and Feasibility Trials.

Item  Criteria  (Total, n)

Title and abstract
  1a  Identification as a pilot or feasibility randomized trial in the title (41)
  1b  Structured summary of pilot trial design, methods, results, and conclusions (63)
Introduction: background and objectives
  2a  Scientific background and explanation of rationale for future definitive trial, and reasons for randomized pilot trial (24)
  2b  Specific objectives or research questions for pilot trial (37)
Methods: trial design
  3a  Description of pilot trial design (such as parallel, factorial) including allocation ratio (36)
  3b  Important changes to methods after pilot trial commencement (such as eligibility criteria), with reasons (2)
Participants
  4a  Eligibility criteria for participants (94)
  4b  Settings and locations where the data were collected (81)
  4c  How participants were identified and consented (77)
Interventions
  5   The interventions for each group with sufficient details to allow replication, including how and when they were actually administered (79)
Outcomes
  6a  Completely defined prespecified assessments or measurements to address each pilot trial objective specified in 2b, including how and when they were assessed (49)
  6b  Any changes to pilot trial assessments or measurements after the pilot trial commenced, with reasons (NA)
  6c  Prespecified criteria used to judge whether, or how, to proceed with future definitive trial (3)
Sample size
  7a  Rationale for numbers in the pilot trial (22)
  7b  When applicable, explanation of any interim analyses and stopping guidelines (1)
Sequence generation
  8a  Method used to generate the random allocation sequence (42)
  8b  Type of randomization(s); details of any restriction (such as blocking and block size) (40)
Allocation concealment
  9   Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned (33)
Implementation
  10  Who generated the random allocation sequence, enrolled participants, and assigned participants to interventions (30)
Blinding
  11a If done, who was blinded after assignment to interventions (e.g., participants, care providers, those assessing outcomes) and how (46)
  11b If relevant, description of the similarity of interventions (14)
Analytical methods
  12a Methods used to address each pilot trial objective whether qualitative or quantitative (92)
Results: participant flow
  13a For each group, the numbers of participants who were approached and/or assessed for eligibility, randomly assigned, received intended treatment, and were assessed for each objective (78)
  13b For each group, losses and exclusions after randomization, together with reasons (74)
Recruitment
  14a Dates defining the periods of recruitment and follow-up (33)
  14b Why the pilot trial ended or was stopped (NA)
Baseline data
  15  A table showing baseline demographic and clinical characteristics for each group (73)
Numbers analysed
  16  For each objective, number of participants (denominator) included in each analysis by randomized group (80)
Outcomes and estimation
  17a For each objective, results including expressions of uncertainty (such as 95% CI) for any estimates by randomized group (40)
(continued)

Factors Related to Reporting of CONSORT Extension Items

Table 3 shows the unadjusted and adjusted IRRs for overall CONSORT reporting by study characteristics.
In comparing the total number of reported CONSORT items by the prespecified study characteristics, the unadjusted analyses showed that studies investigating psychotherapy rather than other types of interventions had significantly higher CONSORT reporting scores (unadjusted IRR = 1.12, 95% CI [1.02, 1.22], p = .01). The remaining study characteristics were not significantly associated with the level of reporting, although results were borderline significant for higher reporting quality among articles published in journals that endorse the CONSORT statement (unadjusted IRR = 1.09, 95% CI [1.00, 1.18], p = .05). In the adjusted analysis, reporting in psychotherapy intervention studies remained significantly higher than in other interventions (adjusted IRR = 1.10, 95% CI [1.01, 1.20], p = .03). Results reached borderline significance indicating that studies conducted in North America had lower reporting quality compared to those outside North America (adjusted IRR = 0.91, 95% CI [0.84, 1.00], p = .05). The remaining study characteristics, including year of publication, multisite study, industry funding, and journal endorsement of CONSORT, were not significantly related to the level of reporting in the adjusted analysis. Studies that did not report funding had lower reporting scores than those that received no industry funding (adjusted IRR = 0.83, 95% CI [0.72, 0.96], p = .01).

Discussion

In this systematic survey, we found that reporting quality based on the CONSORT guidelines for pilot and feasibility RCTs is currently suboptimal, with a mean overall reporting score of 51.6%. First, only 41% of the included articles identified themselves as pilot and feasibility RCTs in the title. The actual deficiency in reporting may be even higher, as our eligibility criteria for the review required an indication of study design in the title or abstract.
Other areas of poor reporting included methodological areas related to randomization, such as "sequence generation," "allocation concealment," "implementation," and "blinding," all of which were reported by less than half of the studies. Multiple systematic reviews have also shown deficiencies in reporting of these areas across main/definitive RCTs (Cairo, Sanz, Matesanz, Nieri, & Pagliaro, 2012; Liu, Morris, & Pengel, 2013). It is especially important for authors to provide information on these methodological areas, since it informs the risk of bias assessment and helps to assess the methodological quality of the study (Higgins et al., 2011). Overall, we found that participant and intervention details, analytical methods, estimation of outcomes, and study limitations were reported adequately across included pilot studies (>80% adherence). Researchers should still aim to improve the reporting quality of behavioral interventions and predefined outcome measures, because complete reporting allows for replication of the intervention and assessment in various settings. Additionally, we did not find significant differences in reporting according to the CONSORT extension for pilot and feasibility RCTs by year of publication from 2012 through 2016. Considering that reporting against the original CONSORT statement improved over time following revisions and publication of extensions, we expect that reassessment in the following 5 years will show an improvement by year of publication. Type of intervention was the only study characteristic that was significantly related to improved reporting in the adjusted analysis, with studies of psychotherapy interventions having a higher level of reporting. Notably, only 40% of the journals in this survey endorsed the CONSORT statement. Although we did not find journal endorsement to be related to significantly higher reporting in the adjusted analyses, improvement in reporting will likely be enhanced if journals that publish pilot and feasibility trials endorse the CONSORT extension, as seen with other reporting guidelines (Samaan et al., 2013).

Table 2. (continued)

Item  Criteria  (Total, n)

Ancillary analyses
  18  Results of any other analyses performed that could be used to inform the future definitive trial (NA)
Harms
  19  All important harms or unintended effects in each group (19)
Discussion: limitations
  20  Pilot trial limitations, addressing sources of potential bias and remaining uncertainty about feasibility (87)
Generalizability
  21  Generalizability (applicability) of pilot trial methods and findings to future definitive trial and other studies (32)
Interpretation
  22a Interpretation consistent with pilot trial objectives and findings, balancing potential benefits and harms, and considering other relevant evidence (62)
  22b Implications for progression from pilot to future definitive trial, including any proposed amendments (24)
Other information: registration
  23  Registration number for pilot trial and name of trial registry (29)
Protocol
  24  Where the pilot trial protocol can be accessed, if available (12)
Funding
  25  Sources of funding and other support (such as supply of drugs), role of funders (90)
  26  Ethical approval or approval by research review committee, confirmed with reference number (45)

Note. NA = not applicable; CONSORT = Consolidated Standards of Reporting Trials. Items 3b, 7b, and 11b were applicable to only some of the included studies. Items 6b, 14b, and 18 were not applicable to any of the included studies.
To our knowledge, the reporting quality of pilot and feasibility RCTs has not been previously investigated, and this study serves as a baseline assessment of reporting. We recognize that suboptimal reporting may be due to the fact that the CONSORT extension for pilot and feasibility RCTs was only recently published, in 2016. However, authors of pilot trials could have made use of the original CONSORT statement to improve reporting standards of pilot trials. As seen with other reporting guidelines (Moher, Jones, Lepage, & Consort, 2001), the uptake of the CONSORT pilot trials extension is expected to improve over time, and adherence to reporting should be reevaluated to measure the level of improvement. Moreover, the CONSORT pilot extension has many similarities to the standard CONSORT statement; yet despite the underlying RCT design, only 8% of included articles stated that their study was reported in accordance with the CONSORT statement. Given that pilot studies of behavioral interventions are a specific area requiring attention, there may be a need to combine aspects of the TREND (Transparent Reporting of Evaluations with Nonrandomized Designs) statement for reporting behavioral interventions with the CONSORT extension for pilot and feasibility RCTs to increase the overall level of reporting (Des Jarlais, Lyles, & Crepaz, 2004). Importantly, our survey of pilot RCTs found that reporting of items related to the rationale for the pilot study, specific pilot study objectives, and progression to future definitive trials was severely lacking. This may be because many of the included pilot studies were not conducted for the appropriate pilot and feasibility reasons outlined in our previous tutorial on pilot studies (Thabane et al., 2010). The majority of the included studies did not assess feasibility of the process, issues with time and resources, or study management issues, and rather focused on finding significant effects of the intervention.
Furthermore, only 13% of the included studies were conducted as a prelude to a future definitive trial, and most did not directly make amendments or recommendations for future definitive trials, which is the primary purpose of pilot and feasibility RCTs. It is important that future pilot and feasibility RCTs assess feasibility criteria and indicate specific improvements or amendments for larger definitive trials (Thabane et al., 2010).

Table 3. Incidence Rate Ratios for the Total Number of CONSORT Pilot Trial Extension Items Reported.

Characteristic: Unadjusted IRR (95% CI), p-value; Adjusted IRR (95% CI), p-value

Year of publication
  2012: Ref; Ref
  2013: 0.97 (0.85–1.11), p = .66; 1.00 (0.87–1.14), p = .99
  2014: 0.97 (0.85–1.11), p = .69; 0.96 (0.84–1.10), p = .60
  2015: 1.02 (0.90–1.17), p = .71; 1.01 (0.88–1.15), p = .91
  2016: 1.00 (0.87–1.14), p = .95; 1.03 (0.90–1.18), p = .67
Study location
  Other: Ref; Ref
  North America: 0.95 (0.88–1.04), p = .26; 0.91 (0.84–1.00), p = .05
Type of intervention
  Other: Ref; Ref
  Psychotherapy: 1.12 (1.02–1.22), p = .01; 1.10 (1.01–1.20), p = .03
Multisite study
  No: Ref; Ref
  Yes: 1.08 (0.95–1.23), p = .25; 1.08 (0.94–1.25), p = .25
Industry funding
  No: Ref; Ref
  Yes: 0.93 (0.77–1.14), p = .50; 0.93 (0.76–1.15), p = .51
  Not reported: 0.83 (0.72–0.96), p = .01; 0.83 (0.71–0.96), p = .01
Journal endorsement of CONSORT^a
  No: Ref; Ref
  Yes: 1.09 (1.00–1.18), p = .05; 1.08 (0.99–1.17), p = .10

Note. Ref = reference category; CI = confidence interval; CONSORT = Consolidated Standards of Reporting Trials. The dependent variable was the total number of reported CONSORT items for each study. Items that were "not applicable" were considered as reported for the purposes of this analysis.
^a Given the recency of the CONSORT extension for pilot and feasibility trials, we assessed whether the journal endorsed the standard CONSORT statement.
This is crucial among behavioral intervention trials, as they may require amendments and participants' feedback to improve recruitment and implementation. These findings indicate that many pilot trials are not conducted for appropriate reasons, even in the years following the publication of a detailed tutorial in the leading journal for pilot studies. Given these findings, it is important for journals not only to endorse the pilot trial extension to the CONSORT reporting statement but also to ensure that authors are addressing appropriate objectives in their pilot trials.

Limitations

Our study was limited in that we included only English-language studies. This approach was used for resource and feasibility purposes; however, it is also important to assess the level of reporting in the broader literature. We also searched the Medline/PubMed database alone to identify trials for inclusion. As we aimed to conduct a systematic survey of pilot trials in clinical areas, we deemed that one large database of biomedical research would be sufficient for randomly sampling 100 pilot trials. Additionally, these findings can be generalized only to pilot trials of behavioral interventions, since RCTs investigating pharmaceutical interventions have been shown to have higher reporting quality (Mbuagbaw et al., 2014; Samaan et al., 2013). However, we determined that it was important to conduct a distinct evaluation for behavioral interventions due to the suggested differences in reporting between pharmacological and nonpharmacological trials.

Conclusions

Pilot studies are an essential part of the medical literature, as they help prevent the waste of resources and increase the feasibility of future definitive trials. Pilot and feasibility trials of behavioral interventions can inform amendments to intervention designs and provide useful participant feedback for study design improvement.
As it stands, reporting quality based on the CONSORT statement extension for pilot and feasibility studies is suboptimal. Publication of the CONSORT extension specific to pilot and feasibility trials is expected to improve overall reporting. However, this baseline assessment highlights areas with low quality of reporting to guide researchers conducting pilot trials of behavioral interventions. Journal editors can use this research to support endorsement of the CONSORT statement as a way to improve the overall quality of reporting of pilot trials and of trials in general.

Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The authors received no financial support for the research, authorship, and/or publication of this article.

Supplemental Material
Supplementary material for this article is available online.

References

Arain, M., Campbell, M. J., Cooper, C. L., & Lancaster, G. A. (2010). What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Medical Research Methodology, 10, 67.
Begg, C., Cho, M., Eastwood, S., Horton, R., Moher, D., Olkin, I., . . . Simel, D. (1996). Improving the quality of reporting of randomized controlled trials: The CONSORT statement. Journal of the American Medical Association, 276, 637–639.
Bourgeois, F. T., Murthy, S., & Mandl, K. D. (2012). Comparative effectiveness research: An empirical study of trials registered in ClinicalTrials.gov. PLoS One, 7, e28820.
Cairo, F., Sanz, I., Matesanz, P., Nieri, M., & Pagliaro, U. (2012). Quality of reporting of randomized clinical trials in implant dentistry. A systematic review on critical aspects in design, outcome assessment and clinical relevance. Journal of Clinical Periodontology, 39, 81–107.
Craig, P., Dieppe, P., Macintyre, S., Michie, S., Nazareth, I., & Petticrew, M. (2008).
Developing and evaluating complex interventions: The new Medical Research Council guidance. British Medical Journal, 337, a1655.
Des Jarlais, D. C., Lyles, C., & Crepaz, N. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health, 94, 361–366.
Eldridge, S. M., Chan, C. L., Campbell, M. J., Bond, C. M., Hopewell, S., Thabane, L., & Lancaster, G. A. (2016a). CONSORT 2010 statement: Extension to randomised pilot and feasibility trials. Pilot and Feasibility Studies, 2, 64.
Eldridge, S. M., Chan, C. L., Campbell, M. J., Bond, C. M., Hopewell, S., Thabane, L., . . . Tugwell, P. (2016b). CONSORT 2010 statement: Extension to randomised pilot and feasibility trials. British Medical Journal (Online), 355, i5239.
Higgins, J. P. T., Altman, D. G., Gøtzsche, P. C., Jüni, P., Moher, D., Oxman, A. D., . . . Sterne, J. A. C. (2011). The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. British Medical Journal, 343, d5928.
Liu, L. Q., Morris, P. J., & Pengel, L. H. M. (2013). Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: A 3-year overview. Transplant International, 26, 300–306.
Mbuagbaw, L., Thabane, M., Vanniyasingam, T., Debono, V. B., Kosa, S., Zhang, S., . . . Thabane, L. (2014). Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: A systematic review. Contemporary Clinical Trials, 38, 245–250.
Moher, D., Jones, A., Lepage, L., & Consort, G. (2001). Use of the CONSORT statement and quality of reports of randomized trials: A comparative before-and-after evaluation. Journal of the American Medical Association, 285, 1992–1995.
Moher, D., Schulz, K. F., Altman, D. G., & Consort, G. (2001). The CONSORT statement: Revised recommendations for improving the
583 quality of reports of parallel-group randomised trials. St. Louis, MO: Elsevier. National Institutes of Health. (2004). Notice of revised NIH definition of “clinical trial”. Retrieved from http://grants. nih. gov/grants/ guide/notice-files/NOT-OD-15-015. html Samaan, Z., Mbuagbaw, L., Kosa, D., Borg Debono, V., Dillenburg, R., Zhang, S., . . . Thabane, L. (2013). A systematic scoping review of adherence to reporting guidelines in health care literature. Jour- nal of Multidisciplinary Healthcare, 6, 169–188. Thabane, L., Ma, J., Chu, R., Cheng, J., Ismaila, A., Rios, L. P., . . . Goldsmith, C. H. (2010). A tutorial on pilot studies: The what, why and how. BMC Medical Research Methodology, 10, 1. Van Teijlingen, E., & Hundley, V. (2010). The importance of pilot studies. Social Research Update, 35, 49–59. Whitehead, A. L., Sully, B. G., & Campbell, M. J. (2014). Pilot and feasibility studies: Is there a difference from each other and from a randomised controlled trial? Contemporary Clinical Trials, 38, 130–133. 584 Research on Social Work Practice 28(5) http://grants. nih. gov/grants/guide/notice-files/NOT-OD-15-015. html http://grants. nih. gov/grants/guide/notice-files/NOT-OD-15-015. html << /ASCII85EncodePages false /AllowTransparency false /AutoPositionEPSFiles true /AutoRotatePages /None /Binding /Left /CalGrayProfile (Gray Gamma 2.2) /CalRGBProfile (sRGB IEC61966-2.1) /CalCMYKProfile (U.S. 