Observational studies and pragmatic trials can complement classical randomized controlled trials (RCTs) by providing data more relevant to the circumstances under which medicine is routinely practiced, thereby offering practical guidance for clinicians. The bearing of RCT findings on day-to-day practice can be weighed, and the data more meaningfully interpreted by practicing clinicians, if evidence is integrated from a variety of study designs and methodologies. The advent of observational studies and pragmatic trials, often referred to as “real-life studies,” has met with a degree of skepticism, but their role and value are gaining widespread recognition and support among clinicians. This article discusses where observational studies and pragmatic trials have utility, namely: in addressing clinical questions that are unanswered and/or unanswerable by RCTs; in testing new hypotheses and possible license extensions; and in helping to differentiate between available therapies for a given indication. Moreover, it seeks to highlight how the different approaches fit within a conceptual framework of evidence relevant to clinical practice, a step change in the traditional view of medical evidence.
Some traditionalists view the growing field of effectiveness (as opposed to efficacy) research as a revolution. Efficacy trials have long been the backbone of evidence-based research; they seek to control all conditions, using highly selected patient populations and close clinical monitoring to maximize internal validity and assess (as far as is feasible) true cause and effect between an intervention and an outcome. In comparison, effectiveness studies have been described as a way of “letting the rats out of the cage and seeing what happens in real life.” Such studies seek to evaluate how interventions work in the diversity of patients treated in routine care, managed in clinical scenarios that differ widely within and between countries, and they seldom (if ever) reflect the highly interventional nature of efficacy trials.
Effectiveness studies are often described as “real-life” or “real-world” in recognition of their attempt to mirror real patients and practice more closely than is typical of classical randomized controlled trials (RCTs). They typically fall into two classifications: (1) observational studies using clinical, claims, and/or administrative databases, and (2) pragmatic trials, which differ from RCTs by using more generalizable inclusion criteria and/or implementable management approaches. Although such studies have often been mistrusted in the past, their importance and value are gaining widespread recognition and support among clinicians and commissioners. Indeed, effectiveness and comparative effectiveness evaluations are increasingly being used to differentiate between available treatments and to guide drug access decisions (1).
One of the major drivers of the field is concern about the undue weight and supremacy of evidence that has long been attributed to RCTs. Although RCTs are a necessary component of drug licensing, and the gold standard in evaluating short-term efficacy and safety of emerging therapies, their strict design can leave practicing clinicians questioning the relevance of the findings for the wide range of patients managed in routine practice.
The criticism is not of RCT methodology, but of the weighting given to evidence from different sources and of how that evidence is integrated into practical advice. One has only to attempt to read the stylized recommendations of guidelines written according to the requirements of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach (2) to realize how far these differ from the language and needs of practicing clinicians. This top-down approach contrasts starkly with the bottom-up approach of early consensus guidelines, which addressed practice from the perspective of the clinician. Thus, there is a need for evolution: there is a need to integrate evidence from all sources to arrive at treatment recommendations. There is a need to recognize the value of evidence from a diversity of complementary approaches that, together, make good each other’s methodologic deficiencies and that better accommodate the diverse needs and circumstances under which medicine is practiced. There is a need for guidance when n = 1.
This article discusses the respective roles (and limitations) of RCTs, pragmatic trials, and observational studies, and proposes scenarios where observational studies and pragmatic trials can complement the RCT evidence base—in addressing clinical questions that are unanswered and/or unanswerable by RCTs, in testing new hypotheses and possible license extensions, in helping to differentiate between available therapies for a given indication, and in helping to evaluate the cost effectiveness of interventions when implemented in different care settings. Moreover, it seeks to highlight the different approaches and to present them within a conceptual framework of evidence relevant to clinical practice.
RCTs are designed to answer precise questions about the efficacy of various types of medical interventions and to gather useful information about treatment-related adverse events. RCTs minimize all potential confounders and optimize internal validity by selecting an idealized, “pure” patient population, and by using close patient monitoring, consistently across all trial subjects. Their rigorous design allows them to provide a confident answer to the question: “Does intervention X work in an ideal (and specific subgroup of) patients receiving best standards of care?”
However, such strong internal validity comes at the expense of external validity. By excluding any patients with characteristics that could affect the efficacy signal of an intervention, and by managing and monitoring patients far more intensively than would be feasible in clinical practice, RCT findings are limited in their generalizability. The poor external validity of RCTs is a particular concern for long-term chronic conditions that affect broad and heterogeneous patient populations, such as those with asthma and chronic obstructive pulmonary disease (COPD), and conditions that are often complicated by comorbidities.
Patient recruitment to asthma RCTs, for example, often requires patients to be nonsmokers, to have no (or negligible) comorbid illnesses or concurrent medications, and to have good inhaler technique and high adherence to study therapies. Moreover, patients are frequently required to have a clear-cut asthma diagnosis, some degree of lung function impairment, substantial reversibility to short-acting β-agonists, and frequent rescue medication usage. There is also the unavoidable issue that patient participation in clinical trials often results in a better knowledge of their disease and more efficient health behaviors than those of the “average” patient with asthma. The unrepresentative nature of asthma RCTs was highlighted by a Norwegian study that set out to evaluate what percentage of routine patients with asthma would be eligible for a typical asthma RCT (3). After application of a number of standard clinical and lifestyle RCT inclusion criteria, only 1.2% of the usual care asthma population would have been eligible. Similarly, in New Zealand, Travers and colleagues (4) used a combination of respiratory questionnaires and pulmonary function tests to estimate the proportion of patients with COPD who would have been eligible for inclusion in major RCTs. Of 117 patients in the community with COPD, a median of 5% met the major RCT inclusion criteria. Indeed, over 90% of the patients who were receiving COPD medication would not have been eligible for the registration trials of those therapies.
In contrast to classical RCTs, which sacrifice generalizability in favor of maximizing internal validity, pragmatic trials compare interventions under more usual clinical circumstances to improve the applicability of findings to real-life issues and everyday clinical decision making. Instead of including only highly selected patient populations and requiring frequent patient monitoring (at a level often impracticable, infeasible, or unaffordable in routine care), pragmatic trials aim to assess outcomes of healthcare interventions in the context of real-life clinical practice. They are designed to include heterogeneous patient populations and/or to incorporate relevant levels of clinical care to help answer practical clinical questions for healthcare providers, patients, and policymakers. Moreover, they aim to use outcomes that are relevant to interested parties (e.g., patient-oriented measures) and to evaluate the effect of interventions over a more appropriate period of time than is often feasible in RCTs.
However, without the close monitoring involved in RCTs, pragmatic trials can face challenges in maintaining adequate patient follow-up. Moreover, although they are designed to be less interventional than RCTs, any level of monitoring or patient engagement that pragmatic trials introduce in excess of usual care (even the knowledge of being studied or observed) can alter behavior and potentially eliminate differences between two trial interventions being tested (5). Another challenge of pragmatic trial design is detecting what is often a small difference in treatment effect between two interventions under conditions of usual clinical care; this requires either a large study population or use of a validated survey instrument that is very sensitive to the treatment effect.
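The scale of that sample-size challenge can be made concrete with a standard two-proportion power calculation; this is a generic statistical sketch, and the response rates used below are purely illustrative assumptions, not figures from any trial discussed in this article.

```python
import math
from statistics import NormalDist

def two_sample_n(p1, p2, alpha=0.05, power=0.8):
    """Approximate per-arm sample size needed to detect a difference
    between two response proportions with a two-sided z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    pbar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * pbar * (1 - pbar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# A large, efficacy-trial-sized difference (60% vs. 40% responders)
# needs fewer than 100 patients per arm...
n_large = two_sample_n(0.60, 0.40)   # ~97 per arm

# ...but the small differences typical of usual-care comparisons
# (52% vs. 48%) need thousands per arm.
n_small = two_sample_n(0.52, 0.48)   # ~2,452 per arm
```

This quadratic blow-up in required sample size as the detectable difference shrinks is why pragmatic trials must either recruit very large populations or rely on outcome instruments that are highly sensitive to the treatment effect.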
Observational studies can be cohort, case–control, or cross-sectional in design. By recording data about a prescribed intervention without altering or influencing the normal patient–physician interaction, and by capturing data across the wide range of patients treated in routine care, they can provide valuable data on how management approaches are used, and the results associated with their use, in the real world.
Observational study data can be collected prospectively or drawn from retrospective datasets, from sources such as electronic medical records, anonymized medical research, pharmacy, or administrative claims databases. The routine collection of such data means that they are much greater in extent than predefined RCT datasets, and can often be obtained more quickly and at a lower cost (although rigorously conducted observational studies are not inexpensive).
Although observational studies can detect strong associations between test interventions and predefined outcomes that are generalizable to a broad patient population, they lack internal validity and are limited in the extent to which they can demonstrate an unequivocal cause-and-effect relationship. Patients included in observational studies are not randomly assigned to interventions, and patients and healthcare providers are not blinded; such studies therefore face concerns about confounding by indication or severity, and these confounders can never be totally controlled for. Moreover, missing data in observational studies can limit the interpretation of findings.
However, the validity of observational studies can be strengthened by identifying (and preregistering) the eligible population, design (e.g., matched cohort), outcomes, and potential confounding factors before work commences. Rigorous analytic methods that can also be used to reduce the possibility of bias or confounding include propensity score matching or matched cohort analyses using key patient and disease-related characteristics, and applying statistical adjustments for residual confounding factors.
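The logic of propensity score matching can be illustrated with a minimal, self-contained sketch on simulated data. Everything here is an illustrative assumption: the cohort, the single confounder (“severity”), the treatment-assignment model, and the matching parameters are invented for the example and are not drawn from any study cited in this article.

```python
import random

random.seed(42)

# Simulated cohort: disease severity confounds both the chance of
# receiving the treatment and the chance of a good outcome.
# The (assumed) true treatment effect is a +0.10 absolute improvement.
cohort = []
for _ in range(2000):
    severity = random.random()
    treated = random.random() < 0.2 + 0.6 * severity
    p_good = 0.7 - 0.3 * severity + (0.10 if treated else 0.0)
    cohort.append({
        "severity": severity,
        "treated": treated,
        "good_outcome": random.random() < p_good,
        # Propensity score = P(treated | severity). Known here by
        # construction; in a real analysis it would be estimated,
        # e.g., by logistic regression on baseline characteristics.
        "ps": 0.2 + 0.6 * severity,
    })

treated_pts = [p for p in cohort if p["treated"]]
control_pts = [p for p in cohort if not p["treated"]]

# Greedy 1:1 nearest-neighbour matching on the propensity score,
# without replacement, within a caliper of 0.02.
matched_t, matched_c, used = [], [], set()
for t in treated_pts:
    best_i, best_d = None, 0.02
    for i, c in enumerate(control_pts):
        d = abs(c["ps"] - t["ps"])
        if i not in used and d < best_d:
            best_i, best_d = i, d
    if best_i is not None:
        used.add(best_i)
        matched_t.append(t)
        matched_c.append(control_pts[best_i])

def outcome_rate(group):
    return sum(p["good_outcome"] for p in group) / len(group)

# The crude comparison is biased toward zero (treated patients are
# sicker); the matched comparison balances severity between arms and
# should typically sit closer to the assumed +0.10 effect.
crude_diff = outcome_rate(treated_pts) - outcome_rate(control_pts)
matched_diff = outcome_rate(matched_t) - outcome_rate(matched_c)
```

The key design choice is that matching discards unmatched patients to buy comparability, which is precisely the trade-off the text describes: better control of confounding by indication or severity, at the cost of some generalizability.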
The design differences between RCTs, pragmatic trials, and observational studies mean that they each have unique utility and distinct limitations. The hypothesis of this article is that, by integrating data from different study designs, it is possible to achieve a fuller picture of the true effectiveness, cost effectiveness, and safety of available healthcare interventions. It is acknowledged that RCTs are the gold standard for assessing the efficacy of interventions, but it is important to understand their limitations—which questions are inherently infeasible, or prohibitively expensive, to address in the RCT setting—and where other types of studies can provide important, complementary evidence.
To secure a license, RCTs are required to demonstrate noninferiority of the new intervention to a currently available option, and to be compared with the “gold standard” (whether or not that gold standard reflects usual care). The use of a noninferiority design can cast doubt on the interpretation of a trial’s findings. Although a superiority trial aims to prove that there is a clinically relevant difference between interventions, a noninferiority trial seeks to determine whether a new intervention is “no worse” than a reference intervention within a prespecified noninferiority margin (from –Δ to 0), not necessarily a clinically irrelevant margin. Although a properly designed and conducted superiority trial (if successful in showing a difference) is easy to interpret without further assumptions, noninferiority trials pose greater interpretation challenges. However, their use is necessary in circumstances where it would be unethical to use a placebo, no treatment control, or a very low dose of an active drug, because there is an effective treatment available that provides an important benefit to patients. Few registration trials are designed to assess superiority of a new intervention against the market leader, leaving clinicians with little evidence on which to differentiate between a number of available interventions (all of which have proven noninferiority to a common gold standard). By consistent application of historical precedent, any such biases can be echoed through successive generations of RCTs and reflected in evidence-based guidelines.
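The noninferiority decision rule described above can be sketched as a small worked example. The response rates, sample sizes, and 10-percentage-point margin below are illustrative assumptions only, not figures from any trial discussed in this article.

```python
from statistics import NormalDist

def noninferior(resp_new, n_new, resp_ref, n_ref, margin=0.10, alpha=0.025):
    """Declare noninferiority if the lower bound of the one-sided
    (1 - alpha) confidence interval for the difference in response
    rates (new - reference) lies above -margin (i.e., above -delta)."""
    p_new, p_ref = resp_new / n_new, resp_ref / n_ref
    se = (p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref) ** 0.5
    lower = (p_new - p_ref) - NormalDist().inv_cdf(1 - alpha) * se
    return lower > -margin, lower

# 78% vs. 80% response, 300 patients per arm, margin delta = 0.10:
ok, lower = noninferior(234, 300, 240, 300)
# ok is True: the new therapy is "no worse" within delta even though
# its point estimate is slightly worse than the reference; this is
# exactly why noninferiority results are harder to interpret than
# superiority results, and why the choice of delta matters so much.
```

Note that the conclusion flips if the margin is tightened or the observed gap widens, so a claim of noninferiority is only as clinically meaningful as the prespecified delta.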
Because of their recruitment of highly selected, idealized populations and their use of close patient monitoring, RCTs are not designed to answer questions about the effectiveness of interventions when used in less controlled clinical management settings or in patient subgroups of potential clinical interest (e.g., smokers or patients with comorbidities). They are also limited in their ability to evaluate the long-term safety of interventions and rare treatment-related events. One of the primary reasons for this is the high cost of registration trials, which often means that only relatively short-term evaluations of new interventions (in highly selected populations) are financially viable. Such trials offer little insight into patients’ acceptance of new interventions, or the feasibility of implementing them in practice.
Administrative delays and medical ethics can also pose obstacles to conducting RCTs, as was demonstrated by the Management of Asthma in School Age Children on Therapy (MASCOT) trial. MASCOT was an RCT funded by the U.K. Health Technology Appraisal (HTA) Committee, and was designed to compare different step-up options in pediatric asthma. The trial met with numerous delays in its planning: drawn-out approvals from pharmaceutical companies supplying the trial therapies; complications around funding of excess treatment costs; slow appointment of a trial coordinator; and lengthy protocol rewrites to convert the HTA protocol to European standards. Finally, by the time patient recruitment began, it was apparent that almost all children being managed at the research centers were already on optimized therapy. As it would have been unethical to randomize eligible patients to alternative treatment strategies, recruitment was not feasible, and the trial had to be abandoned (6).
The key to addressing the “gaps” in the RCT evidence base is not to try to design RCTs to answer every question about an intervention—they intrinsically cannot. Instead, it is to understand the essence of the question being asked, to understand the range of study designs available (e.g., RCTs, pragmatic trials, and observational studies), and to select the appropriate study design(s) to answer the question at hand. By drawing on a diversity of different study designs and analytical approaches, a fuller picture of the utility of an intervention can be established.
Some examples (by no means exhaustive) of scenarios in which observational studies and pragmatic trials can answer questions that are inappropriate for an RCT are discussed here.
Observational studies and pragmatic trials reflect (or can be designed to reflect) the level of physician/clinician interaction typical of routine care. As such, they can capture patient activity (e.g., consultation patterns, medication adherence) in a way that is unachievable in RCTs. They can also record patterns of care, identify where routine practice appears to differ from guideline recommendations, and highlight areas where there may be guideline implementation challenges. Observing patient behavior in the absence of close RCT monitoring can provide important insights into patients’ experiences of their disease and its management. Indeed, data collected outside the rigors of an RCT better reflect patient behavior and preferences, and the analysis of routine datasets may help to guide management approaches that are easier to implement in practice than those based on RCT strategies.
The potential importance of patient preference was highlighted by a pragmatic trial funded by the UK HTA. The pragmatic single-blind trial and health economic evaluation of leukotriene receptor antagonists (LTRAs) in primary care at steps 2 and 3 of the national asthma guidelines (ELEVATE) found LTRAs to be equivalent to inhaled corticosteroids (ICSs) at Global Initiative for Asthma (GINA) step 2 and to add-on long-acting β-agonists (LABAs) at GINA step 3 (7, 8). The primary outcome of the trial was a patient-focused quality-of-life measure, the Mini Asthma Quality of Life Questionnaire (MiniAQLQ), at 2 months. No significant differences in MiniAQLQ scores were reported between LTRAs and ICSs (at GINA step 2) or between add-on LTRAs and add-on LABAs (at GINA step 3) after 2 months of treatment. The median rate of adherence to therapy was higher for LTRAs than for the ICS comparator arm (65% vs. 41%; P = 0.11) and for the add-on LABA comparator arm (74% vs. 46%; P = 0.007). These data may suggest a patient preference for oral rather than inhaled therapy, something that, although recognized within some cultural and religious groups, may be overlooked in more routine prescribing scenarios.
Patient dosing regimen preference (and related outcomes) has also been explored in a recent observational asthma study conducted by some of the authors of this article (9). The study pooled clinical data from two UK primary care databases: the General Practice Research Database (now the Clinical Practice Research Datalink) (10) and the Optimum Patient Care Research Database (11). Two cohorts were evaluated: an ICS cohort (n = 26,834) and an ICS/LABA cohort (n = 20,814). The ICS cohort included patients receiving twice-daily (BD) ICSs who stepped down ICS dose (≥50% decrease) and either continued on a BD regimen or switched to a once-daily (QD) regimen. The ICS/LABA cohort included patients receiving BD ICSs/LABAs who stepped down ICS/LABA dose and either continued on a BD regimen or switched to a QD regimen. Significant improvements in most endpoints were recorded during the year after step down (compared with the prior year) for both the ICS and ICS/LABA cohorts, irrespective of dosing regimen. However, in both cohorts, the greatest improvements were seen for patients stepping down to QD dosing. Adherence also improved significantly for all patients after therapy step down, but again most markedly for the QD cohorts, with the (counterintuitive) result that the mean consumed ICS dose actually increased when patients stepped down from BD to QD therapy. The authors concluded that stepping down therapy is a valid management option that may improve asthma-related outcomes (some of which may result from increased adherence), and that understanding patient dosing preferences may help prescribers to better individualize therapy.
From an ethical standpoint, observational studies offer a way to address interesting and important clinical questions that are ethically unevaluable or challenging in the RCT setting. Having abandoned attempts to evaluate pediatric asthma step-up options due (in part) to ethical recruitment challenges, some of the MASCOT investigators are now collaborating on an observational study (funded by the Respiratory Effectiveness Group) of similar design. Pooling data from the UK Optimum Patient Care Research Database and the Clinical Practice Research Datalink, the study will investigate the effect of therapy step up (add-on LABAs, add-on LTRAs, or ICS dose increase) and the effect of a change in inhaler type (i.e., dry powder inhaler [DPI] to pressurized, metered-dose inhaler [pMDI]; pMDI to DPI) on asthma control. In contrast to the RCT design, this study is feasible, because records for the majority of patients within the datasets began before their therapy was optimized.
The design of asthma RCTs typically calls for frequent patient monitoring (lung function testing and inhaler training), which cannot easily be duplicated in everyday practice. In routine care, time and equipment limitations are operative, and clinic visit frequency, treatment adherence, and inhaler technique are often suboptimal. These features of clinical practice can be captured in real-life studies. Moreover, many clinically important patient populations are not studied in RCTs, such as smokers and patients with “insufficient” bronchodilator reversibility, serious comorbidities, and adherence or other psychosocial problems (9, 12–25).
Studies with a more pragmatic approach to patient recruitment (pragmatic trials and/or observational studies) can help to explore whether RCT efficacy outcomes hold true across important patient subgroups. Indeed, they have already helped to extend the RCT evidence base for the subgroup of patients with asthma with rhinitis (who may benefit from systemic antileukotriene therapy) and smokers (whose response to ICSs may be impaired) (7, 26). They are also starting to provide additional insights into the potential role of inhaler device on treatment outcomes.
In contrast to guideline statements that inhaler device type has no apparent effect on treatment-related outcomes (8, 27–31), results of large observational cohort studies suggest otherwise. A study conducted by Price and colleagues (32) in patients initiating ICSs or stepping up ICS dose across a range of monotherapies found that breath-actuated inhalers and DPIs were associated with better outcomes than pMDIs. Observational study data from Price and colleagues (33) also suggest that combination ICS/LABA treatment outcomes may be affected by the type of delivery device prescribed. Again using the UK General Practice Research Database, outcomes were compared in patients receiving combined fluticasone–salmeterol therapy (n = 3,134) via pMDI or DPI. The patients on pMDI were found to have a greater likelihood of achieving asthma control over 1 year than those using a DPI (33). The acceptance of, and satisfaction with, different delivery devices will also vary across countries and different cultural and ethnic groups.
Use of the term “real-life” to refer to observational studies and pragmatic trials implies that RCTs are not real-life. This dichotomy of the evidence base—with RCTs and efficacy at one end and real-life studies and effectiveness at the other—is neither accurate nor helpful.
Although studies can differ in terms of their internal/external validity and in the quality and robustness of their design, all in vivo studies deal with real people, regardless of how study populations are recruited (from primary care or via artificial/controlled selection processes) or managed (by real clinicians or “research factory” staff). It is important when conducting and reporting on all studies to address where they fit within the existing medical evidence, and to use clear and appropriate study descriptors to aid in interpretation and application of results.
The Respiratory Effectiveness Group’s Standards Committee has pioneered some work in this area, proposing that all studies—RCTs, pragmatic trials, observational studies, and more—can be classified relative to each other within a two-dimensional integrated space bounded by a population axis and an ecology-of-care (or clinical management) axis (see Figure 1) (34). The ecology-of-care axis categorizes studies along a continuous scale, from highly interventional classical RCT management with rigorous follow-up at one end to usual care at the other; the population axis runs continuously from a “highly selected” population through to a “managed care” population (i.e., managed as having the condition irrespective of diagnostic status). This integrated evidence framework dispels the real-life versus non–real-life dichotomy of the evidence base. It recognizes that a pragmatic trial can reflect real life in terms of its patient inclusion, but not its ecology of care; or it might accurately replicate routine care, but only in a highly selected patient population; or it might reflect real life in both ecology of care and patient selection. Similarly, observational studies may be entirely noninterventional and accurately capture aspects of routine care, but they do not always include broad practice populations; they, too, can be designed to include only a highly characterized patient population.
One of the key values of the proposed framework is its creation of a standardized two-element description (ecology of care and patient selection) for all types of studies (e.g., one of “highly controlled,” “pragmatically controlled,” or “observational” [for ecology of care], and one of “confirmed, pure diagnosis,” “clinical diagnosis,” or “managed as ‘X’ population” [for patient selection]).
The framework’s integrated view of the evidence base helps to illustrate the relative and complementary nature of different studies. Pragmatic trials and observational studies either seek to validate RCT findings in non-RCT populations and/or management settings, or to address questions unanswerable by the classical RCT design. As such, their findings can supplement those from RCTs and help to qualify guideline recommendations.
For example, current smokers (or patients with a smoking history of ≥10 pack-years) are consistently excluded from asthma RCTs. Indeed, the smoking exclusion criteria grow increasingly tight (and less real life) as designers of RCTs attempt to demonstrate greater control. Basing its recommendations largely on RCT data, GINA recommends that patients with asthma be initiated on low-dose ICSs, and that the dose be increased until optimum control is achieved (subsequent step down is an option if control is achievable at a lower dose) (35). The strategy makes no allowances for different patient subgroups, yet it is known that the effect of low-dose ICSs is diminished in smokers, in whom they may offer little, if any, efficacy (16, 36). By drawing on data from observational studies and pragmatic trials that included patients with asthma who smoke, an addendum to the guidance could suggest that physicians may want to consider a higher starting dose of ICSs in cigarette smokers, and that LTRAs may offer an alternative to high-dose ICSs for the management of current cigarette smokers (7, 37).
Confidence in integrating evidence from different types of studies requires confidence in the quality of the data each contributes. Irrespective of its design (observational, pragmatic, or tightly controlled), the quality of any study’s findings depends on the quality of the data it draws on, on the rigor of the analysis methods used, and on the extent to which the results answer the initial research question. Many concerns in this area focus on the lack of clear quality standards for observational studies; yet experts currently leading real-life research efforts are increasingly working together to align, document, disseminate, and use methods for observational studies that will become consistent standards. Establishing and formalizing quality standards for data collection, as is the focus of the Respiratory Effectiveness Group (REG) and the International Primary Care Respiratory Group’s (IPCRG) Uncovering and Noting Long-term Outcomes in COPD to enhance Knowledge (UNLOCK) committee, can help to guide the quality of future observational research and the development of emerging datasets (38). Likewise, validation of outcome measures and publication of standardized nomenclature and quality standards are required to help with benchmarking and refinement of research methods, to address concerns around the quality of real-life data, and to see such data better integrated into guideline recommendations. These issues are developed further by Roche and colleagues (pp. S99–S104) in the article that follows (39).
Classical RCTs unequivocally form the backbone of the licensing and registration of new interventions. They answer critical questions about short-term drug safety and the efficacy of a therapy when used in the ideal patient in the optimum healthcare setting, but they do not answer all the questions a clinician may face when making decisions in routine practice. On the other hand, pragmatic trials and observational studies lack the internal validity of a registration RCT, but they do shine light on important aspects of patient care that are not addressed, or are simply unevaluable, using a classical RCT design.
Pragmatic trials and observational studies also benefit from being less costly, allowing them to address longer-term aspects of care and to test hypotheses (including hypothesized license extensions) that would be otherwise unaffordable. Convention has been for RCTs to occupy the preregistration space and pragmatic trials and observational studies that of postregistration. However, the greater affordability of observational studies (and, to some degree, pragmatic trials) may see this order being reversed, and observational studies and pragmatic trials being used to test a variety of hypotheses to inform the direction of future RCT expenditure (Figure 2).
The era of the evidence hierarchy is being redefined. Different study designs should no longer be ranked in vertical pyramids or pitted against each other at opposite ends of the quality spectrum. If underpinned by clear quality standards and allowed to evolve, real-life studies will present increasing opportunities to ask more sophisticated questions about therapies and to evaluate complex medical interventions. Different study designs should be called on—as appropriate—to answer clinical questions. Devising frameworks that unite (rather than segregate) different streams of research, and establishing standards to appraise the quality and improve confidence in different types of research, are important steps toward achieving a more integrated approach to evidence reviews.
1. Conway PH, Clancy C. Comparative-effectiveness research—implications of the Federal Coordinating Council’s report. N Engl J Med 2009;361:328–330.
2. GRADE Working Group. Grading of recommendations assessment, development and evaluation (GRADE) [Internet] [accessed 2013 Aug 15]. Available from: http://www.gradeworkinggroup.org/
3. Herland K, Akselsen JP, Skjønsberg OH, Bjermer L. How representative are clinical study patients with asthma or COPD for a larger “real life” population of patients with obstructive lung disease? Respir Med 2005;99:11–19.
4. Travers J, Marsh S, Caldwell B, Williams M, Aldington S, Weatherall M, Shirtcliffe P, Beasley R. External validity of randomized controlled trials in COPD. Respir Med 2007;101:1313–1320.
5. Konstantinou GN. Pragmatic trials: how to adjust for the ‘Hawthorne effect’? Thorax 2012;67:562. Author reply p. 562.
6. Lenney W, Perry S, Price D. Clinical trials and tribulations: the MASCOT study. Thorax 2011;66:457–458.
7. Price D, Musgrave SD, Shepstone L, Hillyer EV, Sims EJ, Gilbert RF, Juniper EF, Ayres JG, Kemp L, Blyth A, et al. Leukotriene antagonists as first-line or add-on asthma-controller therapy. N Engl J Med 2011;364:1695–1707.
8. Global Initiative for Asthma. GINA report, global strategy for asthma management and prevention, updated 2012 [Internet] [accessed 2013 Aug 15]. Available from: www.ginasthma.org
9. Price D, Chisholm A, Hillyer EV, Burden A, Von Ziegenweidt J, Svedsater H, Dale P. Effect of inhaled corticosteroid therapy step-down and dosing regimen on measures of asthma control. J Aller Ther 2013;4:1–8.
10. Clinical Practice Research Datalink [Internet] [accessed 2013 Aug 15]. Available from: http://www.cprd.com/home/
11. Optimum Patient Care Research Database (OPCRD) [Internet] [accessed 2013 Aug 15]. Available from: http://www.optimumpatientcare.org/Html_Docs/OPCRD.html
12. Smith JR, Noble MJ, Musgrave S, Murdoch J, Price GM, Barton GR, Windley J, Holland R, Harrison BD, Howe A, et al. The At-Risk Registers In Severe Asthma (ARRISA) study: a cluster-randomised controlled trial examining effectiveness and costs in primary care. Thorax 2012;67:1052–1060.
13. Price D, Hillyer EV, van der Molen T. Efficacy versus effectiveness trials: informing guidelines for asthma management. Curr Opin Allergy Clin Immunol 2013;13:50–57.
14. Thomas M, Price D. Impact of comorbidities on asthma. Expert Rev Clin Immunol 2008;4:731–742.
15. Clatworthy J, Price D, Ryan D, Haughney J, Horne R. The value of self-report assessment of adherence, rhinitis and smoking in relation to asthma control. Prim Care Respir J 2009;18:300–305.
16. Chalmers GW, Macleod KJ, Little SA, Thomson LJ, McSharry CP, Thomson NC. Influence of cigarette smoking on inhaled corticosteroid treatment in mild asthma. Thorax 2002;57:226–230.
|17 .||Chaudhuri R, Livingston E, McMahon AD, Thomson L, Borland W, Thomson NC. Cigarette smoking impairs the therapeutic response to oral corticosteroids in chronic asthma. Am J Respir Crit Care Med 2003;168:1308–1311.|
|18 .||Gallefoss F, Bakke PS. Does smoking affect the outcome of patient education and self-management in asthmatics? Patient Educ Couns 2003;49:91–97.|
|19 .||Althuis MD, Sexton M, Prybylski D. Cigarette smoking and asthma symptom severity among adult asthmatics. J Asthma 1999;36:257–264.|
|20 .||Hakala K, Stenius-Aarniala B, Sovijärvi A. Effects of weight loss on peak flow variability, airways obstruction, and lung volumes in obese patients with asthma. Chest 2000;118:1315–1321.|
|21 .||Saint-Pierre P, Bourdin A, Chanez P, Daures JP, Godard P. Are overweight asthmatics more difficult to control? Allergy 2006;61:79–84.|
|22 .||Lavoie KL, Bacon SL, Labrecque M, Cartier A, Ditto B. Higher BMI is associated with worse asthma control and quality of life but not asthma severity. Respir Med 2006;100:648–657.|
|23 .||Molimard M, Raherison C, Lignot S, Depont F, Abouelfath A, Moore N. Assessment of handling of inhaler devices in real life: an observational study in 3811 patients in primary care. J Aerosol Med 2003;16:249–254.|
|24 .||Giraud V, Roche N. Misuse of corticosteroid metered-dose inhaler is associated with decreased asthma stability. Eur Respir J 2002;19:246–251.|
|25 .||Barnes PJ, Ito K, Adcock IM. Corticosteroid resistance in chronic obstructive pulmonary disease: inactivation of histone deacetylase. Lancet 2004;363:731–733.|
|26 .||Dahlén SE, Dahlén B, Drazen JM. Asthma treatment guidelines meet the real world. N Engl J Med 2011;364:1769–1770.|
|27 .||Papi A, Haughney J, Virchow JC, Roche N, Palkonen S, Price D. Inhaler devices for asthma: a call for action in a neglected field. Eur Respir J 2011;37:982–985.|
|28 .||Dolovich MB, Ahrens RC, Hess DR, Anderson P, Dhand R, Rau JL, Smaldone GC, Guyatt G; American College of Chest Physicians; American College of Asthma, Allergy, and Immunology. Device selection and outcomes of aerosol therapy: evidence-based guidelines: American College of Chest Physicians/American College of Asthma, Allergy, and Immunology. Chest 2005;127:335–371.|
|29 .||Brocklebank D, Ram F, Wright J, Barry P, Cates C, Davies L, Douglas G, Muers M, Smith D, White J. Comparison of the effectiveness of inhaler devices in asthma and chronic obstructive airways disease: a systematic review of the literature. Health Technol Assess 2001;5:1–149.|
|30 .||British Thoracic Society (BTS), Scottish Intercollegiate Guidelines Network (SIGN). British guideline on the management of asthma [Internet]. May 2008 [accessed 2013 Aug 15]. Available from: http://www.sign.ac.uk/guidelines/fulltext/101/index.html|
|31 .||National Asthma Education and Prevention Program. Expert Panel Report 3: Guidelines for the diagnosis and management of asthma [Internet]. 2007 [accessed 2013 Aug 15]. Available from: http://www.nhlbi.nih.gov/guidelines/asthma/asthgdln.pdf|
|32 .||Price D, Haughney J, Sims E, Ali M, von Ziegenweidt J, Hillyer EV, Lee AJ, Chisholm A, Barnes N. Effectiveness of inhaler types for real-world asthma management: retrospective observational study using the GPRD. J Asthma Allergy 2011;4:37–47.|
|33 .||Price D, Roche N, Christian Virchow J, Burden A, Ali M, Chisholm A, Lee AJ, Hillyer EV, von Ziegenweidt J. Device type and real-world effectiveness of asthma combination therapy: an observational study. Respir Med 2011;105:1457–1466.|
|34 .||Roche N, Reddel HK, Agusti A, Batemand ED, Krishnan JA, Martin RJ, Papi A, Postma D, Thomas M, Brusselle G, et al. Integrating real-life studies in the global therapeutic research framework. Lancet Respir Med 2013;1:30–32.|
|35 .||Bateman ED, Hurd SS, Barnes PJ, Bousquet J, Drazen JM, FitzGerald M, Gibson P, Ohta K, O’Byrne P, Pedersen SE, et al. Global strategy for asthma management and prevention: GINA executive summary. Eur Respir J 2008;31:143–178.|
|36 .||Tomlinson JE, McMahon AD, Chaudhuri R, Thompson JM, Wood SF, Thomson NC. Efficacy of low and high dose inhaled corticosteroid in smokers versus non-smokers with mild asthma. Thorax 2005;60:282–287.|
|37 .||Price D, Popov TA, Bjermer L, Lu S, Petrovic R, Vandormael K, Mehta A, Strus JD, Polos PG, Philip G. Effect of montelukast for treatment of asthma in cigarette smokers. J Allergy Clin Immunol 2013;131:763–771.|
|38 .||Chavannes N, Ställberg B, Lisspers K, Roman M, Moran A, Langhammer A, Crockett A, Cave A, Williams S, Jones R, et al. UNLOCK: Uncovering and Noting Long-Term Outcomes in COPD to enhance Knowledge. Prim Care Respir J 2010;19:408.|
|39 .||Roche N, Reddel H, Martin R, Brusselle G, Papi A, Thomas M, Postma D, Thomas V, Rand C, Chisholm A, et al.; Respiratory Effectiveness Group. Quality standards for real-world research: focus on observational database studies of comparative effectiveness. Ann Am Thorac Soc 2014;11:S99–S104.|