Novel Ways to Get Good Trial Data: The UK Experience

By Tom MacDonald, Isla Mackenzie, Li Wei
May 18, 2012 | Discussion Paper

Introduction

Traditional randomised clinical trials are very expensive and time-consuming and often have poor external validity (Ware and Hamel, 2011). The challenge for modern medicine is to find ways of producing good-quality evidence with good external validity, and to do so more efficiently: with less bureaucracy, more expeditiously, and, above all, less expensively than in the past. Achieving these goals seems daunting at first, but most of us work in organisations that already have remarkable infrastructures that can be harnessed for research.

This paper will use the example of the United Kingdom (UK) National Health Service (NHS) but the methods are likely to be generalisable to other health care systems and health maintenance organisations.

The UK NHS is an organisation that has 61 million subjects about whom everything is known (at least theoretically). These patients are treated from cradle to grave in a system that is free at the point of delivery. So, there are data on all drug treatments (both prescribed and dispensed), all physician visits, all comorbidities, all laboratory tests, all hospitalisations, and all certified causes of death. In addition, there are data about ancestors and offspring and much other detail about habits, social deprivation, and so on. When asked in surveys, the public appears to support the use of these data to inform on the effectiveness and safety of medicines (Mackenzie et al., 2012).

The NHS in Scotland has traditionally had good record-linkage abilities due to farsighted public health physicians in the 1960s (Kendrick, undated, 1997). This system has been utilised for observational research such as pharmacovigilance (Evans and MacDonald, 1999) and to develop clinical disease registers and managed clinical networks (Morris et al., 1997), but more recently record-linkage has been seen as an accurate way to track the outcomes of subjects randomised in clinical trials (Ford et al., 2007; West of Scotland Coronary Prevention Study Group, 1995).

 

Randomising Patients: Streamlined Studies

Randomising patients to different treatments within the NHS and using record-linkage to track outcomes is the concept behind the “streamlined study.” Such studies can be double-blind, with subjects being provided with masked investigational medicinal products (IMPs), or, if better external validity is required, some internal validity can be traded for it by using open designs. The Febuxostat versus Allopurinol Streamlined Trial (FAST) study, which is running in the United Kingdom and Denmark, is an example of a streamlined study in which IMPs are provided to patients by mail. Follow-up is by a composite of e-mail, phone calls, family doctors, and record linkage to hospitalisations and deaths (for more information on the FAST study, visit http://www.ukctg.nihr.ac.uk/trialdetails/ISRCTN72443728). Blinding the end-points of this type of study (which are unlikely to be influenced by the patient’s knowledge of what they are taking) results in the Prospective, Randomised, Open, Blinded Endpoint, or PROBE, design (Hansson et al., 1992). Of course, the ultimate “randomised effectiveness” study simply randomises therapy, which is then prescribed and compared to normal care prescribing. One such study, the Standard Care versus Celecoxib Outcome Trial (SCOT), is currently running in the United Kingdom, Denmark, and the Netherlands (for more information on the SCOT study, visit http://clinicaltrials.gov/ct2/show/NCT00447759). These “naturalistic” designs most closely mimic usual care; they are designed to inform on the safety and effectiveness of medicines and can help policy makers decide whether those medicines should be reimbursed.

The use of this type of trial design specifically for post-approval safety research has recently been reviewed (Reynolds et al., 2011). Only 13 studies of this design were identified in the review, so such studies do not yet have an extensive track record. However, the authors concluded that the design has demonstrated utility for comparative research on medicines and vaccines.

 

University/NHS Sponsorship

There have been calls for studies of medicines to be run independently from the pharmaceutical industry (Steinbrook and Kassirer, 2010). In Europe, the study sponsor is the legal entity responsible for the conduct of a study and is independent of the study funder. We believe that academic and/or NHS sponsorship is the best way to carry out independent research on medicines and this is the mechanism under which we carry out this research, such as the SCOT and FAST trials (MacDonald et al., 2010).

 

Pros and Cons of Streamlined Studies

Streamlined studies make use of information technology (IT) to assist with the identification and invitation of patients to participate. In the case of the NHS, individual primary care physician practices can be recruited and contracted. These practices then allow electronic searches of their practice records to identify subjects who meet the study entry criteria. A list of potentially suitable subjects is scrutinised by the primary care physicians to remove subjects whom they deem unsuitable to be contacted. The final list is then used to send letters to these subjects (the text of which is preapproved by the ethics committee), printed on practice notepaper and signed by the patient’s doctor. A study patient information sheet is enclosed with this letter. Those patients who reply and who express willingness to be considered for study inclusion are then contacted by the study research nurse, who takes consent, formally screens them using an electronic case report form, and randomises them using an interactive voice recognition system or online randomisation tool.
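
As a purely illustrative sketch, the Python fragment below shows the kind of electronic pre-screen that such a system might run over an extract of practice records before the list is passed to the primary care physicians for scrutiny. The record fields and entry criteria (age threshold, required diagnosis, excluded co-medication) are hypothetical and are not taken from the SCOT or FAST protocols.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PatientRecord:
        patient_id: str
        age: int
        diagnoses: List[str] = field(default_factory=list)      # coded diagnoses
        current_drugs: List[str] = field(default_factory=list)  # current prescriptions

    # Hypothetical entry criteria, for illustration only.
    MIN_AGE = 60
    REQUIRED_DIAGNOSIS = "hypertension"
    EXCLUDED_DRUGS = {"warfarin"}

    def is_potentially_eligible(rec: PatientRecord) -> bool:
        """Electronic pre-screen; the resulting list is still reviewed by the GP."""
        return (
            rec.age >= MIN_AGE
            and REQUIRED_DIAGNOSIS in rec.diagnoses
            and not EXCLUDED_DRUGS.intersection(rec.current_drugs)
        )

    def prescreen(records: List[PatientRecord]) -> List[str]:
        """Return the IDs to pass to the practice before any letter is sent."""
        return [r.patient_id for r in records if is_potentially_eligible(r)]

    sample = [
        PatientRecord("A1", 67, ["hypertension"], ["amlodipine"]),
        PatientRecord("B2", 55, ["hypertension"], []),
        PatientRecord("C3", 72, ["hypertension"], ["warfarin"]),
    ]
    print(prescreen(sample))  # -> ['A1']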

The benefit of using such a system is that large populations of patients who meet the inclusion and exclusion criteria for a study can be screened efficiently and invited to participate. Thus, in the SCOT study, for example, more than 630 family practices, representing a total population in excess of 4 million patients, have signed research contracts and had their records electronically searched to identify suitable study subjects. This is a highly efficient way to screen large populations.

The follow-up of subjects is also efficient as all hospitalisations and deaths are recorded centrally in the United Kingdom, so subjects can be efficiently tracked by record-linkage. Secure, study-specific web portals allow family physicians and study staff to report adverse events, adjust medication, track laboratory results, etc. NHS records of hospitalisations suspected of being endpoints can be retrieved, scanned into portable document format, redacted where necessary, and abstracted to forms to allow endpoint committee adjudication. Such adjudication is also done using remote secure systems that allow geographically diverse end-point committee members to interact efficiently.
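
The record-linkage step can be pictured as a join between the randomised cohort and centrally held hospitalisation records. The sketch below uses a simple deterministic match on a single shared identifier; the identifiers, field names, and diagnosis codes are hypothetical, and real linkage systems use far more sophisticated deterministic and probabilistic matching.

    from collections import defaultdict

    # Hypothetical randomised cohort: study identifier -> allocated arm.
    trial_cohort = {"1001": "arm A", "1002": "arm B"}

    # Hypothetical extract of centrally held hospitalisation records.
    hospitalisations = [
        {"patient_id": "1001", "admission_date": "2011-03-02", "main_diagnosis": "I21"},
        {"patient_id": "9999", "admission_date": "2011-04-10", "main_diagnosis": "J18"},
    ]

    def link_admissions(cohort, admissions):
        """Return the admissions belonging to trial subjects, grouped by subject."""
        linked = defaultdict(list)
        for adm in admissions:
            if adm["patient_id"] in cohort:  # deterministic exact match on the identifier
                linked[adm["patient_id"]].append(adm)
        return dict(linked)

    # Admissions for subject 1001 would then be retrieved, redacted, and abstracted
    # for end-point adjudication; subject 9999 is not in the trial and is ignored.
    print(link_admissions(trial_cohort, hospitalisations))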

These electronic systems allow efficiencies that reduce the overall costs of research. The situation seems even better in Denmark, where prescribing records are collated centrally along with hospitalisation and mortality data and where the population seems to welcome, and even expect, the use of electronic records for research purposes. However, these electronic systems do not solve all study problems.

 

Bureaucracy

The bureaucratic processes of obtaining multiple consents and approvals required to carry out clinical research are not diminished by such study designs (Duley et al., 2008; McMahon et al., 2009). This article is not the place to rehearse these issues. Suffice it to say that the UK Academy of Medical Sciences has produced a report for the UK government on this matter that has made a number of recommendations aimed at reducing this burden of bureaucracy (The Academy of Medical Sciences, 2011). (See also the Institute of Medicine discussion paper by Sir Michael Rawlins, Health Research as a Public Good, 2012.)

 

Engaging Patients in Research

A major hurdle facing streamlined studies (as well as other clinical studies) is how to engage patients in the health research agenda. While we can efficiently identify suitable subjects electronically based on eligibility criteria, for every 100 patients written to, our experience is that an average of only 14 subjects are randomised. The dominant reason for this is that most patients do not reply to letters of invitation written to them by their family physicians. In inner-city practices in the United Kingdom, 70 percent or more of subjects do not reply at all. In rural practices, we get more replies and more positive replies, perhaps because patients have closer relationships with their family doctors in the rural setting. However, while electronically searching family physician records provides a way of writing to large cohorts of suitable patients, it does not solve the problem of how to engage patients in research and enhance recruitment. Several reviews have addressed this issue (Caldwell et al., 2010; Treweek et al., 2010). Despite trying numerous initiatives, we have not yet found a good solution to this problem (See Box 1, p. 8, and Mackenzie et al., 2010). Cracking this difficult nut will require effort.
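
The recruitment arithmetic implied by this yield is straightforward, and a rough mailing-volume calculation (sketched below in Python, with arbitrary illustrative target sample sizes) makes clear why low response rates dominate the cost and timescale of recruitment.

    import math

    def letters_required(target_randomised: int, yield_per_letter: float = 0.14) -> int:
        """Invitation letters needed to expect a given number of randomised subjects,
        assuming roughly 14 randomisations per 100 letters, as observed above."""
        return math.ceil(target_randomised / yield_per_letter)

    # Illustrative targets only, not the sample sizes of SCOT or FAST.
    for target in (1_000, 5_000, 10_000):
        print(f"{target:>6} randomised subjects -> about {letters_required(target):>7,} letters")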

 

Paying Patients?

We have recently received ethical committee approval for a study to formally evaluate the effect of paying patients an incentive of £100 (about $150) to participate in clinical trials. This method has some evidence base (Halpern et al., 2004; Martinson et al., 2000) and appears to be common practice in the United States (Dickert et al., 2002). Perhaps this could provide a solution both to recruiting more subjects and to recruiting subjects who are more representative of the population at large.

 

Event Rates

Clinical outcome trials are designed to include subjects who are likely to experience outcome events. Even for a composite outcome such as hospitalisation for myocardial infarction, cerebrovascular accident, or vascular death, the expected event rate may be only 1 to 2 percent per year, even in an “at-risk” older population. A problem that bedevils trialists is that the patients enrolled into studies often have event rates lower than expected. There are many potential explanations for this phenomenon, but one is that patients who respond positively to letters of invitation to participate are those who have an interest in their health and thus exhibit good health behaviour and are therefore at lower risk of events. Subjects with poor health behaviour are less likely to participate. Clearly, trials of younger patients with few risk factors could not feasibly examine outcome events, as these studies would have to be unfeasibly large or long, or both, to generate sufficient events. Because of this, streamlined studies have to limit recruitment to older subjects, preferably with additional risk factors, in order to keep the size and duration of the trial within reasonable limits. As with all studies that restrict inclusion of subjects, the generalisability, or external validity, of such studies is reduced by such restrictions. However, even when entry is restricted to high-risk and older age groups, event rates can be low. The very elderly and the socially deprived are underrepresented in clinical trials. Part of engaging the public in research must be to target these groups and stress how important it is that they are included.
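
The arithmetic behind this constraint is simple: the expected number of primary events scales with cohort size, annual event rate, and follow-up time, so a halving of the realised event rate roughly halves the events available for analysis. The sketch below uses illustrative figures rather than those of any particular trial.

    def expected_events(n_subjects: int, annual_rate: float, mean_follow_up_years: float) -> float:
        """Crude expectation assuming a constant event rate and complete follow-up."""
        return n_subjects * annual_rate * mean_follow_up_years

    n_subjects, follow_up_years = 7_000, 3.0   # hypothetical cohort size and follow-up
    for rate in (0.02, 0.01, 0.005):           # 2%, 1%, and 0.5% per year
        events = expected_events(n_subjects, rate, follow_up_years)
        print(f"annual event rate {rate:.1%}: about {events:.0f} primary events")
    # If the realised rate is half of what was assumed at the design stage, the trial
    # must recruit more subjects or run for longer to accrue the planned events.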

 

Post-Randomisation Control

A potential criticism of streamlined studies is that there is relatively little patient contact post-randomisation. While a lack of scheduled patient visits dramatically reduces the costs of follow-up, an argument can be made that it also results in a loss of post-randomisation “control” of the study. Usually, subjects in randomised trials are “encouraged” to persist with randomised medication until the end of the trial. In streamlined studies (by design), persistence with medication more closely resembles normal care. Thus, subjects may be more likely to switch randomised medication. The effect of this is that streamlined studies become more “observational” with time. For superiority studies in which the primary analysis is of the “intention to treat” population, switching of medication post-randomisation will dilute the observed efficacy. However, the result will be more informative of the likely effectiveness of such an intervention when introduced into normal care. Clearly, such clinical effectiveness is the metric that drives decisions about cost-effectiveness. For non-inferiority designs in which the primary outcome of interest is a per-protocol analysis, switching therapy post-randomisation results in subjects being censored at the point of switching, with a resulting reduction in the person-years of exposure to medication and thus reduced power of the study. Streamlined studies need to take this factor into account at the design stage and over-recruit subjects to compensate for this effect. In addition, these studies should prospectively plan to carry out an observational type of analysis by treatment taken at the time of an event as a supporting post hoc analysis. Such analyses need to be developed.
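
One hedged way to size that over-recruitment is to estimate the on-treatment person-years lost to switching and inflate recruitment accordingly. The sketch below assumes, purely for illustration, that switching follows a constant hazard; the switching rates and follow-up horizon are hypothetical.

    import math

    def on_treatment_years(switch_hazard_per_year: float, horizon_years: float) -> float:
        """Expected on-treatment person-years per subject before censoring at switch,
        under a constant switching hazard (an assumption made for illustration)."""
        if switch_hazard_per_year == 0:
            return horizon_years
        return (1 - math.exp(-switch_hazard_per_year * horizon_years)) / switch_hazard_per_year

    horizon = 3.0   # hypothetical planned follow-up in years
    for hazard in (0.05, 0.10, 0.20):
        kept = on_treatment_years(hazard, horizon)
        inflation = horizon / kept
        print(f"switching hazard {hazard:.0%}/year: {kept:.2f} of {horizon} person-years kept; "
              f"recruit roughly {inflation:.2f} times more subjects")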

 

Alternatives to Randomising Patients Into Streamlined Studies

Whilst streamlined studies have many advantages, they are not a perfect solution to getting good data quickly and inexpensively. For this reason we have explored other methods of evaluating treatments.

 

Randomising Family Practice Prescribing or Cluster Randomisation

In the United Kingdom, family practices invariably adopt a limited list, or practice formulary, of medications to which their practice computer systems default when they prescribe. These formularies are often derived from regional formularies, which in turn are derived from the recommendations of bodies such as the Scottish Medicines Consortium (SMC) (for more information on the SMC, visit http://www.scottishmedicines.org.uk/Home) or the National Institute for Clinical Excellence (NICE) (for more information on NICE, visit http://www.nice.org.uk).

There are 15,158 practices in the UK NHS. If half of even a small proportion of these used one medication for a particular indication and the other half used a different medication, then a cluster randomised design would produce excellent outcome data very quickly as the sheer numbers involved would allow studies of even quite rare conditions to be done. This method provides a framework for the pragmatic evaluation of the comparative effectiveness of medications (Maclure, 2009), and we have found that such designs are supported by the public (Mackenzie et al., 2012).
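
Cluster designs do pay a statistical price relative to individual randomisation: the effective sample size is reduced by the design effect 1 + (m − 1) × ICC, where m is the average cluster (practice) size and ICC is the intracluster correlation coefficient. The sketch below, using hypothetical practice sizes and ICC values, illustrates why large numbers of practices are needed when outcomes cluster within practices, and why the scale of the NHS makes such designs feasible.

    def design_effect(mean_cluster_size: float, icc: float) -> float:
        """Standard design effect for roughly equal cluster sizes: 1 + (m - 1) * ICC."""
        return 1 + (mean_cluster_size - 1) * icc

    def effective_n(n_total: int, mean_cluster_size: float, icc: float) -> float:
        """Approximate number of 'independent' subjects after accounting for clustering."""
        return n_total / design_effect(mean_cluster_size, icc)

    # Hypothetical figures: 1,000 participating practices, 200 eligible patients each.
    n_practices, patients_per_practice = 1_000, 200
    n_total = n_practices * patients_per_practice
    for icc in (0.001, 0.01, 0.05):
        de = design_effect(patients_per_practice, icc)
        n_eff = effective_n(n_total, patients_per_practice, icc)
        print(f"ICC {icc}: design effect {de:.2f}, effective n about {n_eff:,.0f}")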

One such study is already under way as a pilot in the United Kingdom. This is a British Hypertension Society Research Network study titled A Randomised Policy Trial to Evaluate the Optimal Policy Diuretic for the Treatment of Hypertension. This trial seeks to formally evaluate the new proposals from NICE to change diuretic therapy for hypertension in the United Kingdom from bendroflumethiazide to chlortalidone or indapamide (for the NICE proposal on changing diuretic therapy for hypertension in the United Kingdom, visit http://guidance.nice.org.uk), guidance that has been criticised for having a poor evidence base (Brown et al., 2012).

 

How Randomising Practice Formulary Studies Are Done

The way these studies work is that practices agree to participate and are then randomly allocated a drug (or treatment strategy) to implement. Each practice writes to all patients affected by this formulary change to tell them that the practice has decided to change (or not to change) its formulary first-choice drug in order to aid an evaluation of which drug is the better treatment. Patients are informed that this change will be evaluated and that their anonymised data will be used in the evaluation. Patients are offered the opportunity to opt out of the change of drug or to opt out of their anonymised data being used in the evaluation.

The pilot phase of the current diuretic study seeks to determine the workload generated to practices by writing to patients and dealing with feedback and potential opt-outs. Clearly, the level of remuneration required by practices will depend on this workload. However, the majority of general practitioners surveyed are supportive of this type of evaluation, and we believe that the majority of patients also support the NHS evaluation of drugs used in the NHS (Mackenzie et al., 2012).

Recently, the UK government has announced an initiative suggesting that everyone in the NHS should contribute to research and become research patients (for more information on the UK government’s announcement, visit http://www.bbc.co.uk/news/uk-16026827). Such an initiative will hopefully support this type of trial design, in which the NHS evaluates the medicines the NHS uses (Mackenzie et al., 2012).

 

Ethical Considerations

Ethical issues concerning cluster randomised trial designs have been debated but no clear consensus has been reached (Taljaard et al., 2009). When asked about the ethics of cluster-randomising new practice guidelines that were drawn up based on opinion versus previous guidelines, most panelists at a recent conference in Ottawa were of the view that it would be unethical not to do such an evaluation (International Consensus Conference, 2011). Such randomised policy designs and variants have long been promoted by Malcolm Maclure and others (Maclure et al., 2007). Their widespread use would enable us to know which drug-prescribing policies are good or bad. At present, we never know.

 

Evaluating New Medicines

Family practice-based cluster randomised studies could provide the framework to study the effectiveness of newly licensed therapies. Mechanisms to limit the use of novel drugs exist worldwide because of financial constraints on drug budgets. Such policies make it very difficult for manufacturers of novel therapies to collect observational, postmarketing data on safety and effectiveness. However, if half of a large group of participating practices changed their prescribing to a novel medicine from the standard therapy, then half of the population would enjoy the latest therapy at no cost to the NHS, as the pharmaceutical industry would provide such study medication free (or reimburse the NHS for its cost). Such a system provides a low-cost framework to judge the effectiveness and safety of novel therapies expeditiously. Since clinical effectiveness is the principal driver of cost-effectiveness, such a system would provide the data to support the widespread introduction (or not) of new treatments.

 

Advantages of Cluster Randomising Practice Formularies

A major advantage of cluster randomisation is that the costs and bureaucracy of doing these studies are minimal. A feature of the design of these trials is that the analyses are done using anonymised data. This means that the trial sponsor has no way of determining which patients experience serious adverse events. However, family doctors can still report such events directly to the regulatory authorities in an anonymous fashion.

Discussions have been held with the UK Medicines and Healthcare products Regulatory Agency (MHRA) as to the requirement of such studies to obtain clinical trial authorisation. The ruling at present is that the particular diuretic comparison study described above is not within the scope of the clinical trials directive as, although practices are randomised, individual patients still have the ability to determine their own treatment (M. Ward & E. Godfrey, MHRA, personal communication). It is probable that similar designs of other drug comparisons using such cluster randomisation would be regarded in the same way.

Design variants of randomising practice policies might be appropriate if, for example, the purpose of the evaluation were to determine the effectiveness of a new therapy thought to be beneficial when added to existing therapy. Designed delay studies (sometimes known as the stepped wedge design) might be judged ethically appropriate in such instances, especially where the implementation of a new prescribing policy is limited by resource constraints. Such designs introduce the new policy gradually, in a random order. For example, if 100 practices were studied, a few would start with the new policy in the first month, a few more in the second month, and so on, until all 100 practices had introduced the new policy. The beauty of such a system is that it produces data with excellent external validity, and everyone gets the new policy over the course of the study.
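
A designed delay roll-out can be generated as a simple random schedule: shuffle the participating practices, then assign crossover months in batches until every practice has adopted the new policy. The sketch below is illustrative only; the number of practices, batch size, and random seed are arbitrary design choices.

    import random

    def stepped_wedge_schedule(practices, per_month, seed=2012):
        """Return a {practice: month_of_crossover} mapping, with months numbered from 1."""
        rng = random.Random(seed)        # fixed seed so the schedule is reproducible
        order = list(practices)
        rng.shuffle(order)
        return {practice: (i // per_month) + 1 for i, practice in enumerate(order)}

    practices = [f"practice_{i:03d}" for i in range(1, 101)]   # 100 hypothetical practices
    schedule = stepped_wedge_schedule(practices, per_month=10)
    print(max(schedule.values()), "months to full roll-out")   # -> 10
    print(sorted(schedule.items())[:3])                        # first few assignments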

New European legislation on the post-licensing risk management of novel marketed medicines will place a greater onus on manufacturers to gather postmarketing data on their products (Waller, 2011). This will stimulate better methods of post-licensing data collection, and NICE and the SMC will drive the demand for better comparative-effectiveness data.

 

Other Trial Designs

Good-quality prospective safety data can be collected directly from patients, as was shown in a recent prospective follow-up study of subjects vaccinated against the H1N1 virus (Mackenzie et al., 2011). Here, subjects being vaccinated responded to posters and volunteered to be followed up by e-mail, text message, or telephone. This system worked well and has stimulated other study designs that could be adapted. An example is the British Hypertension Society Research Network Treatment In the Morning versus Evening (TIME) study (for more information on the TIME study, see www.demo.timestudy.co.uk for a demonstration; the full site is www.timestudy.co.uk). This study advertises to potential participants who are willing to log on, consent, and be randomised to taking their antihypertensive medication in the morning or the evening. Patients are followed up by regular e-mails and record-linkage. One can envision a future scenario in which patients are recruited on the Internet, screened online, and agree to their physician giving assent, and are then randomised, mailed medication, and followed up by e-mail, with their own physician or record-linkage providing outcome data. Investigator training in Good Clinical Practice and trial start-up training can be provided by webinars to defray the cost of the usual face-to-face training. Table 1 (p. 9) summarises the pros and cons of each of the trial designs discussed above.

 

Conclusion

Most of the design concepts presented here have been implemented by us, at least to the pilot phase. Not everyone will agree with the ethical approach, or with the robustness or feasibility of these designs, but experience will teach us how to adapt these concepts to improve the cost-effectiveness of obtaining high-quality data. We have found that the public are largely supportive of initiatives to improve the safety and effectiveness of medicines (Mackenzie et al., 2012). As a society, we need to continue to think up better ways to acquire high-quality data in order to enhance health care delivery and make it more efficient.

 


References

  1. Academy of Medical Sciences. 2011. A new pathway for the regulation and governance of health research. http://www.acmedsci.ac.uk/p47prid88.html (accessed January 16, 2012).
  2. Brown, M. J., J. K. Cruickshank, and T. M. Macdonald. 2012. Navigating the shoals in hypertension: Discovery and guidance. British Medical Journal 344:d8218. https://doi.org/10.1136/bmj.d8218.
  3. Caldwell, P. H., S. Hamilton, A. Tan, and J. C. Craig. 2010. Strategies for increasing recruitment to randomised controlled trials: Systematic review. PLoS Medicine 7(11):e1000368. https://doi.org/10.1371/journal.pmed.1000368.
  4. Dickert, N., E. Emanuel, and C. Grady. 2002. Paying research subjects: An analysis of current policies. Annals of Internal Medicine 136:368-373. https://doi.org/10.7326/0003-4819-136-5-200203050-00009.
  5. Duley, L., K. Antman, J. Arena, A. Avezum, M. Blumenthal, J. Bosch, S. Chrolavicius, T. Li, S. Ounpuu, A. C. Perez, P. Sleight, R. Svard, R. Temple, Y. Tsouderous, C. Yunis, and S. Yusuf. 2008. Specific barriers to the conduct of randomized trials. Clinical Trials 5:40-48. https://doi.org/10.1177/1740774507087704.
  6. Evans, J. M. M., and T. M. MacDonald. 1999. Record-linkage for pharmacovigilance in Scotland. British Journal of Clinical Pharmacology 47:105-110. https://doi.org/10.1046/j.1365-2125.1999.00853.x
  7. Ford, I., H. Murray, C. J. Packard, J. Shepherd, P. W. Macfarlane, S. M. Cobbe (West of Scotland Coronary Prevention Study Group). 2007. Long-term follow-up of the West of Scotland Coronary Prevention Study. New England Journal of Medicine 357:1477-1486. https://doi.org/10.1056/NEJMoa065994.
  8. Halpern, S. D., J. H. Karlawish, D. Casarett, J. A. Berlin, and D. A. Asch. 2004. Empirical assessment of whether moderate payments are undue or unjust inducements for participation in clinical trials. Archives of Internal Medicine 164:801-803. https://doi.org/10.1001/archinte.164.7.801.
  9. Hansson, L., T. Hedner, and B. Dahlof. 1992. Prospective randomised open blinded end-point (PROBE) study. A novel design for intervention trials. Blood Pressure 1:113-119. https://doi.org/10.3109/08037059209077502.
  10. International Consensus Conference to Generate Ethics Guidelines for Cluster Randomized Trials. 2011. Ottawa, Ontario, November 28-30.
  11. Kendrick, S. Undated. The Scottish record linkage system. http://www.isdscotland.org/Products-andServices/Medical-Record-Linkage/Files-for-upload/The_Scottish_Record_Linkage_System.doc (accessed January 9, 2012).
  12. Kendrick, S. 1997. Chapter 10: The development of record linkage in Scotland. Record Linkage Techniques. www.fcsm.gov/working-papers/skendrick.pdf (accessed January 9, 2012).
  13. Macdonald, T., C. Hawkey, and I. Ford. 2010. Academic sponsorship. Time to treat as independent. British Medical Journal 341. https://doi.org/10.1136/bmj.c6837.
  14. Mackenzie, I. S., L. Wei, D. Rutherford, E. A. Findlay, W. Saywood, M. K. Campbell, and T. M. MacDonald. 2010. Promoting public understanding of randomised clinical trials using the media: The “Get Randomised” campaign. British Journal of Clinical Pharmacology 69:128-135. https://doi.org/10.1111/j.1365-2125.2009.03561.x.
  15. Mackenzie, I. S., T. M. Macdonald, S. Shakir, M. Dryburgh, B. J. Mantay, P. McDonnell, and D. Layton. 2011. Influenza H1N1 (swine flu) vaccination: A safety surveillance feasibility study using self-reporting of serious adverse events and pregnancy outcomes. British Journal of Clinical Pharmacology. https://doi.org/10.1111/j.1365-2125.2011.04142.x.
  16. Mackenzie, I. S., L. Wei, K. R. Paterson, and T. M. MacDonald. 2012. Cluster randomised trials of prescription medicines or prescribing policy—public and general practitioner opinions in Scotland. British Journal of Clinical Pharmacology. https://doi.org/10.1111/j.1365-2125.2012.04195.x.
  17. Maclure, M. 2009. Explaining pragmatic trials to pragmatic policy-makers. Canadian Medical Association Journal 180:1001-1003. https://doi.org/10.1503/cmaj.090076
  18. Maclure, M., B. Carleton, and S. Schneeweiss. 2007. Designed delays versus rigorous pragmatic trials: Lower carat gold standards can produce relevant drug evaluations. Medical Care 45(10 Suppl 2):S44-S49.
  19. Martinson, B. C., D. Lazovich, H. A. Lando, C. L. Perry, P. G. McGovern, and R. G. Boyle. 2000. Effectiveness of monetary incentives for recruiting adolescents to an intervention trial to reduce smoking. Preventive Medicine 31:706-713. https://doi.org/10.1006/pmed.2000.0762
  20. McMahon, A. D., D. I. Conway, T. M. Macdonald, and G. T. McInnes. 2009. The unintended consequences of clinical trials regulations. PLoS Medicine 3(11):e1000131. https://doi.org/10.1371/journal.pmed.1000131.
  21. Morris, A. D., D. I. R. Boyle, R. McAlpine, A. Emslie-Smith, R. T. Jung, R. W. Newton, and T. M. MacDonald. 1997. The Diabetes Audit and Research in Tayside Scotland (DARTS) study: Electronic record linkage to create a diabetes register. British Medical Journal 315:524-528. https://doi.org/10.1136/bmj.315.7107.524.
  22. Reynolds, R. F., J. A. Lem, N. M. Gatto, and S. M. Eng. 2011. Is the large simple trial design used for comparative, post-approval safety research? A review of a clinical trials registry and the published literature. Drug Safety 34:799-820. https://doi.org/10.2165/11593820-000000000-00000.
  23. Steinbrook, R., and J. P. Kassirer. 2010. Data availability for industry sponsored trials: What should medical journals require? British Medical Journal 341. https://doi.org/10.1136/bmj.c5391.
  24. Taljaard, M., C. Weijer, J. M. Grimshaw, J. B. Brown, A. Binik, R. Boruch, J. C. Brehaut, S. H. Chaudhry, M. P. Eccles, A. McRae, R. Saginur, M. Zwarenstein, and A. Donner. 2009. Ethical and policy issues in cluster randomized trials: Rationale and design of a mixed methods research study. Trials 10:61. https://doi.org/10.1186/1745-6215-10-61.
  25. Treweek, S., E. Mitchell, M. Pitkethly, J. Cook, M. Kjeldstrøm, T. Taskila, M. Johansen, F. Sullivan, S. Wilson, C. Jackson, and R. Jones. 2010. Strategies to improve recruitment to randomised controlled trials. The Cochrane Database of Systematic Reviews (4):MR000013. https://doi.org/10.1002/14651858.MR000013.pub4.
  26. Waller, P. 2011. Getting to grips with the new European Union pharmacovigilance legislation. Pharmacoepidemiology and Drug Safety 20:544-549. https://doi.org/10.1002/pds.2119.
  27. Ware, J. H., and M. B. Hamel. 2011. Pragmatic trials—guides to better patient care? New England Journal of Medicine 364:1685-1687. https://doi.org/10.1056/NEJMp1103502.
  28. West of Scotland Coronary Prevention Study Group. 1995. Computerised record linkage: Compared with traditional patient follow-up methods in clinical trials and illustrated in a prospective epidemiological study. Journal of Clinical Epidemiology 48:1441-1452. https://doi.org/10.1016/0895-4356(95)00530-7.

 

DOI

https://doi.org/10.31478/201205b

Suggested Citation

MacDonald, T., I. Mackenzie, and L. Wei. 2012. Novel Ways to Get Good Trial Data: The UK Experience. NAM Perspectives. Discussion Paper, National Academy of Medicine, Washington, DC. https://doi.org/10.31478/201205b

Author Information

Tom MacDonald is with the Medical Research Institute of Dundee. Isla Mackenzie is with the Medical Research Institute of Dundee. Li Wei is with the Medical Research Institute of Dundee.

Disclaimer

The views expressed in this discussion paper are those of the authors and not necessarily of the authors’ organization or of the Institute of Medicine. The paper is intended to help inform and stimulate discussion. It has not been subjected to the review procedures of the Institute of Medicine and is not a report of the Institute of Medicine or of the National Research Council.

