Understanding Scientific Research
23rd Oct, 2019

This post is specifically for health practitioners.


Determining the best available evidence to support clinical treatment decisions relies on a knowledge of study design and study quality (1). Since the time of Hippocrates, medicine has struggled to balance the uncontrolled experience of healers with observations obtained by rigorous investigation of claims regarding the effects of health interventions (2).

Not all research is of sufficient quality to inform clinical decision making; critical appraisal of the evidence is therefore required, considering these major aspects (3):

  • Study design – Is the study design appropriate to the outcomes measured?
  • Study quality/validity i.e. detailed study methods and execution – Can the research be trusted?
  • Consistency – Is there a similarity of effect across studies?
  • Impact – Are the results clinically important?
  • Applicability – Can the result be applied to your patient?
  • Bias – Is there a risk of bias, including publication bias, confirmation bias, plausibility bias, expectation bias and commercial bias (who sponsored the study)?

 

Study designs

Available therapeutic literature can be broadly categorised as those studies of an observational nature and those studies that have a randomised experimental design (3).

The study design is critical in informing the reader about the relevance of the study to the question being addressed. The various research methods are described in Tables 1 - 3.

 

Table 1. Observational Study Designs (4-11)


 

Table 2. Experimental/Intervention Study Designs (4-11)


 

Table 3. Reviews (4-12)


Hierarchy/Grade of Evidence

The earliest principle of evidence-based medicine is that not all evidence is equal: a hierarchy of evidence exists (Figure 1) (13). Study designs in ascending levels of the pyramid generally provide higher-quality evidence with a lower risk of bias, and confidence in causal relationships increases towards the top of the pyramid (14).

Another system of indicating study strength is the GRADE system (Table 4). GRADE provides a much more sophisticated hierarchy of evidence, which addresses all elements related to the credibility of the evidence: study design, risk of bias (study strengths and limitations), precision, consistency (variability in results between studies), directness (applicability), publication bias, magnitude of effect, and dose-response gradients (2).

Studies are rated as level A (high), level B (moderate), level C (low) or level D (very low). An initial grade is applied based on the study design (as per the hierarchy pyramid) and is then adjusted up or down based on the individual qualities of the study (1).
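To make that up- and down-grading logic concrete, the short Python sketch below walks through it with simplified, assumed factor names; it is an illustration of the idea only, not the formal GRADE method.

```python
# Illustrative sketch only: a simplified view of how a GRADE rating starts
# from the study design and is then moved up or down. The factor names in the
# docstring are simplified assumptions, not the formal GRADE criteria set.

LEVELS = ["D (Very Low)", "C (Low)", "B (Moderate)", "A (High)"]


def initial_level(study_design: str) -> int:
    # Randomised trials start high; observational designs start low (1).
    return 3 if study_design == "randomised trial" else 1


def grade(study_design: str, downgrades: int = 0, upgrades: int = 0) -> str:
    """Apply an initial grade, then adjust it within the D-A range.

    downgrades: e.g. serious risk of bias, inconsistency, indirectness,
                imprecision or publication bias (one point each).
    upgrades:   e.g. a large magnitude of effect or a dose-response gradient.
    """
    level = initial_level(study_design) - downgrades + upgrades
    level = max(0, min(3, level))  # clamp to the D..A range
    return LEVELS[level]


# Example: a cohort study upgraded for a very large, consistent effect.
print(grade("cohort study", downgrades=0, upgrades=1))  # B (Moderate)
```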

 

Figure 1. Hierarchy of evidence pyramid. Reproduced from (15) under the CC BY-NC-ND 3.0 licence.


 

Table 4. Quality of evidence: the GRADE system (16-19)


 

The best available evidence may not come from the optimal study type. For example, if treatment effects found in well-designed cohort studies are sufficiently large and consistent, this may provide more convincing evidence than the findings of a weaker RCT (5).

 

The P.I.C.O. Model

Determining the quality of evidence across studies depends on having a clearly defined question to be answered and considering all of the outcomes that are important to the patient/population group (3). Without a well-focused question, it can be very difficult and time-consuming to identify appropriate resources and search for relevant evidence. Practitioners of Evidence-Based Practice (EBP) often use a specialised framework called P.I.C.O. (Patient/Problem, Intervention, Comparison, Outcome) to define a clinical question in terms of the specific patient problem and to facilitate the literature search (Table 5) (5).

 

Table 5. The P.I.C.O model for clinical questions (5)


 Other factors to consider when forming the clinical question:

  • What type of question are you asking? i.e. diagnosis, aetiology, therapy, prognosis, or prevention? (Table 6)
  • What would be the best study design/methodology needed to answer the research question? (Table 7)

 

Table 6. Ways in which P.I.C.O. varies according to the type of question being asked (5)


 

 Table 7. Optimal study methodologies for the main types of research questions (5)


Once the main elements of the question have been identified using the P.I.C.O. framework, it is easy to write a question statement to guide the literature search. Table 8 provides some examples.
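By way of rough illustration, that composition can also be expressed programmatically. The Python sketch below assembles a hypothetical question statement and a naive Boolean search string from P.I.C.O. elements; the example terms and the simple AND-joined query are assumptions made for illustration, not a recommended search strategy.

```python
# Illustrative sketch: composing a clinical question statement and a simple
# Boolean search string from P.I.C.O. elements. All terms are hypothetical.

from dataclasses import dataclass


@dataclass
class Pico:
    population: str
    intervention: str
    comparison: str
    outcome: str

    def question(self) -> str:
        return (f"In {self.population}, does {self.intervention}, "
                f"compared with {self.comparison}, affect {self.outcome}?")

    def search_string(self) -> str:
        # A naive AND-joined query; a real search would add synonyms,
        # subject headings and filters for study type.
        terms = [self.population, self.intervention, self.outcome]
        return " AND ".join(f'"{t}"' for t in terms)


q = Pico(population="adults with insomnia",
         intervention="cognitive behavioural therapy",
         comparison="usual care",
         outcome="sleep quality")
print(q.question())
print(q.search_string())
```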

 

Table 8. Examples of P.I.C.O. question statements (5)


Evaluating the literature

Errors can be introduced into clinical research, deliberately or inadvertently, at a number of stages, such as study design, population selection, sample size calculation, non-compliance with the study protocol, data entry and the choice of statistical method. The rigour with which a study is conducted determines how reliable its results are likely to be (20,21). Not all case-control, cohort or randomised studies are conducted to the same standards; if repeated, they may produce different results due to chance, confounding variables or bias (1). Each study should be assessed according to the criteria in Table 9.

 

Bias and randomisation

Bias occurs when the researcher consciously or unconsciously influences the results to portray a certain outcome in line with their own decisions, views or ideological preferences (22). Types of bias include confirmation bias, plausibility bias, expectation bias and commercial bias.

By the nature of randomisation, RCTs have a built-in ability to help control bias (23). Bias can confound the outcome of a study such that the study over- or underestimates the true treatment effect. Observational studies are, by design, more prone to bias than randomised trials (1).

It is first necessary to determine how the randomisation was done. The most important requirements are that allocation is concealed and that the allocation sequence is truly random. If it is known to which group a patient will be randomised, it may be possible to influence their allocation (1).
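As a minimal sketch of what a truly random yet balanced allocation sequence can look like, the Python snippet below generates a permuted-block sequence. It is illustrative only; in practice the sequence would be generated, and kept concealed, by someone independent of recruitment.

```python
# Minimal illustration of permuted-block randomisation: within each block of
# four, two patients are allocated to each arm in a random order, keeping
# group sizes balanced while the next allocation remains unpredictable.

import random


def blocked_sequence(n_blocks: int, block: tuple = ("A", "A", "B", "B")) -> list:
    sequence = []
    for _ in range(n_blocks):
        permuted = list(block)
        random.shuffle(permuted)  # random order within the block
        sequence.extend(permuted)
    return sequence


# e.g. an allocation list for 12 participants across two arms
print(blocked_sequence(3))
```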

 

Table 9. Criteria for evaluating scientific literature (1,4,5,24)


Applying the evidence

Applying the best evidence in practice, i.e. Evidence-Based Practice (EBP), requires considerable skill in synthesising the best scientific knowledge with clinical expertise and the patient's unique values and circumstances to reach a clinical decision (Figure 2).

 

Figure 2. Using clinical reasoning to integrate information in evidence-based practice (5)


Resources

Below are some of the commonly used online resources available for searching scientific literature. Some resources require paid subscriptions and others are open to the public.

The Cochrane Collaboration is an international voluntary organisation that prepares, maintains and promotes the accessibility of systematic reviews of the effects of healthcare. 

The Cochrane Library is a database from the Cochrane Collaboration that allows simultaneous searching of six evidence-based practice databases. Cochrane Reviews are systematic reviews authored by members of the Cochrane Collaboration and available via The Cochrane Database of Systematic Reviews. They are widely recognised as the gold standard in systematic reviews due to the rigorous methodology used (5). 

 

Takeaway on Understanding Scientific Research

  • Having a thorough understanding of the different types of research and how to critically appraise each one is crucial for making good evidence-based clinical decisions.
  • Identifying the specific question to be answered, using a framework such as P.I.C.O., helps target the literature search to find evidence applicable to your clinical question.

 

References
1. Petrisor B, Bhandari M. The hierarchy of evidence: Levels and grades of recommendation. Indian J Orthop. 2007;41(1):11–5.
2. Djulbegovic B, Guyatt GH. Progress in evidence-based medicine: a quarter century on. The Lancet. 2017 Jul;390(10092):415–23.
3. Atkins D, Best D, Briss P, Eccles M, Falck-Ytter Y, Flottorp S, et al. Grading quality of evidence and strength of recommendations. BMJ. 2004 Jun 19;328(7454):1490–4.
4. Çaparlar CÖ, Dönmez A. What is Scientific Research and How Can it be Done? Turk J Anaesthesiol Reanim. 2016 Aug;44(4):212–8.
5. Turner M. Evidence-Based Practice in Health [Internet]. University of Canberra Library. 2014 [cited 2018 Sep 30]. Available from: https://canberra.libguides.com/evidence
6. Riley D. Case Reports in the Era of Clinical Trials. Global Advances in Health and Medicine. 2013 Mar;2(2):10–1.
7. Zeilstra D, Younes JA, Brummer RJ, Kleerebezem M. Perspective: Fundamental Limitations of the Randomized Controlled Trial Method in Nutritional Research: The Example of Probiotics. Adv Nutr. 2018 Sep;9(5):561–71.
8. Demeyin WA, Frost J, Ukoumunne OC, Briscoe S, Britten N. N of 1 trials and the optimal individualisation of drug treatments: a systematic review protocol. Systematic Reviews. 2017 Dec;6(1).
9. Guyatt GH, Haynes RB, Jaeschke RZ, Cook DJ, Green L, Naylor CD, et al. Users’ Guides to the Medical Literature: XXV. Evidence-based medicine: principles for applying the Users’ Guides to patient care. Evidence-Based Medicine Working Group. JAMA. 2000 Sep 13;284(10):1290–6.
10. Oxford Centre for Evidence-Based Medicine. OCEBM Levels of Evidence Working Group [Internet]. CEBM. 2016 [cited 2018 Oct 10]. Available from: https://www.cebm.net/2016/05/ocebm-levels-of-evidence/
11. Cook DJ, Mulrow CD, Haynes RB. Systematic reviews: synthesis of best evidence for clinical decisions. Ann Intern Med. 1997 Mar 1;126(5):376–80.
12. Seidler AL, Hunter KE, Cheyne S, Ghersi D, Berlin JA, Askie L. A guide to prospective meta-analysis. BMJ. 2019 Oct 9;367:l5342.
13. Murad MH, Asi N, Alsawas M, Alahdab F. New evidence pyramid. Evidence Based Medicine. 2016 Aug;21(4):125–7.
14. Yetley EA, MacFarlane AJ, Greene-Finestone LS, Garza C, Ard JD, Atkinson SA, et al. Options for basing Dietary Reference Intakes (DRIs) on chronic disease endpoints: report from a joint US-/Canadian-sponsored working group. The American Journal of Clinical Nutrition. 2017 Jan;105(1):249S–285S.
15. Puro A, The Himmelfarb Health Sciences Library. Research Guides: Evidence Based Medicine: Types of Studies [Internet]. Levels of Evidence Pyramid. 2014 [cited 2018 Oct 10]. Available from: //libguides.gwumc.edu/ebm/studytypes
16. Takada T, Strasberg SM, Solomkin JS, Pitt HA, Gomi H, Yoshida M, et al. TG13: Updated Tokyo Guidelines for the management of acute cholangitis and cholecystitis. Journal of Hepato-Biliary-Pancreatic Sciences. 2013 Jan;20(1):1–7.
17. Atkins D, Eccles M, Flottorp S, Guyatt GH, Henry D, Hill S, et al. Systems for grading the quality of evidence and the strength of recommendations I: critical appraisal of existing approaches. The GRADE Working Group. BMC Health Serv Res. 2004 Dec 22;4(1):38.
18. Guyatt GH, Oxman AD, Kunz R, Falck-Ytter Y, Vist GE, Liberati A, et al. Going from evidence to recommendations. BMJ. 2008 May 10;336(7652):1049–51.
19. Guyatt GH, Oxman AD, Kunz R, Vist GE, Falck-Ytter Y, Schünemann HJ. What is “quality of evidence” and why is it important to clinicians? BMJ. 2008 May 3;336(7651):995–8.
20. Jadad AR, Moore RA, Carroll D, Jenkinson C, Reynolds DJ, Gavaghan DJ, et al. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials. 1996 Feb;17(1):1–12.
21. Kunz R, Oxman AD. The unpredictability paradox: review of empirical comparisons of randomised and non-randomised clinical trials. BMJ. 1998 Oct 31;317(7167):1185–90.
22. Gluud LL. Bias in clinical intervention research. Am J Epidemiol. 2006 Mar 15;163(6):493–501.
23. Schulz KF, Grimes DA. Generation of allocation sequences in randomised trials: chance, not choice. Lancet. 2002 Feb 9;359(9305):515–9.
24. University of Illinois Chicago. Journal Impact Factor (IF) [Internet]. 2018 [cited 2018 Oct 3]. Available from: https://researchguides.uic.edu/if/impact