Yacca Library NWHHS: Step 3: Appraise

Yacca Library is the Medical Library for the North West Hospital and Health Service (NWHHS).

Why we need to appraise evidence for validity

Watch the video below. It's a good example of why critical appraisal (i.e. evaluating any evidence we find) should be a regular part of our practice.

This example reminds us that

1. we can’t just assume that research is always truthful and has been conducted in a way that removes bias.

2. we need to follow up sources of evidence which come from ‘word of mouth’. Always consult the evidence yourself.

5 essential questions to ask

Before changing your practice in the light of a published research paper, you should decide whether the methods used were valid. Let's consider five questions that should form the basis of your decision on whether to use the paper or not.

Question 1: Was the study original?

This is about asking: “Does this new research add to the current literature in any way?” Only a small proportion of medical research actually breaks entirely new ground, and an equally small proportion repeats exactly the steps of previous papers. Selecting peer-reviewed articles usually goes a long way towards answering this question. As you read more research papers, it will become apparent whether or not an article actually contributes anything new.

Question 2: Whom is the study about?

Before assuming that the results of a paper are applicable to your own practice, ask yourself the following questions:

Who was included in the study and who was excluded from the study? For instance, the results of pharmacokinetic studies of new drugs in 23-year-old healthy male volunteers will clearly not be applicable to the average elderly female! Consider if your patients are of similar age and have similar conditions to the participants in the study.

Were the participants studied in ‘real-life’ circumstances? For example, were they all admitted to hospital purely for observation? Did they all receive lengthy and detailed explanations of the potential benefits of the intervention? Were they given the telephone number of a key research worker? Did the company who funded the research provide new equipment that would not be available to the ordinary clinician? These factors would not invalidate the study, but they may cast doubts on the applicability of its findings to your own practice.

Question 3: Was the design of the study sensible?

This is one of the most fundamental questions in appraising any paper. It is tempting to take published statements at face value, but remember that authors frequently misrepresent (usually subconsciously rather than deliberately) what they actually did, and overestimate its originality and potential importance. For example, a statement in an article might read “We measured how often GPs ask patients whether they smoke”. What the authors should have said is “We looked in patients’ medical records and counted how many had had their smoking status recorded”. The issue is that they have assumed that medical records are 100% accurate.

Question 4: Was systematic bias avoided or minimised?

Whether the design of a study is a randomised controlled trial (RCT), a non-randomised comparative trial, a cohort study or a case–control study, the aim should be for the groups being compared to be as like one another as possible except for the particular difference being examined. They should, as far as possible, receive the same explanations, have the same contacts with health professionals, and be assessed the same number of times by the same assessors, using the same outcome measures. Different study designs call for different steps to reduce systematic bias.

Question 5: Was the study large enough, and continued for long enough, to make the results credible?

Look for the sample size, generally referred to as N; a bigger number is almost always better. But take the type of study into consideration too.

Duration of follow-up – the period over which researchers checked back with participants. The completeness of follow-up is an important determinant of the validity of a study. Clinical studies are expected to consider the course of all participants up to the “study end”. Study findings should be based on complete follow-up information, but in reality it may be impracticable to follow every single study participant exactly to the study end date. Therefore, studies should declare at least how complete their follow-up was; otherwise their validity cannot be judged.

Dropout – data are often collected repeatedly from clinical research subjects over months or years in order to track disease progress, detect the onset of new problems, or assess the effect of treatments. Unlike experimental animals, human subjects often choose to drop out of a study before its completion, rendering the resulting data incomplete. Failure to appropriately account in the analysis for subjects dropping out of the study for reasons unknown to the researchers can result in biased conclusions.
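To make the two ideas above concrete, here is a minimal sketch (not taken from the cited sources; the function name and all numbers are hypothetical) showing how completeness of follow-up can be quantified as a share of planned observation time, and how dropout that is related to the outcome biases a naive average.

```python
# Hypothetical illustration: follow-up completeness and dropout bias.
# All names and numbers are made up for demonstration.

def follow_up_completeness(person_days_observed, person_days_planned):
    """Completeness of follow-up as a percentage of planned observation time."""
    return 100.0 * person_days_observed / person_days_planned

# A study planned to follow 100 participants for 365 days each,
# but early dropouts left only 31,025 observed person-days.
completeness = follow_up_completeness(31025, 100 * 365)
print(f"Follow-up completeness: {completeness:.1f}%")  # 85.0%

# Dropout bias: suppose participants with worse outcome scores drop out,
# so only the healthier participants remain at study end.
outcomes = [2, 3, 4, 5, 6, 7, 8, 9]           # true outcome scores for everyone
completers = [x for x in outcomes if x <= 6]  # sicker participants left early

true_mean = sum(outcomes) / len(outcomes)
naive_mean = sum(completers) / len(completers)
print(f"True mean: {true_mean:.2f}, mean among completers: {naive_mean:.2f}")
```

The naive mean calculated only from completers (4.00) understates the true mean (5.50), which is why an analysis must account for why participants dropped out, not just how many did.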

Content for this section has been taken from the following sources

• Carlson, N. (2010). Accounting for participant dropout in clinical studies. Science Translational Medicine, 2(16), e14. doi:10.1126/scitranslmed.3000856
• Greenhalgh, T. (2014). How to read a paper: The basics of evidence-based medicine. Retrieved from ProQuest Ebook Central.
• von Allmen, R. S., Weiss, S., Tevaearai, H. T., Kuemmerli, C., Tinner, C., Carrel, T. P., Schmidli, J., … Dick, F. (2015). Completeness of follow-up determines validity of study findings: Results of a prospective repeated measures cohort study. PLoS ONE, 10(10), e0140817. doi:10.1371/journal.pone.0140817

Is it scholarly?

Let’s start with the basics. Is the source you’re looking at scholarly or not? In evidence-based practice you’re generally going to want scholarly sources only. While you might find some interesting information in popular sources, you’re not going to want to implement changes to your practice based on what you find in Men's Health magazine, for example.

Watch the video below to learn more about the difference between scholarly and non-scholarly sources of information.

Is it peer reviewed?

Peer review is the assessment/review of a new piece of literature undertaken by experts (“peers”) in that particular field.

The peer review process is a quality assurance mechanism to ensure that the piece of literature is

  • Rigorous
  • Coherent
  • Uses past research
  • Adds to what is already known about the topic

Literature that has been successfully peer reviewed has undergone rigorous scrutiny by professors and researchers who are experts in that topic. We can therefore be assured that the work is of quality and useful to that field of research. Thus, articles that are peer reviewed are seen as higher quality than those that are not.

You can select the ‘peer reviewed’ filter in CKN and most research databases.

Do you remember the hierarchy of evidence from earlier? It can help you judge the quality of the evidence you have found, depending on how high it sits on the pyramid.

Glover, J., Izzo, D., Odato, K., & Wang, L. (2006). EBM Pyramid. Dartmouth University/Yale University.

Critical Appraisal Tools

Can we talk about CATs?

Critical appraisal can occur either with a non-structured approach whereby you evaluate the study as you read it, or through a structured approach with the use of a Critical Appraisal Tool (CAT).

CATs are structured checklists that allow you to assess the methodological quality of a study against a set of criteria. An advantage of using a CAT is that you can apply a level of consistency when reviewing a number of studies.

Click the link below. Choose one study design and explore a CAT to see how you could use one when you come across an article.

http://www.unisa.edu.au/Research/Health-Research/Research/Allied-Health-Evidence/Resources/CAT/

Still unsure about something, or want to know more? Chat with the health librarian for support with developing your question, finding evidence and assessing its quality.