  • Question 1 - What statistical test would be appropriate to compare the mean blood pressure measurements...

    Incorrect

    • What statistical test would be appropriate to compare the mean blood pressure measurements of a group of individuals before and after exercise?

      Your Answer: Wilcoxon's rank sum test

      Correct Answer: Paired t-test

      Explanation:

      Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent or dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and Cochran’s Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.
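
To make the paired case concrete, a paired t statistic can be computed directly from the within-subject differences. The before/after readings below are hypothetical, purely for illustration:

```python
from statistics import mean, stdev

# Hypothetical before/after systolic blood pressure readings (mmHg) for 6 subjects
before = [120, 132, 128, 141, 118, 135]
after = [128, 139, 131, 150, 122, 141]

# A paired t-test operates on the within-subject differences, not the raw groups
diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / n ** 0.5)  # compared against t with n-1 df
```

The key design point is that pairing removes between-subject variability: each person acts as their own control.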

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      19.7
      Seconds
  • Question 2 - How can the pre-test probability be expressed in another way? ...

    Incorrect

    • How can the pre-test probability be expressed in another way?

      Your Answer: The incidence of a condition

      Correct Answer: The prevalence of a condition

      Explanation:

      The prevalence refers to the percentage of individuals in a population who currently have a particular condition, while the incidence is the frequency at which new cases of the condition arise within a specific timeframe.

      Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two-by-two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
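
The two-by-two statistics described above follow directly from the four cell counts. The table below is hypothetical:

```python
# Hypothetical 2x2 table (counts): rows = test result, columns = true disease status
TP, FP = 90, 30    # test positive: true positives, false positives
FN, TN = 10, 170   # test negative: false negatives, true negatives

sensitivity = TP / (TP + FN)                   # 90/100  = 0.90
specificity = TN / (TN + FP)                   # 170/200 = 0.85
ppv = TP / (TP + FP)                           # 90/120  = 0.75
npv = TN / (TN + FN)                           # 170/180 ~ 0.94
lr_positive = sensitivity / (1 - specificity)  # 0.90/0.15 ~ 6.0
```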

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      63.2
      Seconds
  • Question 3 - Which term is used to refer to the alternative hypothesis in hypothesis testing?...

    Correct

    • Which term is used to refer to the alternative hypothesis in hypothesis testing?

      a) Research hypothesis
      b) Statistical hypothesis
      c) Simple hypothesis
      d) Null hypothesis
      e) Composite hypothesis

      Your Answer: Research hypothesis

      Explanation:

      Understanding Hypothesis Testing in Statistics

      In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty, as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine whether claims about a population can be made from a sample, and with what degree of certainty.

      The null hypothesis (H0) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that any observed difference is not due to chance alone. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish any difference or a change in one particular direction.

      Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. A Type I error occurs when the null hypothesis is rejected when it is true, while a Type II error occurs when the null hypothesis is not rejected when it is in fact false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.

      P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large as or larger than the one observed when in reality there is no difference between the two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected; if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, statistical significance does not imply clinical significance: an effect may be statistically significant yet too small to be clinically meaningful.
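
The decision rule above amounts to a simple comparison against alpha; a minimal sketch with a hypothetical p-value:

```python
def decide(p_value, alpha=0.05):
    """Reject H0 only when p < alpha; otherwise fail to reject (never 'accept')."""
    return "reject H0" if p_value < alpha else "do not reject H0"

decide(0.03)  # below alpha -> "reject H0"
decide(0.05)  # boundary case: p equal to alpha -> "do not reject H0"
```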

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      146.5
      Seconds
  • Question 4 - Which of the following resources has been filtered? ...

    Incorrect

    • Which of the following resources has been filtered?

      Your Answer: Ovid MEDLINE

      Correct Answer: DARE

      Explanation:

      The main focus of the Database of Abstracts of Reviews of Effects (DARE) is on systematic reviews that assess the impact of healthcare interventions and the management and provision of healthcare services. In order to be considered for inclusion, reviews must satisfy several requirements.

      Evidence-based medicine involves four basic steps: developing a focused clinical question, searching for the best evidence, critically appraising the evidence, and applying the evidence and evaluating the outcome. When developing a question, it is important to understand the difference between background and foreground questions. Background questions are general questions about conditions, illnesses, syndromes, and pathophysiology, while foreground questions are more often about issues of care. The PICO system is often used to define the components of a foreground question: patient group of interest, intervention of interest, comparison, and primary outcome.

      When searching for evidence, it is important to have a basic understanding of the types of evidence and sources of information. Scientific literature is divided into two basic categories: primary (empirical research) and secondary (interpretation and analysis of primary sources). Unfiltered sources are large databases of articles that have not been pre-screened for quality, while filtered resources summarize and appraise evidence from several studies.

      There are several databases and search engines that can be used to search for evidence, including Medline and PubMed, Embase, the Cochrane Library, PsycINFO, CINAHL, and OpenGrey. Boolean logic can be used to combine search terms in PubMed, and phrase searching and truncation can also be used. Medical Subject Headings (MeSH) are used by indexers to describe articles for MEDLINE records, and the MeSH Database is like a thesaurus that enables exploration of this vocabulary.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      17.5
      Seconds
  • Question 5 - An endocrinologist conducts a study to determine if there is a correlation between...

    Incorrect

    • An endocrinologist conducts a study to determine if there is a correlation between a patient's age and their blood pressure. Assuming both age and blood pressure are normally distributed, what statistical test would be most suitable to use?

      Your Answer: Spearman's rank correlation coefficient

      Correct Answer: Pearson's product-moment coefficient

      Explanation:

      Since the data is normally distributed and the study aims to evaluate the correlation between two variables, the most suitable test to use is Pearson’s product-moment coefficient. On the other hand, if the data is non-parametric, Spearman’s coefficient would be more appropriate.

      Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent or dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and Cochran’s Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.
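
To illustrate, Pearson's r can be computed from first principles as the covariance scaled by the product of the deviation magnitudes. The paired age/blood-pressure data below are hypothetical:

```python
from statistics import mean

# Hypothetical paired, roughly linear data: age (years) vs systolic BP (mmHg)
age = [40, 50, 60, 70, 80]
sbp = [118, 125, 131, 140, 148]

ma, ms = mean(age), mean(sbp)
cov = sum((a - ma) * (s - ms) for a, s in zip(age, sbp))
# Pearson's product-moment coefficient: covariance / product of SD terms
r = cov / (sum((a - ma) ** 2 for a in age) * sum((s - ms) ** 2 for s in sbp)) ** 0.5
```

For strongly linear data like this, r is close to +1; Spearman's version would instead apply the same formula to the ranks of the observations.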

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      24.6
      Seconds
  • Question 6 - A new treatment for elderly patients with hypertension is investigated. The study looks...

    Incorrect

    • A new treatment for elderly patients with hypertension is investigated. The study looks at the incidence of stroke after 1 year. The following data is obtained:
      Number who had a stroke vs Number without a stroke
      New drug: 40 vs 160
      Placebo: 100 vs 300
      What is the relative risk reduction?

      Your Answer: 0.8

      Correct Answer: 20%

      Explanation:

      Measures of Effect in Clinical Studies

      When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.

      To understand the difference between the OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of the risk in the intervention group to the risk in the control group.

      The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
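
Using the counts from the question above, the relative risk and relative risk reduction work out as follows:

```python
# Data from the question: stroke after 1 year
new_drug_stroke, new_drug_no_stroke = 40, 160
placebo_stroke, placebo_no_stroke = 100, 300

risk_treat = new_drug_stroke / (new_drug_stroke + new_drug_no_stroke)  # 40/200  = 0.20
risk_ctrl = placebo_stroke / (placebo_stroke + placebo_no_stroke)      # 100/400 = 0.25

rr = risk_treat / risk_ctrl  # relative risk: 0.20 / 0.25 = 0.8
rrr = 1 - rr                 # relative risk reduction: 0.2, i.e. 20%
```

This shows why "0.8" was the wrong answer: 0.8 is the relative risk itself, not the reduction.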

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      113
      Seconds
  • Question 7 - What is the standardized score (z-score) for a woman whose haemoglobin concentration is...

    Correct

    • What is the standardized score (z-score) for a woman whose haemoglobin concentration is 150 g/L, given that the mean haemoglobin concentration for healthy women is 135 g/L and the standard deviation is 15 g/L?

      Your Answer: 1

      Explanation:

      Z Scores: A Special Application of Transformation Rules

      Z scores are a unique way of measuring how much and in which direction an item deviates from the mean of its distribution, expressed in units of its standard deviation. To calculate the z score for an observation x from a population with mean μ and standard deviation σ, we use the formula z = (x − μ) / σ. For example, if our observation is 150 and the mean and standard deviation are 135 and 15 respectively, then the z score is (150 − 135) / 15 = 1.0. Z scores are a useful tool for comparing observations from different distributions and for identifying outliers.
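
The worked example above can be reproduced in a couple of lines:

```python
def z_score(x, mu, sigma):
    """Standardised score: distance from the mean in standard-deviation units."""
    return (x - mu) / sigma

# Haemoglobin 150 g/L against a population mean of 135 g/L, SD 15 g/L
z = z_score(150, 135, 15)  # (150 - 135) / 15 = 1.0
```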

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      96.6
      Seconds
  • Question 8 - How can it be determined if the study on the effectiveness of a...

    Incorrect

    • How can it be determined if the study on the effectiveness of a new oral treatment for schizophrenia patients in preventing hospital admissions has yielded statistically significant results?

      Your Answer: p-value < 0.5

      Correct Answer:

      Explanation:

      Understanding Hypothesis Testing in Statistics

      In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty, as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine whether claims about a population can be made from a sample, and with what degree of certainty.

      The null hypothesis (H0) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that any observed difference is not due to chance alone. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish any difference or a change in one particular direction.

      Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. A Type I error occurs when the null hypothesis is rejected when it is true, while a Type II error occurs when the null hypothesis is not rejected when it is in fact false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.

      P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large as or larger than the one observed when in reality there is no difference between the two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected; if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, statistical significance does not imply clinical significance: an effect may be statistically significant yet too small to be clinically meaningful.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      137.4
      Seconds
  • Question 9 - What is the term used to describe the rate at which new cases...

    Correct

    • What is the term used to describe the rate at which new cases of a disease are appearing, calculated by dividing the number of new cases by the total time that disease-free individuals are observed during a study period?

      Your Answer: Incidence rate

      Explanation:

      Measures of Disease Frequency: Incidence and Prevalence

      Incidence and prevalence are two important measures of disease frequency. Incidence measures the speed at which new cases of a disease are emerging, while prevalence measures the burden of disease within a population. Cumulative incidence and incidence rate are two types of incidence measures, while point prevalence and period prevalence are two types of prevalence measures.

      Cumulative incidence is the average risk of getting a disease over a certain period of time, while incidence rate is a measure of the speed at which new cases are emerging. Prevalence is a proportion and is a measure of the burden of disease within a population. Point prevalence measures the number of cases in a defined population at a specific point in time, while period prevalence measures the number of identified cases during a specified period of time.

      It is important to note that prevalence is equal to incidence multiplied by the duration of the condition. In chronic diseases, the prevalence is much greater than the incidence. The incidence rate is stated in units of person-time, while cumulative incidence is always a proportion. When describing cumulative incidence, it is necessary to give the follow-up period over which the risk is estimated. In acute diseases, the prevalence and incidence may be similar, while for conditions such as the common cold, the incidence may be greater than the prevalence.
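
The relationship prevalence = incidence × duration can be illustrated with hypothetical figures for a chronic condition:

```python
# Hypothetical chronic condition, assuming steady state
incidence_rate = 0.002  # new cases per person-year
mean_duration = 10      # average years a case persists

prevalence = incidence_rate * mean_duration  # 0.02, i.e. about 2% of the population
```

Halve the duration (e.g. an acute illness) and the prevalence halves too, which is why short-lived conditions like the common cold can have a high incidence but a low prevalence.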

      Incidence is a useful measure to study disease etiology and risk factors, while prevalence is useful for health resource planning. Understanding these measures of disease frequency is important for public health professionals and researchers in order to effectively monitor and address the burden of disease within populations.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      24.5
      Seconds
  • Question 10 - What type of data representation is used in a box and whisker plot?...

    Incorrect

    • What type of data representation is used in a box and whisker plot?

      Your Answer: Confidence interval

      Correct Answer: Median

      Explanation:

      Box and whisker plots are a useful tool for displaying information about the range, median, and quartiles of a data set. The whiskers only contain values within 1.5 times the interquartile range (IQR), and any values outside of this range are considered outliers and displayed as dots. The IQR is the difference between the 3rd and 1st quartiles, which divide the data set into quarters. Quartiles can also be used to determine the percentage of observations that fall below a certain value. However, quartiles and ranges have limitations because they do not take into account every score in a data set. To get a more representative idea of spread, measures such as variance and standard deviation are needed. Box plots can also provide information about the shape of a data set, such as whether it is skewed or symmetric. Notched boxes on the plot represent the confidence intervals of the median values.
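
The quartile and 1.5 × IQR outlier rules described above can be sketched with the standard library; the sample below is hypothetical:

```python
from statistics import quantiles

data = [2, 4, 4, 5, 6, 7, 8, 9, 11, 30]  # hypothetical sample with one extreme value

q1, q2, q3 = quantiles(data, n=4)  # quartiles; q2 is the median
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr  # whisker limits
outliers = [x for x in data if x < low or x > high]  # plotted as dots
```

Note that `statistics.quantiles` interpolates between observations (the default "exclusive" method), so the quartiles need not be raw data points.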

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      32.4
      Seconds
  • Question 11 - What study method would be most suitable for a researcher tasked with comparing...

    Incorrect

    • What study method would be most suitable for a researcher tasked with comparing the cost-effectiveness of olanzapine and haloperidol in reducing symptom severity of schizophrenia, as measured by the Positive and Negative Syndrome Scale?

      Your Answer: Cost-benefit analysis

      Correct Answer: Cost-effectiveness analysis

      Explanation:

      The task assigned to the researcher is to conduct a cost-effectiveness analysis, which involves comparing two interventions based on their costs and their impact on a single clinical measure of effectiveness, specifically the reduction in symptom severity as measured by the PANSS.

      Methods of Economic Evaluation

      There are four main methods of economic evaluation: cost-effectiveness analysis (CEA), cost-benefit analysis (CBA), cost-utility analysis (CUA), and cost-minimisation analysis (CMA). While all four methods capture costs, they differ in how they assess health effects.

      Cost-effectiveness analysis (CEA) compares interventions by relating costs to a single clinical measure of effectiveness, such as symptom reduction or improvement in activities of daily living. The cost-effectiveness ratio is calculated as total cost divided by units of effectiveness. CEA is typically used when CBA cannot be performed due to the inability to monetise benefits.

      Cost-benefit analysis (CBA) measures all costs and benefits of an intervention in monetary terms to establish which alternative has the greatest net benefit. CBA requires that all consequences of an intervention, such as life-years saved, treatment side-effects, symptom relief, disability, pain, and discomfort, are allocated a monetary value. CBA is rarely used in mental health service evaluation due to the difficulty in converting benefits from mental health programmes into monetary values.

      Cost-utility analysis (CUA) is a special form of CEA in which health benefits/outcomes are measured in broader, more generic ways, enabling comparisons between treatments for different diseases and conditions. Multidimensional health outcomes are measured by a single preference- or utility-based index such as the Quality-Adjusted Life-Year (QALY). QALYs are a composite measure of gains in life expectancy and health-related quality of life. CUA allows for comparisons across treatments for different conditions.

      Cost-minimisation analysis (CMA) is an economic evaluation in which the consequences of competing interventions are the same, and only inputs, i.e. costs, are taken into consideration. The aim is to decide the least costly way of achieving the same outcome.

      Costs in Economic Evaluation Studies

      There are three main types of costs in economic evaluation studies: direct, indirect, and intangible. Direct costs are associated directly with the healthcare intervention, such as staff time, medical supplies, cost of travel for the patient, childcare costs for the patient, and costs falling on other social sectors such as domestic help from social services. Indirect costs are incurred by the reduced productivity of the patient, such as time off work, reduced work productivity, and time spent caring for the patient by relatives. Intangible costs are difficult to measure, such as pain or suffering on the part of the patient.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      55.3
      Seconds
  • Question 12 - Six men in a study on the sleep inducing effects of melatonin are...

    Correct

    • Six men in a study on the sleep inducing effects of melatonin are aged 52, 55, 56, 58, 59, and 92. What is the median age of the men included in the study?

      Your Answer: 57

      Explanation:

      – The median is the point with half the values above and half below.
      – In the given data set, there are an even number of values.
      – The median value is halfway between the two middle values.
      – The middle values are 56 and 58.
      – Therefore, the median is (56 + 58) / 2.

      Measures of Central Tendency

      Measures of central tendency are used in descriptive statistics to summarize the middle or typical value of a data set. There are three common measures of central tendency: the mean, median, and mode.

      The median is the middle value in a data set that has been arranged in numerical order. It is not affected by outliers and is used for ordinal data. The mode is the most frequent value in a data set and is used for categorical data. The mean is calculated by adding all the values in a data set and dividing by the number of values. It is sensitive to outliers and is used for interval and ratio data.

      The appropriate measure of central tendency depends on the measurement scale of the data. For nominal and categorical data, the mode is used. For ordinal data, the median or mode is used. For interval data with a normal distribution, the mean is preferable, but the median or mode can also be used. For interval data with a skewed distribution, the median is used. For ratio data, the mean is preferable, but the median or mode can also be used for skewed data.

      In addition to measures of central tendency, the range is also used to describe the spread of a data set. It is calculated by subtracting the smallest value from the largest value.
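
Using the ages from the question, the median calculation can be checked directly:

```python
from statistics import median

ages = [52, 55, 56, 58, 59, 92]
m = median(ages)  # even count, so halfway between 56 and 58 -> 57.0
```

Note how the 92-year-old outlier leaves the median untouched, while the mean would be pulled upward.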

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      41.5
      Seconds
  • Question 13 - What is the term used to describe a scenario where a study participant...

    Correct

    • What is the term used to describe a scenario where a study participant alters their behavior due to the awareness of being observed?

      Your Answer: Hawthorne effect

      Explanation:

      Simpson’s Paradox is a real phenomenon where the comparison of association between variables can change direction when data from multiple groups are merged into one. The other three options are not valid terms.

      Types of Bias in Statistics

      Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.

      There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when the gathered information about exposure, outcome, or both is incorrect due to an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, the Hawthorne effect, and the ecological fallacy are all subtypes of information bias.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      32.1
      Seconds
  • Question 14 - How does the prevalence of a condition impact a particular aspect? ...

    Correct

    • How does the prevalence of a condition impact a particular aspect?

      Your Answer: Positive predictive value

      Explanation:

      The characteristics of precision, sensitivity, accuracy, and specificity are not influenced by the prevalence of the condition and remain stable. However, the positive predictive value is affected by the prevalence of the condition, particularly in cases where the prevalence is low. This is because a decrease in the prevalence of the condition leads to a decrease in the number of true positives, which in turn reduces the numerator of the PPV equation, resulting in a lower PPV. The formula for PPV is TP/(TP+FP).
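
The dependence of PPV on prevalence can be demonstrated by computing PPV from sensitivity, specificity, and prevalence (the values below are hypothetical):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value from prevalence-weighted 2x2 cell fractions."""
    tp = sensitivity * prevalence              # true-positive fraction of population
    fp = (1 - specificity) * (1 - prevalence)  # false-positive fraction
    return tp / (tp + fp)

common = ppv(0.9, 0.9, 0.10)  # prevalence 10% -> PPV ~ 0.50
rare = ppv(0.9, 0.9, 0.01)    # prevalence 1%  -> PPV ~ 0.08, same test performance
```

The test characteristics are identical in both calls; only the prevalence changes, yet the PPV collapses in the low-prevalence setting.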

      Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two-by-two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      18.9
      Seconds
  • Question 15 - A new screening test is developed for Alzheimer's disease. It is a cognitive...

    Correct

    • A new screening test is developed for Alzheimer's disease. It is a cognitive test which measures memory; the lower the score, the more likely a patient is to have the condition. If the cut-off for a positive test is increased, which one of the following will also be increased?

      Your Answer: Specificity

      Explanation:

      Raising the threshold for a positive test outcome will result in a reduction in the number of incorrect positive results, leading to an improvement in specificity.

      Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two-by-two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      387.6
      Seconds
  • Question 16 - Which of the following statements accurately describes relative risk? ...

    Incorrect

    • Which of the following statements accurately describes relative risk?

      Your Answer: Relative risk = 1 / odds ratio

      Correct Answer: It is the usual outcome measure of cohort studies

      Explanation:

      The relative risk is the typical measure of outcome in cohort studies. It is important to distinguish between risk and odds. For example, if 20 individuals out of 100 who take an overdose die, the risk of dying is 0.2, while the odds are 0.25 (20/80).

      Measures of Effect in Clinical Studies

      When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.

      To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of the risk in the intervention group to the risk in the control group.

      The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
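      The risk/odds distinction can be made concrete in Python. The first pair of numbers reproduces the overdose example from the explanation; the cohort counts in the second part are made up purely to show the RR and OR formulas side by side:

```python
# Risk vs odds, using the overdose example: 20 of 100 people die.
deaths, total = 20, 100
risk = deaths / total              # 20/100 = 0.2
odds = deaths / (total - deaths)   # 20/80  = 0.25

# Relative risk and odds ratio for a hypothetical cohort:
# intervention: 10 events in 100; control: 20 events in 100
rr = (10 / 100) / (20 / 100)        # risk ratio: both values < 1 indicate benefit
odds_ratio = (10 / 90) / (20 / 80)  # odds of event with vs without exposure
```

      Note that the OR (about 0.44) is further from 1 than the RR (0.5); the two only approximate each other when the outcome is rare.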

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      89.9
      Seconds
  • Question 17 - How can the negative predictive value of a screening test be calculated accurately?...

    Correct

    • How can the negative predictive value of a screening test be calculated accurately?

      Your Answer: TN / (TN + FN)

      Explanation:

      Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two-by-two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
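      The NPV formula in the answer, TN / (TN + FN), reads directly off the two-by-two table; a brief sketch with hypothetical counts:

```python
# Hypothetical two-by-two table counts:
TP, FP, FN, TN = 80, 30, 20, 870

npv = TN / (TN + FN)   # negative predictive value: P(no disease | negative test)
ppv = TP / (TP + FP)   # positive predictive value: P(disease | positive test)
```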

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      24.9
      Seconds
  • Question 18 - What is the appropriate denominator for calculating cumulative incidence? ...

    Incorrect

    • What is the appropriate denominator for calculating cumulative incidence?

      Your Answer: The total number of people in a population

      Correct Answer: The number of disease free people at the beginning of a specified time period

      Explanation:

      Measures of Disease Frequency: Incidence and Prevalence

      Incidence and prevalence are two important measures of disease frequency. Incidence measures the speed at which new cases of a disease are emerging, while prevalence measures the burden of disease within a population. Cumulative incidence and incidence rate are two types of incidence measures, while point prevalence and period prevalence are two types of prevalence measures.

      Cumulative incidence is the average risk of getting a disease over a certain period of time, while incidence rate is a measure of the speed at which new cases are emerging. Prevalence is a proportion and is a measure of the burden of disease within a population. Point prevalence measures the number of cases in a defined population at a specific point in time, while period prevalence measures the number of identified cases during a specified period of time.

      It is important to note that prevalence is equal to incidence multiplied by the duration of the condition. In chronic diseases, the prevalence is much greater than the incidence. The incidence rate is stated in units of person-time, while cumulative incidence is always a proportion. When describing cumulative incidence, it is necessary to give the follow-up period over which the risk is estimated. In acute diseases, the prevalence and incidence may be similar, while for conditions such as the common cold, the incidence may be greater than the prevalence.

      Incidence is a useful measure to study disease etiology and risk factors, while prevalence is useful for health resource planning. Understanding these measures of disease frequency is important for public health professionals and researchers in order to effectively monitor and address the burden of disease within populations.
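      The definitions above translate directly into arithmetic; all figures here are invented for illustration:

```python
# Cumulative incidence: new cases over a period, divided by the number of
# people who were disease-free at the START of that period.
new_cases = 50
disease_free_at_start = 10_000
cumulative_incidence = new_cases / disease_free_at_start  # a proportion over the follow-up

# Steady-state relationship from the explanation: prevalence = incidence x duration
incidence_rate = 0.002   # hypothetical: 2 new cases per 1,000 person-years
mean_duration = 10       # years; a chronic condition
prevalence = incidence_rate * mean_duration  # 0.02, i.e. 2% of the population
```

      The long duration is what makes the prevalence of a chronic disease so much larger than its incidence.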

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      492.6
      Seconds
  • Question 19 - A researcher wants to compare the mean age of two groups of participants...

    Incorrect

    • A researcher wants to compare the mean age of two groups of participants who were randomly assigned to either a standard exercise program or a standard exercise program + new supplement. The data collected is parametric and continuous. What is the most appropriate statistical test to use?

      Your Answer: Paired t test

      Correct Answer: Unpaired t test

      Explanation:

      The two-sample unpaired t-test is used to examine whether the null hypothesis, that the two populations from which the two random samples are drawn are equivalent, is true or not. When dealing with continuous data that is believed to conform to the normal distribution, a t-test is suitable, making it appropriate here for comparing the mean age of the two groups. In contrast, a paired t-test is used when the data are dependent, meaning there is a direct correspondence between the values in the two samples. This could include the same subject being measured before and after a process change or at different times.
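      A minimal sketch of the unpaired (pooled-variance) t statistic using only the standard library; the age values are hypothetical:

```python
import math
import statistics

# Hypothetical ages in two independently randomised groups:
standard = [34, 41, 29, 38, 45, 33]
supplement = [36, 43, 31, 40, 47, 35]

# Two-sample, equal-variance (unpaired) t statistic:
n1, n2 = len(standard), len(supplement)
m1, m2 = statistics.mean(standard), statistics.mean(supplement)
sp2 = ((n1 - 1) * statistics.variance(standard)
       + (n2 - 1) * statistics.variance(supplement)) / (n1 + n2 - 2)  # pooled variance
t_stat = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# A paired t-test would instead operate on per-subject differences, and is only
# appropriate when the samples are dependent (e.g. the same person before/after).
```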

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      145.6
      Seconds
  • Question 20 - As the occurrence of a condition decreases, what increases? ...

    Correct

    • As the occurrence of a condition decreases, what increases?

      Your Answer: Negative predictive value

      Explanation:

      The prevalence of a condition has an impact on both the PPV and NPV. When the prevalence decreases, the PPV also decreases while the NPV increases.

      Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two-by-two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
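      The prevalence dependence can be demonstrated with Bayes' theorem; the sensitivity, specificity, and prevalence values below are hypothetical:

```python
def ppv(sens, spec, prev):
    # Positive predictive value via Bayes' theorem
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens, spec, prev):
    # Negative predictive value via Bayes' theorem
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

sens, spec = 0.9, 0.9          # hypothetical test characteristics, held fixed
ppv_common = ppv(sens, spec, 0.20)   # prevalence 20%
ppv_rare = ppv(sens, spec, 0.02)     # prevalence 2%
npv_common = npv(sens, spec, 0.20)
npv_rare = npv(sens, spec, 0.02)
# As prevalence falls, PPV falls and NPV rises.
```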

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      33.6
      Seconds
  • Question 21 - What is the calculation that the nurse performed to determine the patient's average...

    Incorrect

    • What is the calculation that the nurse performed to determine the patient's average daily calorie intake over a seven day period?

      Your Answer: Generalised mean

      Correct Answer: Arithmetic mean

      Explanation:

      You don’t need to concern yourself with the specifics of the various means. Simply keep in mind that the arithmetic mean is the one utilized in fundamental biostatistics.

      Measures of Central Tendency

      Measures of central tendency are used in descriptive statistics to summarize the middle or typical value of a data set. There are three common measures of central tendency: the mean, median, and mode.

      The median is the middle value in a data set that has been arranged in numerical order. It is not affected by outliers and is used for ordinal data. The mode is the most frequent value in a data set and is used for categorical data. The mean is calculated by adding all the values in a data set and dividing by the number of values. It is sensitive to outliers and is used for interval and ratio data.

      The appropriate measure of central tendency depends on the measurement scale of the data. For nominal and categorical data, the mode is used. For ordinal data, the median or mode is used. For interval data with a normal distribution, the mean is preferable, but the median or mode can also be used. For interval data with a skewed distribution, the median is used. For ratio data, the mean is preferable, but the median or mode can also be used for skewed data.

      In addition to measures of central tendency, the range is also used to describe the spread of a data set. It is calculated by subtracting the smallest value from the largest value.
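      Python's standard `statistics` module implements all three measures; the week of calorie counts below is invented, with one deliberate outlier day to show why the mean is outlier-sensitive:

```python
import statistics

# One week of hypothetical calorie counts (note the single outlier day):
calories = [1800, 2200, 1900, 2100, 2000, 5000, 2000]

mean = statistics.mean(calories)      # arithmetic mean: sum / count; pulled up by the outlier
median = statistics.median(calories)  # middle value of the sorted data; robust to the outlier
mode = statistics.mode(calories)      # most frequent value
data_range = max(calories) - min(calories)  # largest minus smallest value
```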

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      6134.2
      Seconds
  • Question 22 - What is the accurate formula for determining the likelihood ratio of a negative...

    Incorrect

    • What is the accurate formula for determining the likelihood ratio of a negative test result?

      Your Answer: Specificity / (1 - sensitivity)

      Correct Answer: (1 - sensitivity) / specificity

      Explanation:

      Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two-by-two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
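      Both likelihood ratio formulas, plus the pre-test/post-test odds chain that Fagan's nomogram performs graphically, can be written out directly; the sensitivity, specificity, and pre-test probability are hypothetical:

```python
sens, spec = 0.8, 0.9   # hypothetical test characteristics

lr_neg = (1 - sens) / spec   # likelihood ratio of a negative result (the answer's formula)
lr_pos = sens / (1 - spec)   # likelihood ratio of a positive result

# Post-test odds = pre-test odds x likelihood ratio (Fagan's nomogram in code):
pre_test_prob = 0.10
pre_test_odds = pre_test_prob / (1 - pre_test_prob)
post_test_odds = pre_test_odds * lr_pos
post_test_prob = post_test_odds / (1 + post_test_odds)
```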

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      66.4
      Seconds
  • Question 23 - A team of scientists aims to perform a systematic review and meta-analysis of...

    Incorrect

    • A team of scientists aims to perform a systematic review and meta-analysis of the effects of caffeine on sleep quality. They want to determine if there is any variation in the results across the studies they have gathered.
      Which of the following is not a technique that can be employed to evaluate heterogeneity?

      Your Answer: Chi-square test

      Correct Answer: Receiver operating characteristic curve

      Explanation:

      The receiver operating characteristic (ROC) curve is a useful tool for evaluating the diagnostic accuracy of a test in distinguishing between healthy and diseased individuals. It helps to identify the optimal cut-off point between sensitivity and specificity.

      Other methods, such as visual inspection of forest plots and Cochran’s Q test, can be used to assess heterogeneity in meta-analysis. Visual inspection of forest plots is a quick and easy method, while Cochran’s Q test is a more formal and widely accepted approach.

      For more information on heterogeneity in meta-analysis, further reading is recommended.
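      Cochran's Q can be computed directly from the study effect estimates and their within-study variances; the derived I² statistic (not covered in the explanation above, but commonly reported alongside Q) expresses the proportion of variability attributable to heterogeneity. All numbers here are invented for illustration:

```python
# Hypothetical study effect sizes and their within-study variances:
effects = [0.30, 0.25, 0.60, 0.10]
variances = [0.01, 0.02, 0.01, 0.02]

weights = [1 / v for v in variances]  # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations of each study from the pooled estimate
Q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))

df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100  # percentage of variation due to heterogeneity
```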

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      28.9
      Seconds
  • Question 24 - Which option is not a type of descriptive statistic? ...

    Correct

    • Which option is not a type of descriptive statistic?

      Your Answer: Student's t-test

      Explanation:

      A t-test is a statistical method used to determine if there is a significant difference between the means of two groups. It is a type of statistical inference.

      Types of Statistics: Descriptive and Inferential

      Statistics can be divided into two categories: descriptive and inferential. Descriptive statistics are used to describe and summarize data without making any generalizations beyond the data at hand. On the other hand, inferential statistics are used to make inferences about a population based on sample data.

      Descriptive statistics are useful for identifying patterns and trends in data. Common measures used to describe a data set include measures of central tendency (such as the mean, median, and mode) and measures of variability or dispersion (such as the standard deviation or variance).

      Inferential statistics, on the other hand, are used to make predictions or draw conclusions about a population based on sample data. These statistics are also used to determine the probability that observed differences between groups are reliable and not due to chance.

      Overall, both descriptive and inferential statistics play important roles in analyzing and interpreting data. Descriptive statistics help us understand the characteristics of a data set, while inferential statistics allow us to make predictions and draw conclusions about larger populations.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      50
      Seconds
  • Question 25 - What condition would make it inappropriate to use the Student's t-test for conducting...

    Correct

    • What condition would make it inappropriate to use the Student's t-test for conducting a significance test?

      Your Answer: Using it with data that is not normally distributed

      Explanation:

      T-tests are appropriate for parametric data, which means that the data should conform to a normal distribution.

      Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent of dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and Cochran Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      61.7
      Seconds
  • Question 26 - If the new antihypertensive therapy is implemented for the secondary prevention of stroke,...

    Correct

    • If the new antihypertensive therapy is implemented for the secondary prevention of stroke, it would result in an absolute annual risk reduction of 0.5% compared to conventional therapy. However, the cost of the new treatment is £100 more per patient per year. Therefore, what would be the cost of implementing the new therapy per stroke prevented?

      Your Answer: £20,000

      Explanation:

      The new drug reduces the annual incidence of stroke by 0.5% and costs £100 more than conventional therapy. This means that for every 200 patients treated, one stroke would be prevented with the new drug compared to conventional therapy. The Number Needed to Treat (NNT) is 200 per year to prevent one stroke. Therefore, the annual cost of this treatment to prevent one stroke would be £20,000, which is the cost of treating 200 patients with the new drug (£100 x 200).
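      The arithmetic in the explanation can be sketched as:

```python
arr = 0.005                   # absolute annual risk reduction: 0.5%
nnt = 1 / arr                 # 200 patients treated for one year per stroke prevented
extra_cost_per_patient = 100  # extra cost in GBP per patient per year

cost_per_stroke_prevented = nnt * extra_cost_per_patient  # GBP 20,000
```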

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      102.4
      Seconds
  • Question 27 - A study examines the benefits of adding an intensive package of dialectic behavioural...

    Correct

    • A study examines the benefits of adding an intensive package of dialectic behavioural therapy (DBT) to standard care following an episode of serious self-harm in adolescents. The following results are obtained:
      Percentage of adolescents having a further episode of serious self-harm within 3 months:
      Standard care: 4%
      Standard care and intensive DBT: 3%
      What is the number needed to treat to prevent one adolescent having a further episode of serious self harm within 3 months?

      Your Answer: 100

      Explanation:

      The number needed to treat (NNT) is equal to 100. This means that for every 100 patients treated, one patient will benefit from the treatment. The absolute risk reduction (ARR) is 0.01, which is the difference between the control event rate (CER) of 0.04 and the experimental event rate (EER) of 0.03.
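      The NNT calculation from the explanation, written out as code:

```python
cer = 0.04  # control event rate (standard care)
eer = 0.03  # experimental event rate (standard care + intensive DBT)

arr = cer - eer  # absolute risk reduction = 0.01
nnt = 1 / arr    # number needed to treat = 100
```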

      Measures of Effect in Clinical Studies

      When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.

      To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of the risk in the intervention group to the risk in the control group.

      The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      118.4
      Seconds
  • Question 28 - Which of the following statistical measures does not indicate the spread of variability...

    Incorrect

    • Which of the following statistical measures does not indicate the spread of variability of data?

      Your Answer: Interquartile range

      Correct Answer: Mean

      Explanation:

      The mean, mode, and median are all measures of central tendency.

      Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data.

      However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range.

      The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
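      The dispersion measures named above are all available in Python's `statistics` module; the data set is a small invented sample:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # small hypothetical sample

data_range = max(data) - min(data)             # simplest measure of spread
variance = statistics.pvariance(data)          # population variance
sd = statistics.pstdev(data)                   # population standard deviation, same units as data
q1, q2, q3 = statistics.quantiles(data, n=4)   # quartile cut points
iqr = q3 - q1                                  # interquartile range
```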

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      25.9
      Seconds
  • Question 29 - The researcher conducted a study to test his hypothesis that a new drug...

    Incorrect

    • The researcher conducted a study to test his hypothesis that a new drug would effectively treat depression. The results of the study indicated that his hypothesis was true, but in reality, it was not. What happened?

      Your Answer: Type II error

      Correct Answer: Type I error

      Explanation:

      Type I errors occur when we reject a null hypothesis that is actually true, leading us to believe that there is a significant difference or effect when there is not.

      Understanding Hypothesis Testing in Statistics

      In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.

      The null hypothesis (H0) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that any observed difference reflects a real effect rather than chance. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish a difference in either direction or a change in one particular direction.

      Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. Type I error occurs when the null hypothesis is rejected when it is true, while Type II error occurs when the null hypothesis is accepted when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.

      P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large or larger than the one observed when in reality there is no difference between the two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected; if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, statistical significance does not imply clinical significance: a statistically significant difference may be too small to be clinically meaningful.
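      The meaning of a type I error can be demonstrated by simulation: draw both groups from the same population (so the null hypothesis is true by construction) and count how often a test still "finds" a difference. This sketch uses a simple z-test with a known standard deviation to avoid external dependencies; the sample size and trial count are arbitrary:

```python
import math
import random

random.seed(1)  # reproducible simulation

def p_two_sided(z):
    # Two-sided p-value from the standard normal, via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n, trials, alpha = 30, 2000, 0.05
false_positives = 0
for _ in range(trials):
    # Both groups come from the SAME population, so H0 is true by construction
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)  # sigma known to be 1
    if p_two_sided(z) < alpha:
        false_positives += 1  # a type I error: H0 rejected although it is true

type_i_rate = false_positives / trials  # should sit close to alpha = 0.05
```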

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      121.2
      Seconds
  • Question 30 - What is the term used to describe the study design where a margin...

    Correct

    • What is the term used to describe the study design where a margin is set for the mean reduction of PANSS score, and if the confidence interval of the difference between the new drug and olanzapine falls within this margin, the trial is considered successful?

      Your Answer: Equivalence trial

      Explanation:

      Study Designs for New Drugs: Options and Considerations

      When launching a new drug, there are various study design options available. One common approach is a placebo-controlled trial, which can provide strong evidence but may be deemed unethical if established treatments are available. Additionally, it does not allow for a comparison with standard treatments. Therefore, statisticians must decide whether the trial aims to demonstrate superiority, equivalence, or non-inferiority to an existing treatment.

      Superiority trials may seem like the obvious choice, but they require a large sample size to show a significant benefit over an existing treatment. Equivalence trials define an equivalence margin on a specified outcome, and if the confidence interval of the difference between the two drugs falls within this margin, the drugs are assumed to have a similar effect. Non-inferiority trials are similar to equivalence trials, but only the lower confidence interval needs to fall within the equivalence margin. These trials require smaller sample sizes, and once a drug has been shown to be non-inferior, larger studies may be conducted to demonstrate superiority.

      It is important to note that drug companies may not necessarily aim to show superiority over an existing product. If they can demonstrate that their product is equivalent or even non-inferior, they may compete on price or convenience. Overall, the choice of study design depends on various factors, including ethical considerations, sample size, and the desired outcome.
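      The margin logic behind equivalence and non-inferiority can be reduced to two small predicates on the confidence interval of the between-group difference; the margin value and CI bounds here are hypothetical:

```python
def equivalence_met(ci_low, ci_high, margin):
    # Equivalence: the WHOLE confidence interval for (new drug - comparator)
    # must lie within +/- margin.
    return -margin <= ci_low and ci_high <= margin

def non_inferiority_met(ci_low, margin):
    # Non-inferiority: only the LOWER confidence bound must clear -margin.
    return ci_low >= -margin

margin = 2.0  # hypothetical equivalence margin on mean PANSS reduction
```

      With this framing it is easy to see why non-inferiority is the weaker claim: any CI that satisfies equivalence automatically satisfies non-inferiority, but not vice versa.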

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      106.4
      Seconds

SESSION STATS - PERFORMANCE PER SPECIALTY

Research Methods, Statistics, Critical Review And Evidence-Based Practice (14/30) 47%
Passmed